
Whom Do Bureaucrats Believe? A Randomized

Controlled Experiment Testing Perceptions of

Credibility of Policy Research

Carey Doberstein

More than ever before, analysts in government have access to policy-relevant research and advocacy,

which they consume and apply in their role in the policy process. Academics have historically occupied a

privileged position of authority and legitimacy, but some argue this is changing with the rapid growth of

think tanks and research-based advocacy organizations. This article documents the findings from a

randomized controlled survey experiment using policy analysts from the British Columbia provincial

government in Canada to systematically test the source effects of policy research in two subject areas:

minimum wage and income-splitting tax policy. Subjects were asked to read research summaries of these

topics and then assess the credibility of each article, but for half of the survey respondents the affiliation/

authorship of the content was randomly reassigned. The experimental findings lend evidence to the

hypothesis that academic research is perceived to be substantially more credible than think tank or

advocacy organization research, regardless of its content. That increasingly externalized policy advice

systems are not a pluralistic arena of policy research and advice, but instead subject to powerful

heuristics that bureaucrats use to sift through policy-relevant information and advice, demands added

nuance to both location and content-based policy advisory system models.

KEY WORDS: expert credibility, policy advisory systems, social science experimentation

Introduction

More than ever before, analysts and decision makers in government are supplied

with policy-relevant research and advocacy from nongovernmental actors in aca-

demia, think tanks, and research-based advocacy organizations, which they con-

sume, interpret, and apply as actors in various roles in the policy process

(Dobuzinskis, Howlett, & Laycock, 2007; Eichbaum & Shaw, 2007; Heinrichs, 2005;

Maley, 2000; Renn, 1995). It may be hard to believe, but over 30 years ago Doern and

Phidd (1983) were worried that policymakers might be overwhelmed by the expan-

sive policy research conducted and injected into the public domain to influence

ongoing debates. Since then, applied public policy research has “enormously

expanded and differentiated” (Heinrichs, 2005, p. 41), such that ever more actors

proffer expertise and legitimacy to influence policy development, and constitute

what is broadly defined as the policy advisory system.

Downs (1957, referenced in Lachapelle, Montpetit, & Gauvin, 2014) long ago

claimed that citizens and policymakers “rely on information and analysis provided

by agents with legitimate claims to expertise in a given issue area” (p. 674). Yet,

recent evidence suggests that the influence of experts on the policy process and deci-

sion making depends quite critically on perceptions of the credibility of the source of

that expertise (Lachapelle et al., 2014). Academics have historically occupied a privi-

leged position of authority and legitimacy in the public domain as it relates to policy

research, but some argue that this is changing with the growth of think tanks and

research-based advocacy groups (Stone, 1996, 2007). Earlier conceptualizations of

think tanks and research-based advocacy organizations suggested they were the

“bridge” between the policymaker and the academic, often merely translating

“research into easy-to-read form for policymakers to absorb” (McGann & Johnson,

2005, p. 12). Yet, recent scholarship suggests that the bridge metaphor ought to be

replaced by an analytical focus on the competition of ideas and for influence, as the mag-

nitude and sophistication of non-university–derived research and policy advice con-

tinues to grow (Denham & Garnett, 2006; Stone, 2007; Weingart, 1999; Wells, 2012).

Some policy researchers and organizations are outwardly ideological or partisan,

while others remain technocratic or scholarly in character, which may affect—fairly

or unfairly—how they are assessed in terms of credibility. Indeed, some academics

have warned of the questionable rigor of some think tank or advocacy organization

research, given the different norms within the sphere of the university—in particular

rigorous research training and blind peer review—in contrast to the public relations

universe to which these other organizations typically belong (Ceccarelli, 2011; Jac-

ques, Dunlap, & Freeman, 2008), while others worry about “self-destructive and de-

legitimizing” dynamics with the inflationary use of scientific advice in the policy

process, as policymakers confront conflicting scientific advice (Weingart, 1999).

Nevertheless there are certainly no universal conclusions one can make regarding

the credibility or quality of research in any of the spheres, whether in universities,

think tanks, or advocacy organizations.

Yet, what is an important consideration is how the policy research emerging from

these various spheres and institutions is perceived in terms of credibility by an impor-

tant—yet rarely tested—subset of consumers of that research: the policy analysts who

are often charged with interpreting and applying the research findings from a variety

of sources in their policy analysis and briefings that support government planning

and decision making within the policy advisory system (Hafner-Burton, Hughes, &

Victor, 2013). This article therefore asks the following question in the context of Can-

ada: Do analysts in government have systematically different perceptions of the credi-

bility of policy research derived from different sources? This will help us formulate

key building blocks that relate to the nature of the influence of academics, think tanks,

and research-based advocacy groups within increasingly externalized policy advisory

systems, which remains highly disputed in the literature (Pal, 2010).

This article aims to address this gap in our understanding by presenting the

results of a randomized controlled survey experiment that exposed policy analysts

to policy research with the same set of research content and conclusions, but altered

the attribution (i.e., authorship) of the research for one group as a means to separate

out the source and content effects of the policy research. The article proceeds as fol-

lows. First, I review the literature on how experts and their policy research is per-

ceived and interpreted by policy analysts to reveal that while much work has been

done to understand the nature of their credibility and influence in policy advisory sys-

tems, fundamental questions regarding the nature and the magnitude of source

effects remain unanswered. Second, the experiment conducted in this study is

described and the results are presented, demonstrating that there are substantial and

statistically significant shifts in the assessments of the credibility of the policy research

when only the attribution of the author and affiliation is randomly changed. The final

section considers the implications of these findings on our conceptualization of exter-

nalization of policy advice trends, as well as the weight of the credibility heuristic,

and identifies future research opportunities to build on the tentative findings

observed in this study.

What Do We Know About the Influence of Policy Research?

Policymaking has always relied on expertise and scientific knowledge, but many

have argued that the interaction and engagement between experts and policymakers

is ever increasing in importance due to the complexity of knowledge required to

address contemporary public policy problems (Doberstein, 2016; Heinrichs, 2005;

Majchrzak, 1986; Renn, 1995). Various external sources of policy advice have been

found to be significant sources of substantive policy advisory content used by policy-

makers to support existing policy positions or as sources of policy change (Bertelli &

Wenger, 2009; McGann & Sabatini, 2011). Previous research in diverse policy areas

suggests that policymakers draw on numerous and diverse sources of information

and expertise, including from professional or academic journals, think tanks, and

interest group reports (Carter, 2013; Levin, 1991; Light & Newman, 1992; Majchrzak,

1986; Negev & Teschner, 2013; Weible, 2008).

Expert information is defined as content generated by professional, scientific,

and technical methods of inquiry, and may be used for learning, political, or instru-

mental purposes, and constitutes the broader policy advisory system (Weible, 2008).

Policymakers rely on experts because of the need to integrate such different types of

knowledge and claims, and the relative influence of experts will depend on the issue

(technical vs. moral), and what some have called “national styles” or cultures of

expert legitimacy or involvement in policymaking (Renn, 1995; see also Lindvall,

2009). In the Canadian context, in which this study is situated, the role of experts

and expertise fits the adversarial model, which is characterized by a competitive con-

text of information and knowledge exchange.

Most existing conceptual models of policy advice systems associate different

levels of influence with the location of advisors either inside or outside government

(Halligan, 1995). These “location-based” models have served as the starting point for

investigations into the roles played by think tanks (Lindquist, 1998), public managers

(Howlett, 2011), and others in policy formulation (Wilson, 2006). But Craft and Howl-

ett (2012) theorize that it is not just the source of the policy advice that will shape its

influence, but also the content of advice, as more (and different) actors have entered

into the policy advice system. Whereas Craft and Howlett (2013, p. 190) have focused

recently on theorizing the “content dimension” to location-based models of policy

advisory systems, much remains unclear about the other side of the coin: the nature

and magnitude of source effects in the context of expanding policy advisory systems.

An exploration of the nature and magnitude of source effects in this context ought

to begin with a recognition of bounded rationality among policy analysts, and in par-

ticular the use of shortcuts, or heuristics, as part of their policy analysis and interpreta-

tion of applied policy research (Simon, 2000). The nature and use of such heuristics do

not emerge from thin air, but are informed from a wide range of factors, including

norms of the professional discipline, previous experience, and institutional practices.

In this context, research has found that trust in scientific (academic) evidence is high

and widespread (Mooney & Schuldt, 2008), legitimized from popular perceptions

that science is neutral, dispassionate, and therefore credible (Ozawa, 1991).1

In this context, credibility is defined as “an individual’s assessment of the believ-

ability of an argument,” and is a function of the evidence presented, the logic of the

argument, as well as other factors that may have little to do with the rigor or logic of

the inquiry, such as cues associated with authority and credentials (Landsbergen &

Bozeman, 1987). Indeed, in an early experimental study, Landsbergen and Bozeman

(1987) found that the logic of an argument had a much smaller correlation to an indi-

vidual accepting policy advice than experience and reputation did, suggestive of a

credibility heuristic in use. Perceptions of status, authority, and expertise can there-

fore be considered filters to make information processing easier for policymakers.

As such, academics have historically been afforded a privileged position of author-

ity and legitimacy as it relates to expertise and policy research, but some argue that

this is changing with the growth of think tanks (Stone, 1996, 2007) and, more recently,

research arms of interest or advocacy groups (Rich, 2005). One of the most prominent

think tanks in Canada, the Fraser Institute, routinely makes strong claims about influ-

ence and “high impact,” for example in their most recent annual report, citing over

26,000 media mentions of the organization in the past year (Fraser Institute, 2014) and

that “on many occasions, too frequent to be quoted here, the Institute’s policy recom-

mendations have been adopted by Canadian governments” (Fraser Institute, 2004, p.

2). Yet the influence of policy-relevant research derived from think tanks and policy

institutes, arguably the most prominent sources of nongovernmental policy advice, is

highly disputed, with most observers concluding either that they “rarely do” (Pal,

2010; see also Lindquist, 1993) or that it is simply too complicated or contingent on the

policy issue to say in any general terms (Abelson, 2009; Conford, 1990). In the Austra-

lian context, Head, Ferguson, Cherney, and Boreham (2014) found that fewer than half

of professional policy analysts surveyed at multiple levels of government viewed think

tanks as important sources of knowledge and information assisting decision making in

their unit—whereas 70 percent suggested university researchers were important. Still

others contend that “dollar for dollar, think tanks attract greater attention than most

any other organizational source of expertise” (quoted in Weidenbaum, 2010, p. 135).

Indeed, in surveys of government officials in the United States, Rich (2005) finds

that think tanks are viewed by large majorities as “somewhat influential,” although

their credibility is shaped by their ideological character and behavior in terms of

marketing and means of distribution of the policy research and findings. Further,

Rich (2005) found that the most credible think tanks are those with the least identifia-

ble ideological orientation and those that are not marketing-oriented—that is, pack-

aged for easy consumption—and are often more likely to be grouped with university

scholars in terms of (higher) credibility. The research products of interest or advo-

cacy groups are, by large majorities (75 percent), viewed as less credible than think

tanks, which in part may explain, according to Rich (2005), the rapid growth of think

tanks because their funders believe decision makers would pay more attention to

them than lobbyists. Yet, there is contrasting evidence in the earlier literature that

suggests that academic research was utilized less and viewed as less helpful to gov-

ernment officials than nonscholarly sources of information such as those research

products from advocacy groups (Light & Newman, 1992). Thus, we ought to care

about credibility because it is an essential factor and building block toward better

understanding and measuring the much more complicated question of influence

(Lindvall, 2009) and research utilization (Weiss, 1979) in external policy advisory

systems.

While there may not be agreement across previous studies on how applied

policy research is perceived and used within government, some worry about the

broader effects of the explosion of applied policy research from outside the acad-

emy. Heinrichs (2005) argues that the expansion of scientific and trans-scientific evi-

dence—defined as expertise related to political problems—has “weakened scientific

authority and its legitimation function” (p. 41). This may be in part due to a (wel-

comed) recognition among policymakers that scientific/academic evidence can never

be devoid of political values (Blyth, 2002), but may also stem from the fact

that policy-relevant scientific knowledge is increasingly placed in ideological or

explicitly political frames by trans-scientific think tanks and the research arms of

advocacy organizations (Heinrichs, 2005; Jasanoff, 1990; Wesselink, Buchanan, Geor-

giadou, & Turnhout, 2013). When scientific knowledge is linked to “interests (in pol-

icymaking), it [can be] evaluated as supportive, contradictory or even dangerous”

(Weingart, 1999, p. 156). Thus we see the emergence of “report wars” (Wesselink

et al., 2013) amid the widespread perception in government circles that “you can’t

really play the policy game unless you have a study” (quoted by Rich, 2005, p. 74).

In the context of this competitive space of experts—from academia, think tanks,

or advocacy organizations—in the policymaking process, it becomes important to

measure and compare how they are perceived in terms of credibility by those who

draw on their expertise, knowledge, and arguments as part of the broader policy

advisory system. This study therefore tests the following two hypotheses through a

randomized controlled experimental design:

1. Policy research attributed to university academics is viewed as more credible

than think tank and advocacy group policy research, regardless of the actual

content of the research product.

2. Policy research attributed to sources that are less ideologically oriented is

viewed as more credible than those that are more ideologically oriented, regard-

less of the actual content of the research product.

The above hypotheses are informed by the previous studies articulated above

that have discovered the use of credibility heuristics by policy actors (Cottam, Mas-

tors, Preston, & Dietz, 2010), and suggestions that academic research may occupy

a privileged space in this domain largely stemming from a perception of neutrality,

rigorous research training, and blind peer review protocols (Ceccarelli, 2011; Jacques

et al., 2008; Ozawa, 1991; Rich, 2005), although one increasingly challenged in terms

of legitimacy by other institutions and sources of policy research, particularly those

nonscholarly research organizations that are not explicitly ideological (Hart & Vro-

men, 2008; Rich, 2005; Sherrington, 2000; Stone, 1996; Weidenbaum, 2010). Discover-

ing systematic differences in credibility across types of sources would reveal a

powerful heuristic at play that connects to recent work on policy advisory systems,

and in particular provide nuance to our understanding of the response to broader

externalization dynamics among critical gatekeepers in the policy process (Craft &

Howlett, 2013).

Experimental Method

How do we estimate how policy research is consumed and interpreted by ana-

lysts in government? Existing research has tended to estimate the influence of policy

researchers or organizations by analyzing document downloads from websites or

citations of research in media reports, or simply asking government officials in sur-

veys and interviews (Abelson, 2009; McNutt & Marchildon, 2009; Rich & Weaver,

2000), which are helpful in broad terms, but demand more precision with respect to

how the research is interpreted and processed by one of its key audiences: policy

analysts in government. This study confronts the estimation problems by devising

and conducting a randomized controlled experiment to systematically test the source

effects of policy research, while holding the content fixed. An experimental study is

uniquely positioned to leverage earlier observational findings in this literature

toward a more concrete and refined understanding of how policy research is inter-

preted within government, and thus how it fits within the broader policy advisory

system.

The experiment is designed as follows. First, two policy issues were selected:

minimum-wage policy and income-splitting tax policy, both issue areas that have

heightened salience, certainly in Canadian politics, but indeed elsewhere, in recent

years.2 Both policy issues are concerned primarily with policy instruments, rather

than policy goals, which much of the literature suggests is where experts are likely

to be influential (Hall, 1993; Lindvall, 2009). Once the two issue areas were selected,

policy research situated in a Canadian context was scanned and assembled from aca-

demic, think tank, and advocacy group sources written in the past five years. For

minimum-wage policy and income-splitting tax policy, 22 articles were identified

and ultimately reduced to five and six, respectively, with the aim of presenting a

diversity of analysis and conclusions.3 The primary criteria for selecting the final list

of articles and their authors and organizations were: the prominence of the organiza-

tion/institution, the currentness of the research, and the diversity of argument and

conclusions.4

Table A1 in the Appendix provides the reference information for the articles and

their sources that are used in the study. Asking subjects to read and assess five or six

full research articles would represent an extraordinary burden and would almost

certainly lead to large-scale attrition from the survey, so instead the conclusions for

each of the studies were excerpted and were presented to subjects as “research sum-

maries.”5 While there is some risk that presenting limited content to subjects may

incentivize them to rely on source cues and thus overstate source effects, this poten-

tial bias will affect both control and treatment groups equally (and thus cancel out),

and was viewed as essential to avoid the larger problem of significant attrition from

the survey.

The academic sources used in the experimental survey are from the University of

Toronto (UofT), University of Calgary (UofC), and York University, all major research

universities in Canada. Within the public consciousness, few could easily attach an

ideological label to UofT, whereas some identify UofC with a more right-wing reputa-

tion, and York University with a more left-wing reputation. The think tanks tested in

the study are among the most prominent in the country, and vary according to ideo-

logical leanings, with the Fraser Institute the most outwardly ideological on the right,

followed by the CD Howe Institute right of center, and the Broadbent Institute and

the Canadian Centre for Policy Alternatives (CCPA) the most outwardly ideological

on the left. Among research-based advocacy groups, the Wellesley Institute and the

Canadian Labour Congress are associated with social justice orientations, whereas the

Canadian Federation of Independent Business and the Institute on Marriage and Fam-

ily Canada are associated with more conservative advocacy positions.

This study is characteristic of a split-ballot experiment where groups of subjects

to a survey receive different versions of the survey on a random assignment basis

(Van de Walle & Van Ryzin, 2011). The subjects of this experiment were public sector

policy analysts in the British Columbia provincial government, invited to participate

in the survey that would ask them to read and assess policy research from a variety

of sources, although they were not made aware that it was experimental in nature until

all surveys had been completed. The sample of policy analysts was devised in response

to calls from social science experimentalists to move away from testing convenience

samples of students or citizens to testing political and bureaucratic elites themselves,

as they respond quite differently (Hafner-Burton et al., 2013).6 Subjects were

recruited to participate in the study if they met the following criteria: a job title of

analyst, advisor, research officer, planner, economist, manager, or director. The crite-

ria were selected to include those public servants who are most likely to encounter

research as part of their job, and thus exclude operational public servants and

administrative support staff.7

The recruitment criteria identified 1,108 subjects from the online BC Government

directory (https://dir.gov.bc.ca) from all ministries and departments. The 1,108

potential subjects were randomly assigned to one of the four groups: Minimum

Wage (control), Minimum Wage (treatment), Income Splitting (control), and Income

Splitting (treatment), using random number generating software. The survey was

sent via email to all potential subjects and was completed online. The response rate

across all groups averaged 23.3 percent, for a total sample size of completed surveys

of 258.8 All subjects were first asked questions relating to demographics, their

engagement with research as part of their job, as well as their personal view of the

respective policy issue, as means to test the randomization procedure for balance

across groups, as well as for subsequent theoretically informed covariate analysis

that may shape a subject’s response to policy research from different sources.
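As a point of reference for the assignment step described above, here is a minimal Python sketch of how 1,108 recruited subjects could be randomized into the four groups; the subject identifiers and the seed are illustrative assumptions, not details taken from the study.

```python
import random

# Hypothetical stand-ins for the 1,108 contacts drawn from the BC Government directory.
subjects = [f"subject_{i:04d}" for i in range(1, 1109)]

groups = [
    "Minimum Wage (control)",
    "Minimum Wage (treatment)",
    "Income Splitting (control)",
    "Income Splitting (treatment)",
]

random.seed(42)           # fixed seed only so the sketch is reproducible
random.shuffle(subjects)  # random ordering stands in for random number assignment

# Deal the shuffled list into the four groups round-robin.
assignment = {group: subjects[i::4] for i, group in enumerate(groups)}

for group, members in assignment.items():
    print(group, len(members))
```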

The control groups then received the research summaries with correct attribu-

tion of source/affiliation and the treatment groups were exposed to the same

research summaries, but with one of the alternative (false) affiliations used in the

study. For example, for the subjects in minimum wage policy group, those in the

control group received the University of Toronto article with the correct label,

whereas those in the experimental group received the University of Toronto article

content but attributed to the Fraser Institute (ideologically right-wing think tank). To

develop the treatments, the researcher assigned new (false) authorship labels to the

five research article summaries in the minimum wage policy experiment and six

articles in the income splitting policy experiment.9 The order in which the research

summaries were presented to each respondent was randomized. Table 1 outlines the

control and treatment conditions for both minimum wage and income splitting

groups of subjects.

Table 1. Control and Treatment Conditions for Minimum Wage and Income Splitting Groups of Subjects

Minimum Wage

  Control (label = content)    Treatment (label / content)
  UofT                         CCPA / UofT
  Fraser Institute             UofT / Fraser Institute
  CCPA                         CFIB / CCPA
  Wellesley Institute          Fraser Institute / Wellesley Institute
  CFIB                         Wellesley Institute / CFIB

Income Splitting

  Control (label = content)    Treatment (label / content)
  UofC                         CLC / UofC
  YorkU                        Broadbent Institute / YorkU
  Broadbent Institute          UofC / Broadbent Institute
  CD Howe Institute            IMFC / CD Howe Institute
  CLC                          CD Howe Institute / CLC
  IMFC                         YorkU / IMFC

CFIB: Canadian Federation of Independent Business; CLC: Canadian Labour Congress; IMFC: Institute for Marriage and Family Canada.

The primary outcome measure is the credibility assessment of each research article summary by subjects in both control and treatment groups. This outcome is measured twice: the first time immediately after reading the respective research article summary (a multiple-choice ordinal variable: very credible, somewhat credible, somewhat not credible, not credible at all; with no option for neutral or other), and the second time after all research article summaries have been read, when the subject is asked to rank the articles against each other (subjects were presented with a pull-down menu of ranking numbers; no option for ties). By comparing the responses from the control and experimental groups in relation to each other, as well as covariates, using ordinal logistic regression, we are able to identify and systematically differentiate how the source of that research shapes their perceptions of the credibility of the analysis and findings.
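To make the treatment construction concrete, the sketch below mirrors the minimum-wage pairings in Table 1: each summary keeps its correct label in the control condition, receives the false label in the treatment condition, and the presentation order is shuffled per respondent. The content identifiers and the function are hypothetical, assumed here purely for illustration.

```python
import random

# Correct attribution used in the control condition (minimum-wage articles).
CONTROL_LABELS = {
    "uoft_summary": "University of Toronto",
    "fraser_summary": "Fraser Institute",
    "ccpa_summary": "CCPA",
    "wellesley_summary": "Wellesley Institute",
    "cfib_summary": "CFIB",
}

# False attribution used in the treatment condition, following Table 1:
# UofT content -> CCPA label, Fraser Institute content -> UofT label, and so on.
TREATMENT_LABELS = {
    "uoft_summary": "CCPA",
    "fraser_summary": "University of Toronto",
    "ccpa_summary": "CFIB",
    "wellesley_summary": "Fraser Institute",
    "cfib_summary": "Wellesley Institute",
}


def build_survey(condition, rng):
    """Return (displayed label, content id) pairs in a randomized presentation order."""
    labels = CONTROL_LABELS if condition == "control" else TREATMENT_LABELS
    items = [(label, content) for content, label in labels.items()]
    rng.shuffle(items)  # presentation order was randomized for each respondent
    return items


print(build_survey("treatment", random.Random(1)))
```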

Summary of Experimental Findings

First, randomization inference was conducted to determine whether the

observed covariates are balanced across the treatment and control groups by regress-

ing the treatment condition against the covariates. The only covariate imbalance

within the range of 10 percent statistical significance was on the pre-test question for

the subjects assigned to the control condition in the income-splitting group. Those

subjects who answered that an income-splitting tax policy is “Good” (very good and

somewhat good, collapsed into one variable) were less likely to be in the treatment

group. Note that while covariate imbalance is not itself a sign of improper random-

ization—and is controlled for in subsequent regression analysis—it is useful to

inspect to give the researcher confidence that the randomization procedure was exe-

cuted as intended, which Table A2 in the Appendix confirms.
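A minimal sketch of this kind of balance check, assuming simulated respondent-level data and hypothetical column names: the treatment indicator is regressed on the observed covariates, and with successful randomization none of the covariates should systematically predict assignment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for one policy subgroup's respondents.
rng = np.random.default_rng(0)
n = 130
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),         # 1 = treatment, 0 = control
    "female": rng.integers(0, 2, n),
    "grad_degree": rng.integers(0, 2, n),
    "manager": rng.integers(0, 2, n),
    "policy_is_good": rng.integers(0, 2, n),  # pre-test opinion, collapsed
})

# Regress treatment assignment on the covariates.
X = sm.add_constant(df[["female", "grad_degree", "manager", "policy_is_good"]])
balance = sm.Logit(df["treated"], X).fit(disp=False)
print(balance.summary())
```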

First I present the outcome data from the first measure used—the independent

credibility assessment of the policy research—beginning with the minimum-wage

subgroup. The most appropriate test for treatment effects for the first measure is a

proportional-odds model, which can easily incorporate the covariates collected and

examined in the experiment, as well as provide easily understood metrics of the

scale of treatment effects.10 Recall that the control group is presented article content

with accurate attribution, whereas the experimental group is presented the same arti-

cle content but with false attribution (e.g., Fraser Institute think tank content attrib-

uted to UofT, etc.). Examining the outcome data suggests at times dramatic effects of

the treatment, beginning first with the minimum-wage subgroups.

Tables A3 and A4 in the Appendix present the full results of the ordinal logistic

regression for the minimum-wage and income-splitting subgroups, respectively, but

the measured treatment effects are transformed into easily interpretable results in

Table 2.
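For readers who want the mechanics behind these tables, the sketch below shows a proportional-odds (ordinal logit) fit of the credibility rating on the treatment indicator and covariates, with exponentiated coefficients reported as odds ratios. It assumes the OrderedModel estimator from statsmodels and hypothetical column names; it is an illustration of the technique, not the article's exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulated stand-in data: one row per respondent for a single article summary.
rng = np.random.default_rng(1)
n = 130
df = pd.DataFrame({
    "credibility": rng.integers(0, 4, n),      # 0 = not credible at all ... 3 = very credible
    "treated": rng.integers(0, 2, n),          # 1 = false attribution, 0 = correct attribution
    "female": rng.integers(0, 2, n),
    "grad_degree": rng.integers(0, 2, n),
    "manager": rng.integers(0, 2, n),
    "policy_is_good": rng.integers(0, 2, n),
})

# Ordered categorical outcome, as the proportional-odds model expects.
endog = df["credibility"].astype(pd.CategoricalDtype(categories=[0, 1, 2, 3], ordered=True))
exog = df[["treated", "female", "grad_degree", "manager", "policy_is_good"]]

model = OrderedModel(endog, exog, distr="logit")
result = model.fit(method="bfgs", disp=False)

# Odds ratios: the relative odds of selecting a higher credibility category.
odds_ratios = np.exp(result.params[exog.columns])
print(odds_ratios)
```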

There are two dramatic and statistically significant results in the minimum-wage

subgroup. For the UofT article, when exposed to the treatment (that research given a

CCPA—ideologically left wing—think tank label), there is a 68 percent decrease in

the odds of selecting a higher credibility category for those subjects in the treatment

conditions over the control conditions, with the other variables held constant in the

model.11 The UofT article was thus viewed as highly credible to the control group

(accurately attributed), yet in the experimental group when that same article content

had different attribution, its credibility plummeted. Conversely, when exposed to

treatment conditions (the content label as from UofT) the Fraser Institute—ideologi-

cally right-wing—think-tank article experienced a 292 percent increase in the odds of

selecting a higher credibility category for those subjects in the treatment conditions

(different source attribution) over the control conditions (correct source attribution).

The Fraser Institute think-tank article thus saw a dramatic increase in credibility

when its content was attributed to University of Toronto academics. Note that there

are no systematic and statistically significant correlations among the covariates

tested, including gender, education (graduate degree or no graduate degree), job

(analyst vs. manager), and the pretest question of their opinion on the respective pol-

icy issue.
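To connect these percentages to the underlying odds ratios, a coefficient beta on the treatment indicator implies an odds ratio of e^beta and a percentage change in the odds of (e^beta - 1) x 100. The coefficients below are illustrative back-calculations from the percentages reported in the text, not values taken from the appendix tables:

```latex
\mathrm{OR} = e^{\beta}, \qquad \%\Delta\,\mathrm{odds} = \left(e^{\beta} - 1\right)\times 100
% Reported -68% (UofT content under a CCPA label):
%   e^{\beta} = 0.32 \;\Rightarrow\; \beta \approx \ln 0.32 \approx -1.14
% Reported +292% (Fraser Institute content under a UofT label):
%   e^{\beta} = 3.92 \;\Rightarrow\; \beta \approx \ln 3.92 \approx 1.37
```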

For the income-splitting subgroups, we likewise see some dramatic and statisti-

cally significant treatment effects. The CLC advocacy group saw a 146 percent

increase in the odds of selecting a higher credibility category for those subjects in the

treatment conditions (attributed to the CD Howe—centrist—think tank) over the

control conditions (its own label). The Canadian Labour Congress (CLC) article was

thus substantially more credible to the experimental group, yet in the control group

when that same article content had the correct attribution, its credibility was assessed

as quite low. Even more pronounced in terms of treatment effects, the IMFC (con-

servative) advocacy group article experienced a 344 percent increase in the odds of

selecting a higher credibility category for those subjects in the treatment conditions

(York University label) over the control conditions. Indeed, the average credibility

score of the second advocacy group article (IMFC) was actually negative in the con-

trol group, but increased substantially to a positive score for the experimental group

when presented as from York University.

Note that across both minimum-wage and income-splitting groups, the largest

variation between the control and experimental groups appears associated with

those organizations with the strongest ideological labels—both left and right—

including the Fraser Institute and CCPA think tanks, as well as the Canadian Labour

Congress (CLC) and the Institute on Marriage and Family Canada (IMFC) research-

based advocacy organizations. Further, content attributed to academics was always

ranked higher in terms of credibility than any other source label, although not

always reaching statistical significance.

Table 2. Ordinal Logistic Regression Results for Both Policy Subgroups

Minimum-Wage Policy

  Label                  Content                Treatment Effect
  CCPA                   UofT                   -68%***
  UofT                   Fraser Institute       +292%***
  CFIB                   CCPA                   -44%
  Fraser Institute       Wellesley Institute    -36%
  Wellesley Institute    CFIB                   +32%

Income-Splitting Policy

  Label                  Content                Treatment Effect
  CLC                    UofC                   -21%
  Broadbent Institute    YorkU                  -16%
  UofC                   Broadbent Institute    +39%
  IMFC                   CD Howe Institute      -40%
  CD Howe Institute      CLC                    +146%***
  YorkU                  IMFC                   +344%***

***p < 0.01, **p < 0.05, *p < 0.10.

Recall that after all research articles were read and individually assessed by subjects, they were asked to rank them against each other in terms of credibility. This is

an important additional measure of the dependent variable, as it forces subjects to

explicitly choose an ordered credibility list of the articles presented to them, rather

than merely an independent assessment of credibility for each. As with the other

measure of the dependent variable, we can test treatment effects using a

proportional-odds model, which can incorporate the covariates collected in the

study, as well as provide easily understood metrics of the scale of treatment effects.

Tables A5 and A6 in the Appendix present the full results of the ordinal logistic

regression for the minimum-wage and income-splitting subgroups for this additional

measure of the dependent variable, respectively, but the measured treatment effects are

transformed into easily interpretable results in Table 3. The results reinforce the findings

above in terms of discovering at times dramatic treatment effects, thus signifying strong

source effects of credibility, consistent with the hypotheses articulated above.

A preliminary note about interpreting the data in Table 3: the direction of the

effects appears reversed from the results of the first measure simply because when

measuring the “rank” of an article, a lower score is better (e.g., Rank 1 is better than

Rank 3). There are multiple instances of large and statistically significant results in

the minimum wage subgroup. For the UofT article, when exposed to the treatment

(that research given a CCPA—ideologically left-wing—think-tank label), there is a

180 percent increase in the odds of selecting a worse rank for those subjects in the

treatment conditions over the control conditions (its own label). The same is true for

the CCPA think-tank article, where there is a 248 percent increase in the odds of

selecting a worse rank for those subjects in the treatment conditions (CFIB advocacy

group label) over the control conditions. Conversely, for the Fraser Institute—ideo-

logically right-wing—think tank, there is a 68 percent decrease in the odds of select-

ing a worse rank for those subjects in the treatment conditions (UofT academic label)

over the control conditions. That is, subjects are more likely to select a better rank for

the Fraser Institute in the experimental group, when that research content is attrib-

uted to University of Toronto academics. Similarly, for the CFIB (conservative) advo-

cacy group article there is a 76 percent decrease in the odds of selecting a worse rank

for those subjects in the treatment conditions (Wellesley Institute, social justice-

oriented advocacy group) over the control conditions.

Table 3. Ordinal Logistic Regression Results on Ranked Outcome Measure for Both Policy Subgroups

Minimum-Wage Policy

  Label                  Content                Treatment Effect
  CCPA                   UofT                   +180%**
  UofT                   Fraser Institute       -68%***
  CFIB                   CCPA                   +248%***
  Fraser Institute       Wellesley Institute    +58%
  Wellesley Institute    CFIB                   -76%***

Income-Splitting Policy

  Label                  Content                Treatment Effect
  CLC                    UofC                   +98%**
  Broadbent Institute    YorkU                  +128%**
  UofC                   Broadbent Institute    -65%***
  IMFC                   CD Howe Institute      +137%**
  CD Howe Institute      CLC                    -30%
  YorkU                  IMFC                   -65%**

***p < 0.01, **p < 0.05, *p < 0.10.

We see precisely the same dynamic emerge within the income-splitting sub-

groups. When exposed to treatment conditions, the UofC and York University articles

show a 98 percent and 128 percent increase, respectively, in the odds of selecting a

worse rank for those subjects in the treatment conditions (different source attribution)

over the control conditions. Among the think tank and advocacy group articles, we

see that for the Broadbent Institute (left-wing) and IMFC (conservative) articles there

is a 65 percent and 80 percent decrease, respectively, in the odds of selecting a worse

rank for those subjects in the treatment conditions (labeled as academic articles) over

the control conditions. That is, that think tank and advocacy group content, when

presented as from academics, receives better rankings from policy analyst subjects.

The CD Howe Institute—centrist—think tank experiences a 137 percent increase in

the odds of selecting a worse rank for those subjects in the treatment conditions (IMFC

advocacy group) over the control conditions, demonstrating the general trend that

advocacy groups are viewed as having less credibility than think tanks, which are in

turn viewed as having less credibility than academic sources.

Finally, it is important to note that one might expect that the ideology of the sub-

jects may inform their perceptions of the credibility of different sources of policy

research, particularly that from ideologically identified think tanks and advocacy

organizations. The precise ideology of subjects was not obtained in the survey given

the sensitive nature of asking policy analysts in government about political preferen-

ces.12 Instead, each subject was asked, before they were exposed to the experimental

portion of the survey, whether they thought minimum-wage or income-splitting pol-

icy, respectively, was good or bad on a Likert scale. In this regard, we can indirectly

infer ideology in the regression model (see Tables A3–A6 in the Appendix). Yet in

no case was the pretest normative question on the respective policy statistically sig-

nificantly correlated with treatment effects.

Conclusions and Implications

There are two central findings that emerge from this experimental study that

lend evidence to the two hypotheses identified earlier: (i) academic research has a

privileged position of credibility among policy analysts, followed by think tanks and

then advocacy organizations; and (ii) think tanks and advocacy groups with less ide-

ological orientation demonstrate (higher) credibility (and thus closer to academic

research), whereas strongly ideologically oriented sources receive (much lower) cred-

ibility scores, closer to those afforded to advocacy groups. The implications of these

findings are considered briefly below.

The first major implication of this study relates to the externalization of policy

advisory systems, and how we conceptualize various actors’ roles, and influence on

policymaking. Among recent changes in comparative public administration, the

“externalization” of policy advice is an increasingly identified trend, with scholars

interested in the extent to which partisan political aspects of policy advice have dis-

placed nonpartisan public-sector sources of policy advice. Indeed, as Craft and

Howlett (2013) suggest, the core proposition among policy advisory system models

is that “only some actors—be they internal or external—are able to influence

policymaking and not others” (p. 189). Yet, if major sources of external policy

advice—think tanks and advocacy groups—are falling on deaf ears among policy

professionals in government, this matters a whole lot to how we conceptualize these

recent trends. This randomized controlled experiment allows us to take stock of how

policy analysts are responding to externalization dynamics of policy advisory sys-

tems, as well as provides more nuanced understandings of how such external sour-

ces of policy advice and analysis are perceived within the professional public

service. Whereas some have pointed to the blurring of the boundaries between aca-

demia, think tanks, and advocacy groups in the context of policy advice (Kipping &

Engwall, 2001), this study reveals that when it comes down to the fundamental eval-

uation of credibility and value, that blurriness may be overstated—policy analysts

make decisive distinctions between such sources.

Further, it does not appear as though trans-scientific evidence (i.e., evidence con-

nected to political problems and interests) is undermining the legitimacy of scientific

enterprise among this sample population—the academic contributions retain very

high credibility relative to the other sources. Thus we ought to take a closer and

more critical look at Heinrichs’s (2005) claim of self-destructive and de-legitimizing

dynamics with the inflationary use of differentiated scientific advice in the policy

process, as policymakers confront conflicting scientific advice (see also: Weingart,

1999). In contrast, in the context of expanding and differentiated external policy advi-

sory systems, policy analysts appear to lean disproportionately on academic research

compared to the others. And whereas Craft and Howlett (2012) place think tanks in

the category of “evidenced-based policymaking advice” alongside academics—

though they are careful to point out that not all think tanks are equally equipped to

take on this role—the empirics in this study reveal that professional policy analysts

in government tend to place them in a realm closer to interest or advocacy groups

(short-term, politically oriented policy advice), thus perhaps demanding a partial re-

conceptualization of how we think about external policy advice, and from what or

whose perspective we derive our conceptual typologies.

The shift from location-based to more complex content-based models of policy

advisory systems has its virtues, but what if key gatekeepers in the bureaucracy still

view policy advice from a “technical vs. partisan dichotomy” as the empirics in this

study suggest? Craft and Howlett (2012) depict and categorize policy advice system

structure and behavior along the dimensions of procedural/substantive vs. short-

term/long-term policy content (and away from location-based conceptualizations),

“to understand the role played by different policy actors and the kinds of advice pro-

vided to governments by different advisory systems in contemporary circumstances”

(p. 92). This conceptual division with empirics in this study matters because as Craft

and Howlett (2013) suggest, where we place various actors and institutions “shifts

the attribution of influence in advisory systems to a congruence of the nature of the

policy advice provided and the given issue” (p. 194).

Yet, at the same time, the results from this experiment also demonstrate that

there are risks involved in lumping all think tanks together, as some think tanks are

more independent from economic and political interests than others. The Fraser

Institute and the Broadbent Institute, outwardly ideological think tanks in Canada,

saw their credibility jump when their research content had a different authorship

label. Yet for the CD Howe Institute in this study, which is the least outwardly ideo-

logical think tank, we observe a similar effect to that observed with the academic

articles, characterized by declining credibility of their content when under a

different label. These findings are also aligned with Denham and Garnett’s (2006)

and Rich’s (2005) claim that the more pragmatic and less ideological of the think

tanks in the UK and United States, respectively, are those with the most credibility

and influence. Thus, when conceptualizing think tanks within policy advisory sys-

tems, this study finds resonance with Fraussen and Halpin’s (2016) differentiation

between strategic think tanks—those with high autonomy and research capacity—and

advocacy think tanks—those with low autonomy and high capacity—and provides evi-

dence that in the substantive realm of policy analysis, policy professionals in govern-

ment view them quite differently. Thus, while externalization of policy advice may

be an indisputable emerging trend when we look at the supply side, these findings

suggest that on the demand side (i.e., the perspectives of consumers of that policy

research), not all external sources of applied policy research and advice equally pene-

trate through the gatekeepers in professional public service.

The second major implication from this study relates to the heuristics employed

by professional policy analysts in government. The findings from this experiment

confirm that perceptions of status, authority, and expertise are strong filters to make

processing of information easier for policy analysts and policymakers. While these

findings are not especially counterintuitive and do not deviate considerably from

previous observational studies, their value stems from their basis in randomized

controlled experimental conditions, coupled with quantitative measures of the

degree of credibility variation across the three main sources of policy research

encountered by policy analysts in government. The dramatic variation in credibility

assessments across sources has not been documented in this manner in the literature

and demands further reflection and inquiry on the implications to our understand-

ings of the nature and dynamics of policy advisory systems.

Policy analysts appear to draw heavily on a heuristic that privileges academic

policy research, likely based on an assumption of more rigorous standards and inde-

pendence of scholarly work. But is this appropriate? Many think tanks are led by

PhD-trained researchers, engage in some process of peer review (although rarely

double blinded), and have adopted key pieces of academic protocols, whereas mean-

while academic research is increasingly challenged on the weaknesses of its peer

review systems, publication biases, and the “wild west” of some open-access jour-

nals (Bohannon, 2013; see also: Ioannidis, 2014). And while some sectors of the aca-

demic community increasingly acknowledge that our research questions, agendas,

and products cannot be divorced from our individual politics (Wesselink et al.,

2013), policy analysts in government appear not especially critical of this fact.

This of course does not mean that think tanks or research-based advocacy organ-

izations do not have influence—this study did not aim to test that much larger and

more complex question.13 Policy analysts are simply building into their analysis their

understanding of both the methodological rigor and perceptible ideology of the indi-

vidual or organization as a source of policy advice, which acts as a filter on its

credibility, value, and potential use. The credibility heuristic revealed by this experi-

ment against most think tank and research-based advocacy organizations matters

because bureaucrats remain the critical gatekeepers of policy research and analysis in

their role in briefing senior bureaucratic and elected leadership in government. Under

contemporary conditions of information overload, theoretical interest shifts to the fil-

tering of policy knowledge, and “the use of decision heuristics, rules, and other short-

cuts with all the biases these entail” (James & Jorgensen, 2009, pp. 151–52). As

researchers track the rapidly expanding policy advice systems within contemporary

bureaucracies, this study reminds us that it is not a purely pluralistic arena of policy

research and advice (Denham & Garnett, 2006; Stone, 2007; Wells, 2012), but instead is

one subject to powerful heuristics—with positive heuristics associated with academic

research and negative heuristics seemingly shaped by ideological source markers—

that bureaucrats use to sift through policy-relevant information and advice.

As a final note, this randomized controlled field experiment benefits from draw-

ing on real policy actors and real research products on two salient public policy

issues, yet there remain a number of limitations to this study that point toward

future research avenues. First, the sample size of N = 258 may be relatively small

(although the population of interest in this case is only 1,108), especially after being

subdivided into four groups. With more subjects we would be able to discover more

precise relationships between the variables across the control and experimental

groups. Second, that subjects were exposed to research summaries, rather than full

articles, may have caused them to lean more on their knowledge of the source, rather

than the specific evidence and argument within the content, than they typically

would, and thus may overestimate the source effects.14 Replication and expansion of

the sample, as well as survey content modification, will be key to confirm or chal-

lenge these tentative experimental findings.

The other limitation of the study is with respect to the issue areas tested. Mini-

mum-wage and income-splitting policies are arguably more technocratic policy

debates, which lend themselves well to argument, evidence, and competing analysis.

Yet, one might not expect the same dynamic to be present on other policy issues that

have a more explicit moral, emotional, or ideological dimension, such as abortion,

nuclear energy, or military intervention policy. As such, more general claims about

how analysts in government perceive policy research ought to be avoided, as future

experiments may find different effects depending on the characteristics of the policy

issue.

Future research might also replicate this study with elected officials, as opposed

to professional public servants, to identify similarities and differences in their assess-

ments of applied policy research. One might hypothesize that elected officials, who

trade in more explicit ideological positioning than policy professionals, may afford

particular interest groups and think tanks with higher credibility than that which

was discovered for policy analysts in government. Such added nuance would allow

us to further refine policy advisory system models.

Additionally, future nonexperimental or observational research would be well

positioned to further probe bureaucrats on their conscious considerations when pro-

cessing and synthesizing policy research from different sources, which may help

identify the mechanism at work, whether it is officials making inferences about bias

or about quality of policy research, or some other consideration. Yet, this experiment

represents a starting point to assess the credibility of different sources of applied pol-

icy research that are presented to analysts in government, the answer for which has

eluded researchers investigating various dimensions of influence in the policy pro-

cess and the dynamics of external policy advisory systems.

Carey Doberstein is an assistant professor of political science at the University of

British Columbia, Okanagan campus. Email: [email protected].

Notes

1. This author does not subscribe to this view. Academic research is increasingly challenged on the

weaknesses of its peer review systems, publication biases, and the “wild west” of some open-access

journals (Bohannon, 2013; see also: Ioannidis, 2014). And some sectors of the academic community of

course acknowledge that our research questions, agendas, and products cannot be divorced from our

individual politics (Wesselink et al., 2013).

2. Although the issues are distinct, they are related in the sense that they both affect individuals’ after-

tax income distribution, yet target different populations, in very different ways, and remain contro-

versial in public policy circles.

3. A diversity of argument and conclusions among the articles presented to respondents is important to

capture the broad parameters of the competitive nature of the policy debate in the scientific and trans-

scientific research communities and to emulate what policy analysts would encounter in their own

reviews of the policy literature.

4. Note that while the criteria for article selection is important for transparency, from a methodological

point of view the precise articles chosen from the larger sample are in fact irrelevant, given that the

same content is presented to respondents in the respective control and treatment groups, and thus the

differences measured are solely due to the effects of the source label. Thus, the key variable to be

tested is the source label, not the content, given that it is fixed across groups.

5. Alternatives to excerpting the conclusions were considered by the researcher, including using intro-

ductions or research judgment to excerpt other portions of the studies, but there was a strong sense

that the content and argument of the researchers should be as unadulterated as possible to avoid any

subconscious or inadvertent selection of text that might bias the assessment of the research by sub-

jects. Selecting the conclusions for each allowed for the text to be presented without any revisions

while also providing the reader with the key pieces of argument, evidence, and analysis.

6. Although in some cases, depending on the research question, citizens are indeed the appropriate sam-

ple. For example, Lachapelle et al. (2014) in this journal conducted experimental work on the public

perceptions of expert credibility on policy issues.

7. This also excludes senior leaders in the public service (Assistant Deputy Ministers and up), based on

the rationale that they are not likely to read the research directly, but instead rely on the briefings

materials prepared by the lower level analysts (who are the subjects in this experiment). Further, there

are far fewer senior public sector leaders than analysts and are reasonably assumed to be less likely

to respond to such surveys, thus preventing meaningful statistical analysis involving positions of that

rank.

8. It is difficult to estimate the extent of systematic nonresponse among the sample, as we do not know

their demographic features, only their contact information.

9. Assignment of new mismatched labels to content in the treatment was not done with an explicit theo-

retical aim in each case. The only aim was to change the label of the source from one category to

another (academic, think tank, and research-based advocacy organization).
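To make the label-reassignment design concrete, the following is a minimal sketch in Python of one way such a treatment could be operationalized: respondents are randomly assigned to control or treatment, the excerpt text is held fixed, and only the source label shown with each excerpt changes. The excerpt names, label names, and the particular cross-category mapping are hypothetical illustrations, not the study's actual materials or code.

import random

# Hypothetical materials: each excerpt's true source label (control condition).
true_label = {
    "excerpt_A": "University X (academic)",
    "excerpt_B": "Institute Y (think tank)",
    "excerpt_C": "Organization Z (advocacy)",
}

# An illustrative fixed cross-category relabelling for the treatment condition:
# content is unchanged, but each excerpt is shown under a label drawn from a
# different source category.
swapped_label = {
    "excerpt_A": "Institute Y (think tank)",
    "excerpt_B": "Organization Z (advocacy)",
    "excerpt_C": "University X (academic)",
}

def assign_group(rng):
    """Randomly assign a respondent to the control or treatment condition."""
    return rng.choice(["control", "treatment"])

rng = random.Random(2016)
for respondent in ["r01", "r02", "r03"]:
    group = assign_group(rng)
    labels = true_label if group == "control" else swapped_label
    print(respondent, group, labels["excerpt_A"])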


10. An ordinal logit estimator is a regression technique using ordinal dependent variable measures (as is

the case in this study), estimated by maximum likelihood estimation. The odds ratio presentation sup-

presses the coefficient estimates and instead presents the relative odds of success conditional on the

realization of a particular variable. An odds ratio less than 1 indicates that a variable is associated

with a relative decrease in the likelihood of success, whereas an odds ratio in excess of 1 indicates that

a variable is associated with a relative increase in the likelihood of success.
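For readers unfamiliar with the estimator, the following is a minimal sketch in Python of fitting an ordered logit and converting the treatment coefficient into an odds ratio. It uses the OrderedModel class from statsmodels on simulated data; the variable names, sample size, and effect size are illustrative assumptions, not the study's data or code.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
treatment = rng.integers(0, 2, n)            # 1 = mismatched source label (illustrative)
latent = -0.8 * treatment + rng.logistic(size=n)
# Five-category ordinal credibility rating derived from the latent score.
credibility = pd.cut(latent, bins=[-np.inf, -2, -0.5, 0.5, 2, np.inf], labels=False)

X = pd.DataFrame({"treatment": treatment})
model = OrderedModel(credibility, X, distr="logit")
result = model.fit(method="bfgs", disp=False)

beta_treatment = result.params["treatment"]
print("coefficient:", round(beta_treatment, 3))
print("odds ratio: ", round(float(np.exp(beta_treatment)), 3))
# An odds ratio below 1 means the treatment lowers the odds of a higher
# credibility rating; above 1 means it raises them.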

11. The model involves a proportional odds assumption, meaning that effects of the treatment are more

or less equal across thresholds of the dependent variable. A series of proportional odds tests was conducted for each model and, while the assumption held across many categories of the ordinal dependent variable, it was violated in several cases (see Tables A7 and A8 in the Appendix). This does

not mean that the model is invalid, but rather reflects that for a number of the articles, there are sub-

stantial numbers of respondents giving top (or bottom level) category responses, and the predictor

shows a large association for the top category response but a much smaller association for other

cumulative measures—and the significance markers indeed reflect this in the data. As such, the

cumulative odds ratio is a weighted combination of the several thresholded odds ratios, with a higher

weight placed upon the top (or bottom) category odds ratio.
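To illustrate what this kind of check probes, the short Python sketch below dichotomizes a simulated ordinal outcome at each threshold, fits a separate binary logit at each cut point, and compares the resulting treatment odds ratios. Roughly equal ratios across thresholds are consistent with proportional odds, while a ratio that diverges sharply at the top or bottom category mirrors the pattern described above. The data and variable names are simulated assumptions, not the study's data or code.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
treatment = rng.integers(0, 2, n)
latent = 0.9 * treatment + rng.logistic(size=n)
rating = np.digitize(latent, bins=[-2, -0.5, 0.5, 2])    # ordinal outcome coded 0..4

X = sm.add_constant(treatment.astype(float))
for cut in range(1, 5):
    y = (rating >= cut).astype(int)                       # 1 if rating at or above the cut point
    fit = sm.Logit(y, X).fit(disp=False)
    odds_ratio = float(np.exp(fit.params[1]))             # exponentiated treatment coefficient
    print(f"threshold >= {cut}: treatment odds ratio = {odds_ratio:.2f}")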

12. This position is also informed by an unpublished, small-scale pilot of this study conducted by the

author in which a question measuring political preferences was included and resulted in substantial

attrition from the survey and written complaints from the subjects.

13. Indeed, scholars have differentiated the substantive and political use of scientific knowledge with

respect to policy (Daviter, 2015), and thus the credibility problems that ideological think tanks and

advocacy groups face among bureaucrats may not harm their ultimate influence, since they are

focused on political actors or public opinion, not necessarily convincing bureaucrats.

14. This may indeed be true, although testing this phenomenon with full articles, without incurring large-scale attrition from what would be a very lengthy survey, represents a significant challenge; it is one certainly worth attempting in future research.

References

Abelson, Donald. 2009. Do Think Tanks Matter? Assessing the Impact of Public Policy Institutes. Montreal:

McGill-Queen’s University Press.

Bertelli, Anthony, and Jeffery Wenger. 2009. “Demanding Information: Think Tanks and the US Con-

gress.” British Journal of Political Science 39 (2): 225–42.

Blyth, Mark. 2002. Great Transformations. Cambridge, UK: Cambridge University Press.

Bohannon, John. 2013. ”Who’s Afraid of Peer Review?” Science 342 (6154): 60–65.

Carter, Caitriona. 2013. “Rendering Aquaculture and Fisheries Spaces for European Government: The

Politics of Sustainability.” Environmental Science and Policy 30: 26–35.

Ceccarelli, Leah. 2011. “Manufactured Scientific Controversy: Science, Rhetoric, and Public Debate.” Rhet-

oric & Public Affairs 14 (2): 195–228.

Conford, James. 1990. “Performing Fleas: Reflections from a Think Tank.” Policy Studies Journal 11 (4): 22–30.

Cottam, Martha L., Elena Mastors, Thomas Preston, and Beth Dietz. 2010. Introduction to Political Psychol-

ogy. New York: Routledge.

Craft, Jonathan, and Michael Howlett. 2012. “Policy Formulation, Governance Shifts and Policy Influence:

Location and Content in Policy Advisory Systems.” Journal of Public Policy 32 (2): 79–98.

———. 2013. “The Dual Dynamics of Policy Advisory Systems: The Impact of Externalization and Politi-

cization on Policy Advice.” Policy and Society 32 (3): 187–97.

Daviter, Falk. 2015. “The Political Use of Knowledge in the Policy Process.” Policy Sciences 48 (4): 491–505.

Denham, Andrew, and Mark Garnett. 2006. “What Works? British Think Tanks and the End of Ideology.”

The Political Quarterly 77 (2): 156–65.

Doberstein, Carey. 2016. Building a Collaborative Advantage: Network Governance and Homelessness Policy-

Making in Canada. Vancouver: UBC Press.


Dobuzinskis, Laurent, Michael Howlett, and David Laycock, eds. 2007. Policy Analysis in Canada: The State

of the Art. Toronto: University of Toronto Press.

Doern, George Bruce, and Richard W. Phidd. 1983. Canadian Public Policy: Ideas, Structure, Process.

Toronto: Methuen.

Eichbaum, Chris, and Richard Shaw. 2007. “Ministerial Advisers and the Politics of Policy-Making: Bureau-

cratic Permanence and Popular Control.” The Australian Journal of Public Administration 66 (4): 453–67.

Fraser Institute. 2004. “Fraser Institute at 30: A Retrospective.” http://www.fraserinstitute.org/sites/

default/files/30th-anniversary-retrospective.pdf. Accessed January 12, 2016.

———. 2014. “Annual Report.” http://www.fraserinstitute.org/sites/default/files/fraser-institute-2014-

annual-report.pdf. Accessed January 12, 2016.

Fraussen, Bert, and Darren Halpin. 2016. “Think Tanks and Strategic Policy-Making: The Contribution of

Think Tanks to Policy Advisory Systems.” Policy Sciences, doi:10.1007/s11077-016-9246-0.

Hafner-Burton, Emilie M., D. Alex Hughes, and David G. Victor. 2013. “The Cognitive Revolution and

the Political Psychology of Elite Decision Making.” Perspectives on Politics 11 (02): 368–86.

Hall, Peter A. 1993. “Policy Paradigms, Social Learning, and the State: The Case of Economic Policymak-

ing in Britain.” Comparative Politics 25 (3): 275–96.

Halligan, John. 1995. “Policy Advice and the Public Sector.” In Governance in a Changing Environment, eds. B.

Guy Peters and Donald J. Savoie. Montreal: McGill-Queen’s University Press, 138–72.

Hart, Paul, and Ariadne Vromen. 2008. “A New Era for Think Tanks in Public Policy? International

Trends, Australian Realities.” Australian Journal of Public Administration 67 (2): 135–48.

Head, Brian, Michele Ferguson, Adrian Cherney, and Paul Boreham. 2014. “Are Policy-Makers Interested

in Social Research? Exploring the Sources and Uses of Valued Information Among Public Servants

in Australia.” Policy and Society 33 (2): 89–101.

Heinrichs, Harald. 2005. “Advisory Systems in Pluralistic Knowledge Societies: A Criteria-Based Typol-

ogy to Assess and Optimize Environmental Policy Advice.” In Democratization of Expertise?, eds. Sab-

ine Maasen, and Peter Weingart. Dordrecht: Springer Netherlands, 41–61.

Howlett, Michael. 2011. “Public Managers as the Missing Variable in Policy Studies: An Empirical Inves-

tigation Using Canadian Data.” Review of Policy Research 28 (3): 247–63.

Ioannidis, John. 2014. “How to Make More Published Research True.” PLoS Medicine 11 (10): 1–6.

Jacques, Peter J., Riley E. Dunlap, and Mark Freeman. 2008. “The Organisation of Denial: Conservative

Think Tanks and Environmental Skepticism.” Environmental Politics 17 (3): 349–85.

James, Thomas E., and Paul D. Jorgensen. 2009. “Policy Knowledge, Policy Formulation, and Change:

Revisiting a Foundational Question.” Policy Studies Journal 37 (1): 141–62.

Jasanoff, Sheila. 1990. The Fifth Branch: Science Advisors as Policy Makers. Cambridge, MA: Harvard Univer-

sity Press.

Kipping, Matthias, and Lars Engwall, eds. 2001. Management Consulting. Emergence and Dynamics of a

Knowledge Industry. New York: Oxford University Press.

Lachapelle, Erick, Éric Montpetit, and Jean-Philippe Gauvin. 2014. “Public Perceptions of Expert Credibil-

ity on Policy Issues: The Role of Expert Framing and Political Worldviews.” Policy Studies Journal 42

(4): 674–97.

Landsbergen, David, and Barry Bozeman. 1987. “Credibility Logic and Policy Analysis: Is There Rational-

ity Without Science?” Science Communication 8 (4): 625–48.

Levin, Marc A. 1991. “The Information-Seeking Behavior of Local Government Officials.” The American

Review of Public Administration 21 (4): 271–86.

Light, Stephen C., and Theresa Kemble Newman. 1992. “Awareness and Use of Social Science Research

Among Executive and Administrative Staff Members of State Correctional Agencies.” Justice Quar-

terly 9 (2): 299–324.

Lindquist, Evert. 1993. “Think Tanks or Clubs? Assessing the Influence and Roles of Canadian Policy

Institutes.” Canadian Public Administration 36 (4): 547–79.


———. 1998. “A Quarter Century of Canadian Think Tanks: Evolving Institutions, Conditions and Strat-

egies.” In Think Tanks Across Nations: A Comparative Approach, eds. Diane Stone, Andrew Denham,

and Mark Garnett. Manchester: Manchester University Press, 127–44.

Lindvall, Johannes. 2009. “The Real but Limited Influence of Expert Ideas.” World Politics 61 (4): 703–30.

Majchrzak, Ann. 1986. “Information Focus and Data Sources: When Will They Lead to Use?” Evaluation

Review 10 (2): 193–215.

Maley, Maria. 2000. “Conceptualising Advisers’ Policy Work: The Distinctive Policy Roles of Ministerial

Advisers in the Keating Government, 1991–96.” Australian Journal of Political Science 35 (3): 449–70.

McGann, James, and Richard Sabatini. 2011. Global Think Tanks: Policy Networks and Governance. Abing-

don, UK: Routledge.

McGann, James G., and Erik C. Johnson. 2005. Comparative Think Tanks, Politics and Public Policy. North-

hampton, MA: Edward Elgar Publishing.

McNutt, Kathleen, and Gregory Marchildon. 2009. “Think Tanks and the Web: Measuring Visibility and

Influence.” Canadian Public Policy 35 (2): 219–36.

Mooney, Christopher Z., and Richard G. Schuldt. 2008. “Does Morality Policy Exist? Testing a Basic

Assumption.” Policy Studies Journal 36 (2): 199–218.

Negev, Maya, and Naama Teschner. 2013. “Rethinking the Relationship Between Technical and Local

Knowledge: Towards Multiple Types of Knowledge.” Environmental Science and Policy 30 (June): 50–59.

Ozawa, Connie P. 1991. Recasting Science: Consensual Procedures in Public Policy Making. Boulder, CO:

Westview Press.

Pal, Leslie A. 2010. Beyond Policy Analysis: Public Issue Management in Turbulent Times. Scarborough, ON:

ITP Nelson.

Renn, Ortwin. 1995. “Styles of Using Scientific Expertise: A Comparative Framework.” Science and Public

Policy 22 (3): 147–56.

Rich, Andrew. 2005. Think Tanks, Public Policy, and the Politics of Expertise. Cambridge, UK: Cambridge

University Press.

Rich, Andrew, and R. Kent Weaver. 2000. “Think Tanks in the US Media.” The Harvard International Jour-

nal of Press/Politics 5 (4): 81–103.

Sherrington, Philippa. 2000. “British Think Tanks: Advancing the Intellectual Debate?” The British Journal

of Politics & International Relations 2 (2): 256–63.

Simon, Herbert A. 2000. “Bounded Rationality in Social Science: Today and Tomorrow.” Mind and Society

1 (1): 25–39.

Stone, Diane. 1996. “From the Margins of Politics: The Influence of Think-Tanks in Britain.” West Euro-

pean Politics 19 (4): 675–92.

———. 2007. “Recycling Bins, Garbage Cans or Think Tanks? Three Myths Regarding Policy Analysis

Institutes.” Public Administration 85 (2): 259–78.

Van de Walle, Steven, and Gregg G. Van Ryzin. 2011. “The Order of Questions in a Survey on Citizen Satisfac-

tion with Public Services: Lessons from a Split Ballot Experiment.” Public Administration 89 (4): 1436–50.

Weible, Christopher M. 2008. “Expert-Based Information and Policy Subsystems: A Review and Syn-

thesis.” Policy Studies Journal 36 (4): 615–35.

Weidenbaum, Murray. 2010. “Measuring the Influence of Think Tanks.” Society 47 (2): 134–37.

Weingart, Peter. 1999. “Scientific Expertise and Political Accountability: Paradoxes of Science in Politics.”

Science and Public Policy 26 (3): 151–61.

Weiss, Carol H. 1979. “The Many Meanings of Research Utilization.” Public Administration Review 39 (5): 426–31.

Wells, Peter. 2012. “Prescriptions for Regional Economic Dilemmas: Understanding the Role of Think

Tanks in the Governance of Regional Policy.” Public Administration 90 (1): 211–29.

Wesselink, Anna, Karen S. Buchanan, Yola Georgiadou, and Esther Turnhout. 2013. “Technical Knowledge, Dis-

cursive Spaces and Politics at the Science–Policy Interface.” Environmental Science & Policy 30 (June): 1–9.

Wilson, Richard. 2006. “Policy Analysis as Policy Advice.” In The Oxford Handbook of Public Policy, eds.

Michael Moran, Martin Rein, and Robert E. Goodin. New York: Oxford University Press, 152–68.


Appendix

Table A1. Policy Research Articles Selected in the Survey Experiment

Minimum-Wage Policy
  Academic article
    University of Toronto: Campolieti et al. (2012), "The (Non) Impact of Minimum Wages on Poverty: Regression and Simulation Evidence for Canada" (Journal of Labour Research)
  Think-tank articles
    Fraser Institute: Veldhuis and Karabegovic (2011), "The Economic Effects of Increasing British Columbia's Minimum Wage"
    Canadian Centre for Policy Alternatives: Brennen and Stanford (2014), "Dispelling Minimum Wage Mythology"
  Advocacy-group articles
    Wellesley Institute: Block (2013), "Who Is Working For Minimum Wage In Ontario?"
    Canadian Federation of Independent Business: Braun-Pollon et al. (2011), "Minimum Wage: Reframing the Debate"

Income-Splitting Policy
  Academic articles
    University of Calgary: Krzepkowski and Mintz (2013), "No More Second-Class Taxpayers" (School of Public Policy Research Papers, Vol. 6, Issue 15)
    York University: Philipps (2013), "Real versus Notional Income Splitting: What Canada Should Learn from the US 'Innocent Spouse' Problem" (Canadian Tax Journal)
  Think-tank articles
    Broadbent Institute: Shillington (2014), "The Big Split: Income Splitting's Unequal Distribution of Benefits Across Canada"
    CD Howe Institute: Laurin and Kesselman (2011), "Income Splitting for Two-Parent Families: Who Gains, Who Doesn't, and at What Cost?"
  Advocacy-group articles
    Canadian Labour Congress: Weir (2011), "Income Splitting Would Worsen Inequality"
    Institute for Marriage & Family Canada: Watson et al. (2014), "Busting Income Splitting Myths"

Table A2. Summary of Demographic and Subject Characteristics Across Treatment and Control Groups

                                    Minimum Wage              Income Splitting
Variable                         Control     Treatment      Control     Treatment
                                 (N=59)      (N=62)         (N=67)      (N=70)
Gender
  Male                           43.9%       40.3%          38.8%       32.8%
  Female                         56.1%       59.7%          61.2%       67.2%
Education
  High school                    0%          1.6%           1.5%        0%
  Diploma                        5.3%        8.1%           1.5%        5.9%
  Undergraduate                  38.6%       35.5%          26.9%       29.9%
  Graduate                       56.1%       54.8%          70.1%       59.7%
  No answer                      0%          0%             0%          1.5%
Years in public sector
  Less than 1 year               5.3%        6.5%           4.5%        1.5%
  1–5 years                      22.8%       16.1%          16.4%       6.0%
  6–10 years                     21.1%       35.5%          20.9%       37.3%
  11–15 years                    21.1%       9.7%           11.9%       23.9%
  16–20 years                    3.5%        9.7%           14.9%       11.9%
  20+ years                      26.3%       22.6%          31.3%       17.9%
  No answer                      0%          0%             0%          1.5%
Position
  Advisor                        17.5%       11.3%          11.9%       17.9%
  Director/manager               19.3%       24.2%          29.9%       23.9%
  Economist                      5.3%        4.8%           6.0%        6.0%
  Planner                        5.3%        1.6%           7.5%        4.5%
  Analyst                        42.1%       51.6%          41.8%       44.8%
  Researcher                     10.5%       6.5%           3.0%        1.5%
  No answer                      0%          0%             0%          1.5%
Pre-test: Minimum wage opinion
  Very good                      45.6%       43.5%          14.9%*      10.4%
  Somewhat good                  33.3%       37.1%          38.8%*      29.9%
  Neutral                        12.2%       16.1%          13.4%       19.4%
  Somewhat bad                   1.8%        0%             14.9%       25.4%
  Very bad                       0%          1.6%           6.0%        7.4%
  Don't know                     5.3%        0%             10.4%       7.5%
  No answer                      1.8%        1.6%           1.5%        0%

*p < 0.1.

Table A3. Minimum Wage Ordinal Logistic Regressions (Ordinal Credibility Measure)

                     UOFT               FRASER             CCPA               WELL               CFIB
Treatment labels →   CCPA               UOFT               CFIB               FRASER             WELL

TREATMENT            −1.139*** (0.413)   1.392*** (0.403)  −0.585 (0.385)     −0.453 (0.385)     −0.278 (0.391)
GENDER_male          −0.093 (0.415)     −0.698* (0.387)     0.477 (0.398)     −0.254 (0.395)      0.463 (0.409)
EDUC_no_grad         −0.657 (0.406)     −0.208 (0.725)      0.177 (0.384)      0.527 (0.393)      1.201*** (0.417)
JOB_manager          −1.025* (0.406)    −0.069 (0.477)      0.438 (0.489)     −0.167 (0.499)      0.464 (0.495)
PRE_TEST_good         1.548 (1.249)     −0.907 (1.263)     −0.750 (1.822)     −1.041 (2.085)     −0.419 (1.337)
Odds ratio_TREAT      0.3201             3.9155             0.5568             0.6360             1.3211

Standard errors in parentheses; ***p < 0.01, *p < 0.1.

Table A4. Income Splitting Ordinal Logistic Regressions (Ordinal Credibility Measure)

                     UOFC              YORKU             BROAD             CDHOWE            CLC               IMFC
Treatment labels →   CLC               BROAD             UOFC              IMFC              CDHOWE            YORKU

TREATMENT            −0.240 (0.366)    −0.171 (0.362)     0.333 (0.369)    −0.505 (0.397)     0.899** (0.387)   1.491*** (0.381)
GENDER_male           0.386 (0.374)     0.084 (0.376)    −0.591 (0.382)     0.290 (0.405)    −0.300 (0.390)    −0.017 (0.367)
EDUC_no_grad          1.010 (1.833)    −0.734 (1.887)     1.850 (1.908)     3.268* (1.962)    0.894 (1.982)     1.903 (1.770)
JOB_manager          −0.401 (0.407)    −0.495 (0.411)    −0.528 (0.416)    −0.651 (0.453)    −0.493 (0.419)     0.167 (0.403)
PRE_TEST_good        −0.508 (0.435)    −0.475 (0.447)    −0.704 (0.454)     0.320 (0.476)    −0.717 (0.466)     0.821* (0.439)
Odds ratio_TREAT      0.7866            0.8430            1.3945            0.6038            2.4576            4.4423

Standard errors in parentheses; ***p < 0.01, **p < 0.05, *p < 0.1.


Table A5. Minimum Wage Ordinal Logistic Regressions (Ranked Outcome Variable)

                     UOFT              FRASER             CCPA               WELL              CFIB
Treatment labels →   CCPA              UOFT               CFIB               FRASER            WELL

TREATMENT             1.029** (0.427)  −1.127*** (0.394)   1.248*** (0.403)   0.458 (0.386)   −1.415*** (0.421)
GENDER_male           0.158 (0.428)    −0.077 (0.401)     −0.112 (0.397)      0.061 (0.405)   −0.033 (0.422)
EDUC_no_grad          0.714 (0.419)    −0.149 (0.396)     −0.239 (0.393)      0.089 (0.409)   −0.344 (0.407)
JOB_manager           0.967 (0.488)    −0.353 (0.475)     −0.493 (0.472)      0.676 (0.482)   −0.552 (0.473)
PRE_TEST_good        −2.474 (1.308)     0.429 (1.49)      −0.510 (1.775)      1.537 (1.758)   −1.094 (1.300)
Odds ratio_TREAT      2.7985            0.3240             3.4844             1.5807           0.2429

Standard errors in parentheses; ***p < 0.01, **p < 0.05.

Table A6. Income Splitting Ordinal Logistic Regressions (Ranked Outcome Variable)

                     UofC               YORKU             BROAD               CDHOWE               CLC               IMFC
Treatment labels →   CLC                BROAD             UOFC                IMFC                 CDHOWE            YORKU

TREATMENT             0.908** (0.479)    0.826** (0.418)  −1.048*** (0.375)     0.864** (0.385)    −0.361 (0.385)    −1.590*** (0.485)
GENDER_male          −0.770** (0.460)   −0.147 (0.431)     0.746** (0.377)     −0.328 (0.386)       0.445 (0.402)     0.463 (0.456)
EDUC_no_grad          0.024 (0.447)      1.203 (1.603)    13.983*** (0.237)   −15.826*** (0.234)   −0.010 (1.637)    −1.069 (1.606)
JOB_manager           0.379 (0.536)     −0.410 (0.449)     0.382 (0.406)        0.014 (0.436)      −0.528 (0.458)    −0.035 (0.477)
PRE_TEST_good        −0.124 (0.133)     −0.089 (0.140)     0.074 (0.130)        0.016 (0.151)       0.177 (0.135)    −0.229 (0.173)
Odds ratio_TREAT      1.9791             2.284             0.3507               2.3719              0.6970            0.2039

Standard errors in parentheses; ***p < 0.01, **p < 0.05.

Table A7. Proportional Odds Test for Minimum-Wage Subgroup. Values Are Odds Ratios

RANK    UofT       FRASER     CCPA       WELL     CFIB
1       1.27***    4.49       0.95       0.91     2.23
2       2.99**     1.45       0.27       0.62     1.91
3       1.76       0.64       0.46       0.62     2.26
4       1.26       0.28**     1.90       1.38     0.55
5       1.70       0.51**     3.70**     1.72     0.22***

***p < 0.01, **p < 0.05, *p < 0.10.

Table A8. Proportional Odds Test for Income-Splitting Subgroup. Values Are Odds Ratios

RANK    UofC       YORKU      BROAD      CDHOWE     CLC      IMFC
1       0.12       0.17       2.94***    0.44*      1.40     3.11
2       0.12       0.56       0.95       0.65       1.16     4.64
3       0.18       0.69       0.08       0.46       1.36     4.10
4       1.60       0.65       0.05***    2.03       1.26     0.55*
5       1.74       2.68***    0.10       3.71       0.37     0.52*
6       3.03**     1.82**     0.01       3.70**     0.52     0.15*

***p < 0.01, **p < 0.05, *p < 0.10.


