
MARY E. CORCORAN LECTURE

MINNESOTA EVALUATION STUDIES INSTITUTE, 2009

The Politics of Accountability

Sandra Mathison

University of British Columbia

It seems that for the last ten years or so I have begun almost every talk or workshop that I have done with this query…

Who is accountable to whom, for what, through what means, and with what consequences?

I would go on to illustrate the hierarchical nature of accountability relations. That is, that accountability gives those with the power to grant authority to others the privilege of asking those to whom they have given authority to be responsible and report on their actions. And, I would further point out that this is a one-way flow of accountability. Those who have power are not usually those who must account for their actions.

I would also go on to argue that the mechanisms used for accountability purposes are mostly too simplistic, driven by the interests of capital, and can be corrupted to increase the perception of success in the absence of real success. I would also argue that threats of punishment are counter-productive and strip people of their natural and intrinsic motivations to do as well as they can, and that threats of punishment or public humiliation are a key factor in the corruptibility of the means of accountability.

And then I would argue for a redefined accountability—I would argue for something other than the outcomes based accountability that has taken hold of the public (and much of the private) sector. A movement into which life has been breathed by the Clinton era Government Performance and Results Act, and which we see manifest in accountability schemes like No Child Left Behind. I would argue for an approach that is more democratic, that disperses power and authority, that uses more complex and nuanced mechanisms.

Today I want to interrogate this query in a little different way. I want instead to ask what this question has to do with evaluation—or more generally, what does evaluation have to do with accountability?

Never before has public concern and distrust been so widespread—in both the public and private sectors, from schools to Wall Street, from the manufacturing sector to health care. Public confidence in institutions to meet our individual and collective needs has been deeply eroded. While once we had confidence that professionals (like teachers, doctors, and accountants) and leaders (like politicians and ministers) had deeply held and ethical commitments to doing the right thing, too many transgressions have led to a deep skepticism. We are in what has been called the era of ‘new public management.’


Let me set the stage a bit. How have we come to this new public management and what is it? Until the late 1970s and early 1980s, accountability was more an expectation than a process. In general, what is referred to as professional accountability prevailed—the expectation that professional judgement and action were informed by good reasons and an acceptance of the authority of those providing the reasons.

Professional accountability is self-regulation by a group of professionals and might be expressed like this…

Doctors are accountable to the certifying body of physicians, for providing appropriate quality health care, through adherence to standards of knowledge, practice and ethics established by the American Medical Association [the doctors themselves], with the understanding that the failure to live up to these standards will result in the loss of certification or licensure to practice as a doctor.

While recent scandals are often cited as the reason for a shift away from professional accountability, there are, in fact, older examples of professional knowledge that is now commonly and widely held to have been inadequate or inappropriate.

Let me give you an example.

In 1975, almost thirty-five years ago now, PL 94-142 or the Education for All Handicapped Children Act was passed, legislation that required states to provide free and appropriate public education to all students with disabilities. Previously, educational professionals had determined that children with disabilities ought to remain at home, attend segregated special schools, or be institutionalized—reasoning that public school classrooms were only appropriate for children who were normal. Parents challenged the authority of educational professionals and their reasons for their acts of exclusion, and parents became a major force in the creation and passage of this legislation.

This example illustrates loss of confidence in professional judgement, and also more fundamental disagreements about values, purposes, and practices upheld by professions, especially when they conflict with the values and purposes of other stakeholder groups. That the judgement of professionals can be challenged and the singular privilege of professionals to be self-regulating be disrupted by the interests of one or more stakeholder groups clearly signals that accountability is a political process—one that results from competing and coalescing interests.

So historically, both disagreements about whether the public good has been served by professional accountability AND incidents of professional misconduct, including but not limited to malfeasance, have led to a disintegration of professional accountability and the rise of a regulatory accountability. Regulations, like PL 94-142, begin to take the place of educators’ professional judgement.

Challenges to professional accountability can and do come from many different stakeholders.


In the case of PL 94-142, parents championed the rights of their children and all children with disabilities to receive a fair education in public schools. And we are wont to feel hopeful about what we sense is a more democratic consequence. But it is important to pause and see that there are a host of stakeholders who challenge professional accountability in the name of their own interests, and we may not feel so warmly about all challenges.

For example, No Child Left Behind is a regulatory response driven primarily by the interests of politicians and corporate CEOs, given a deep dissatisfaction with the role of schools in preparing a skilled workforce that contributes to American global economic competitiveness. A number of national summits and roundtables have occurred since 1996, and the people participating in those summits have been state governors and business leaders, and much less frequently educators. Indeed, the final summit, held in 2001 at the IBM Conference Center in Palisades, NY, and the culminating event that launched the passage of NCLB, included only politicians and business leaders. And special precautions were taken to avert widely publicized protests by teachers and educational activists.

Politicians’ and business leaders’ dissatisfaction with schools lays blame on educational professionals and is manifest in claims of inadequate professional knowledge and skill and of professional protectionism, such as privileging longevity over excellence in job security. The consequence is the same as with parental dissatisfaction with professional educators’ views on educating special needs children—a regulatory response to change what happens in schools.

These regulatory responses are key to the new public management, or what I call outcomes based accountability. Regulatory accountability might be expressed something like this…

Schools are accountable to the federal government for demonstrating adequate yearly progress (or AYP) within all sub-categories of students, as indicated by state mandated achievement tests in literacy and math, and failing schools must notify parents they are failing, develop a plan for improvement, and/or undergo restructuring.

Regulatory accountability shifts determinations about what professionals should do outside the institutions within which they work, rather than leaving them to be decided by the professionals working within those institutions. This form of accountability vests authority in governments, but not by simply putting the government in charge. Rather, governments are agents that support free markets by creating the conditions that allow markets to operate to maximize effectiveness, profits and efficiency. Government regulations are the primary means for creating these conditions.

This is the political theory of neo-liberalism.

Let me share a longish quote from Olssen that captures the role of governments in neo-liberalism…


[N]eo-liberalism has come to represent a positive conception of the state’s role in creating the appropriate market by providing the conditions, laws and institutions necessary for its operation. In the shift from classical liberalism to neo-liberalism, then, there is a further element added, for such a shift involves a change in subject position from “homo economicus,” who naturally behaves out of self-interest and is relatively detached from the state, to “manipulatable man,” who is created by the state and who is continually encouraged to be “perpetually responsive.” It is not that the conception of the self-interested subject is replaced or done away with by the new ideals of “neo-liberalism,” but that in an age of universal welfare, the perceived possibilities of slothful indolence create necessities for new forms of vigilance, surveillance, “performance appraisal” and of forms of control generally. In this model the state has taken it upon itself to keep us all up to the mark. The state will see to it that each one makes a “continual enterprise of ourselves” . . . in what seems to be a process of “governing without governing.” (Olssen, 1996, p. 340)

So whether you are a fan of neo-liberalism or not, this is the global political context in which we find ourselves. And with that comes a focus on outcomes based accountability, where the desirable outcomes are established by special interests (often elites, and often capitalist, corporate elites) and fostered by governmental regulation and surveillance.

In evaluation, the state role takes an even more sinister form. Not only do government regulations facilitate definitions of what is good and right, but additionally the US government has assumed the role of evaluator in the public interest. This governmental role as evaluator is sometimes confined to programs the government funds, such as the US Office of Management and Budget’s system for evaluating and publicizing whether government funded programs work or not. Expect More (http://www.expectmore.gov) offers to tell the public which programs are performing effectively and ineffectively, as well as those about which the jury is still out.

But other government resources reach beyond government-funded programs to let the citizenry know what works. The best example of this is the What Works Clearinghouse, created in 2002 by the US Department of Education’s Institute of Education Sciences “to provide educators, policymakers, researchers, and the public with a central and trusted source of scientific evidence of what works in education.” In both cases, the government assumes the role of telling the public what the best choices are. (How the government knows what is best is something I will pick up a little later.)

There are, of course, private non-governmental agencies that offer similar services to the public, such as SchoolMatters, a product of Standard & Poor’s, which is owned by McGraw-Hill Companies—one of the biggest producers of educational tests. “SchoolMatters gives policymakers, educators, and parents the tools they need to make better-informed decisions that improve student performance. SchoolMatters will educate, empower, and engage education stakeholders.”

And, in Canada, the Fraser Institute publishes school rankings for half of the country’s provinces based on provincially mandated student achievement tests because they contend that, “An educational market, one in which parents choose their children's schools and schools compete more freely for students, will produce better educational results for more students.” CanWest Global Communications, the company that owns many Canadian daily newspapers, implicitly supports this contention by publishing newspaper inserts with these ‘reports’ of the quality of schools prepared by the Fraser Institute and relegating alternative views to the op-ed pages.

These examples are meant to illustrate the importance of the state’s role within a neo-liberal political framework—a role that is critical in establishing the new public management and fostering a coalescing of political and capitalist interests into systems for managing and controlling public life.

What Does Evaluation Have to do with Accountability?

On the surface it seems that accountability and evaluation might be the same thing, or at least that evaluation is the essential means by which accountability happens. As evaluators, we should be all over this idea of accountability, embracing the idea, knowing that what we have to offer will be useful, and getting to work. But this is not so. Why do many evaluators resist accountability, seeing it as an impediment to or contrary to their work?

A little deconstruction of the specifics of the accountability question might give us some insight.

The clearest connection between accountability and evaluation is that evaluation is the “means” that facilitates the determination of whether one has acted in the ways expected. Consider, for example, this quote, which is typical in the literature on accountability:

Accountability is holding someone responsible for what they are supposed to do. Evaluation is the documentation used to prove that what was supposed to be done, in fact, was done and to determine how well it was done. (McKenna, 1983).

But the literature on accountability also describes a variety of means that can be used for accountability purposes. For example, the One World Trust identifies four dimensions (which I take to be means) of accountability—evaluation is one along with participation, transparency, and complaints & redress. Similarly, in his discussion of accountability for non-government organizations, Ebrahim (2003) identifies five means used in accountability—he includes evaluation along with reports and disclosure statements, participation, self-regulation, and social audits.

This suggests that evaluation is one among several possible means used in accountability. But is this really the case? For example, every not-for-profit organization must annually prepare detailed reports for the IRS disclosing its finances and organizational structure. Some may see it as a bit of a stretch to describe the IRS’s audit of such disclosures as an evaluation, but what the IRS does is to use an established set of characteristics and modes of practice (what might be called criteria and standards) to determine if a particular 501(c) organization meets the IRS expectations for proper and thus good conduct—good in the sense of being as it should be, rather than doing good things. Admittedly, this is a weak evaluation, one that focuses on compliance rather than, say, the value of the contributions the 501(c) organization makes in its domain of operations.


So, I think it is reasonable to say that, in fact, all accountability requires evaluation, that value judgements have a central role within accountability, that accountability is all about making value judgements.

Sometimes this evaluation is weak and seeks only information that confirms compliance with rules, regulations or expectations, but very often there is a stronger sense of evaluation implied within accountability. To quote Dunsire’s (1978) description of accountability within bureaucracies…

the account when rendered, is to be evaluated by the superior or superior body measured against some standard or some expectation, and the differences noted: and then praise or blame are to be meted out and sanctions applied. It is the coupling of information with its evaluation and application of sanctions that gives ‘accountability’ or ‘answerability’ or ‘responsibility’ their full sense in ordinary usage.

Although all accountability is evaluation, the converse is not true; that is, not all evaluation is accountability. Making a value judgement does not in and of itself imply that anyone is being held to account, nor that there are any consequences, either good or bad, that ensue from the value judgement. While working in my garden I may judge a rose a perfect rose—a value judgement that does not hold me as the gardener or the supplier from whom I buy my roses to account and that does not imply any consequences. It is simply a declaration of the value of this rose based on accepted standards for a perfect rose.

Indeed we do evaluation for many purposes other than accountability, and therein lies one of the sources of evaluators’ disquiet with accountability. All that we do or want to do as professional evaluators is not in the name of holding someone to account. In addition…


we evaluate with the intent to improve things,
we evaluate to learn and appreciate,
we evaluate to build self-evaluation capabilities,
we evaluate to discern what needs are,
we evaluate so that an informed decision can be made.

These other worthy goals of evaluation get washed away by the evaluation-as-accountability tide. And this is a denial of decades of theoretical and practical work in evaluation, work that strives to find the good and make the not so good better in more complex, often contextualized ways.

But back to the basic structure of accountability. That is, who is accountable to whom, for what, by what means, and with what consequences?

In professional accountability, evaluation plays a prominent role in two ways. First is the accreditation of programs that educate professionals adequately so they can be certified to practice their profession. Second is the process of investigating and adjudicating accusations of professional misconduct, usually through internal investigations by specially appointed persons or groups within the professional certifying bodies.

In a system of professional accountability, there is really no special role for evaluators qua evaluators. Judgements about what is good and right are structurally defined by the profession, and professionals themselves are seen as best positioned to make those judgements. Citizen complaints about police are assumed to be best handled through internal police investigations of conduct, rather than, say, by citizen review boards. Judgements about the quality of degree programs to educate psychologists are made by psychologists—those who are within the degree programs and peers from other institutions with similar programs.

In regulatory accountability, evaluators do play a special role. I will return to the issue of who is accountable to whom and for what, but for now let me focus on the ‘by what means’ part of accountability. As I mentioned earlier, many discussions of regulatory accountability list evaluation among several means by which one is held to account. This construction equates evaluation with providing information—designing and refining instrumentation that measures outcomes specified by others, and overseeing the collection and analysis of data using that instrumentation. The focus is on the technical aspects of managing and reporting evidence, and evaluators become technicians in this formulation.

But even more critical than equating evaluation with data management and reporting is the likelihood, within regulatory accountability, that BOTH the FORMS of data and the data collection METHODS are prescribed outside of the evaluation context, by those with the authority to demand an account of others. A contemporary, but now classic, example of this is the US Department of Education’s 2003 directive about how evaluation of educational programs ought to be done. Here is a quote from the Department of Education directive…

Evaluation methods using an experimental design are best for determining project effectiveness. . . . If random assignment is not feasible, the project may use a quasi-experimental design with carefully matched comparison conditions. . . . For projects that are focused on special populations in which sufficient numbers of participants are not available to support random assignment or matched comparison group designs, single-subject designs such as multiple baseline or treatment-reversal or interrupted time series that are capable of demonstrating causal relationships can be employed. . . . Proposed evaluation strategies that use neither experimental designs with random assignment nor quasi-experimental designs using a matched comparison group nor regression discontinuity designs will not be considered responsive to the priority when sufficient numbers of participants are available to support these designs (DOE, 2003, p. 62446).

The US Department of Education justified this methods directive by appealing to a particular notion of what counts as rigorous inquiry, and by so doing directly and successfully challenged the knowledge and experience of many professional evaluators. While there are federal agencies that accept methodological diversity (like the National Science Foundation, the Department of Justice and the Bureau of Indian Affairs), there are many (like the Department of Education, the National Institutes of Health, the Office of Management and Budget) that seek to regulate methods by promoting a methodological monism. We don’t yet know how these efforts to regulate what is an acceptable method will play out over time, but for now what we see is the regulation of knowledge that is to be used in accountability.

With the regulation of methods comes a concomitant valuing of parsimony in regulatory accountability. Given the neo-liberal backdrop to regulatory accountability, there is a focus on simple, parsimonious means for holding people to account. Econometrics drives thinking in regulatory accountability, and a single or at least a small number of indicators of quality are valued. The Dow Jones Industrial Average, for example, stands as the indicator of economic health. The Dow opened at 7349 and closed at 7270, down 80 for the day. This statement is widely accepted as a pretty good indicator of how the economy is doing. (I have to say the indicator that is most compelling to me, though, is the bottom line on my retirement statement!)

Within regulatory accountability a single or a few indicators are often provided to evaluators OR… evaluators are asked to identify the best single indicators to use to demonstrate accountability. In education, student achievement scores on standardized tests have become the de facto evidence for judging the quality of schools and schooling. And the idea that all you need is standardized tests to hold teachers, administrators and schools accountable is deceptively simple. It is a neat and contained system of who is being held accountable, by what means, and with what consequences.

Richard Elmore describes this seductive formula…

“Students take tests that measure their academic performance in various subject areas. The results trigger certain consequences for students and schools—rewards, in the case of high performance, and sanctions for poor performance…. If students, teachers, or schools are chronically low performing, presumably something more must be done: students must be denied diplomas or held back a grade; teachers or principals must be sanctioned or dismissed; and failing schools must be fixed or simply closed” (Elmore, 2002).

The assumption is that the threat of failure will motivate students to learn more, teachers to teach better, educational institutions to be better, and the level of achievement to continue to rise. And all of this can be accomplished just by collecting standardized achievement scores.


Let me return to the ‘who is accountable to whom’ part of the accountability query. In general, accountability relationships are hierarchical and asymmetrical, and those who are held to account are granted the authority to do their work by those to whom they are accountable. Additionally, this hierarchical accountability relationship often has a fiscal dimension, where money flows from those with power to those who have been granted authority to carry out the wishes of those in power.

In professional accountability, professionals receive the authority to act as professionals through certification and licensure that are granted by professional associations and sometimes governmental agencies. In return for being granted this authority, professionals agree to give account for their actions to those who have granted the authority. For example, in British Columbia where I live, teachers are licensed by the British Columbia College of Teachers. Their authority to teach in an elementary or secondary school is granted by the College of Teachers and, in turn, teachers are accountable to the College of Teachers for proper, competent and ethical professional conduct. So, in many cases, there is an acceptable and mutually agreed upon accountability relationship—this relationship is institutionalized and rationalized.

But there are many areas in which this relationship is contested, and this is especially so within regulatory accountability. This is because regulatory accountability mostly entails a distance and disconnect between those who are being held to account and those holding them to account. Mostly, those who control accountability are separate from the organizational or institutional actors who are being held to account. For example, in the accountability relationships established by No Child Left Behind, teachers and administrators working in schools are held to account by the US Department of Education, which has no specific role in or knowledge of the circumstances of any particular school. And indeed, many would say it ought not to have any say in education, which has historically been a local matter.

Evaluators are accustomed to working in contexts where power relationships are a factor in the evaluation process. So this aspect of accountability is not novel to evaluation, but regulatory accountability underscores this dimension. All evaluators confront possible dilemmas regarding whose interests will be served—as manifest in what questions are asked, who is involved, what kinds of evidence are collected, and who is privileged to share in the evaluation. Within regulatory accountability, though, evaluators, already cast largely in a role as technicians, are less likely to be in a position to renegotiate how interests are served or to disrupt the authority defined by the accountability relationship.

A recent issue of New Directions for Evaluation is dedicated to an analysis of the relationship between evaluation and the regulations of No Child Left Behind. Both internal and external evaluators’ work becomes defined by the US Department of Education’s definition of who will be accountable to whom. The stories in this issue of NDE clearly illustrate how the work of both educational professionals AND evaluators is defined by the Department of Education. Evaluators working within this accountability framework cannot resist the hierarchy and, through their participation in accountability, reinforce asymmetrical power relationships and affirm their role as technicians.

Even though all evaluators experience the hegemony of what Scriven calls the “managerial ideology,” evaluation as accountability intensifies this ideology tremendously.

There is another dimension of the ‘who is accountable to whom’ part of the accountability query worth noting. Individuals or groups of individuals are the locus of control within accountability—success or failure is attributed to people. Indeed, accountability is commonly seen as a search for blame when desirable end states are not achieved. However, in many organizations and arenas of social life, the individuals who are being held to account may have control over only some of what affects outcomes.

Again, using education as an example, schools, and more specifically teachers, are blamed for the achievement gap, for the failure of their schools to make adequate yearly progress. And they are blamed even though there is plenty of evidence that factors outside teachers’ control, like socio-economic status, affect academic achievement as much as anything that happens while children are at school.

This structure of holding individuals or groups of individuals to account is not consistent with what evaluators generally do. Evaluators look at success and failure within contexts, contexts that certainly include human resources but also many other factors, and with an expectation that success or failure is almost never simply attributable to one factor.


If all accountability is evaluation, then regulatory accountability is a kind of evaluation that I believe most evaluators are uneasy with. It is a kind of accountability that de-professionalizes evaluation, recasting it instead as an information gathering process to meet the needs of the special interests of those in power, most especially coalitions of politicians and capitalists.

Well, so what? This is a pretty grim picture of the nature of evaluation within the new public management. And it often feels that evaluation is dying a slow death from the malady of regulatory accountability. And it IS in many, many contexts. And ALSO, in many, many contexts in which evaluators work, accountability is not the primary kind of evaluation that is expected or desired.

The forms of accountability are a result of larger political forces; accountability is a manifestation of social, economic and political values, and represents a for-now, in-these-times set of socio-political relationships. I’ve traced just a little of this historical development, from a focus largely on professional accountability to a focus on regulatory accountability. And the forms of accountability will change yet again, although I don’t have a crystal ball that tells me when or how.

I can’t resist concluding with at least another possibility. I guess deeply embedded in my cynicism lies an idealism that embraces hope and change.


If all accountability is evaluation, how might accountability be different? Different so that as evaluators we rest easier and work productively within the conception of accountability and in our role as agents in accountability?

The form of accountability I would offer that might accomplish this is pluralistic democratic accountability, an approach that asks stakeholders to enter a compact of mutual, collective responsibility for particular domains of social life. Such an approach to accountability places far greater emphasis on internal accountability, that is, mutual responsibility within contexts and communities where social actors can and do know each other, and in which there is real participation in actions within a common domain. It is an approach that focuses on learning and improvement, rather than blaming and punishing.

Pluralistic democratic accountability might look something like this…

Governments, school administrators, teachers, parents, and the community are accountable to one another for the education of children, as demonstrated by opportunities for all students to learn; student outcomes in cognitive, physical, social and emotional domains; and the creation of positive, learning-oriented work environments, with the expectation that continual improvement will be the outcome.

But can evaluators do anything to move us toward a pluralistic democratic accountability? And the answer is yes, we can, but not easily.

Over the years I have written a number of articles and chapters that analyze how evaluation can and ought to be a democratizing force in education. But the analysis applies to all contexts of public life.


Evaluators need to assume leadership roles, ones in keeping with the AEA guiding principle of “responsibility to the general and public welfare.” There are two contexts in which adherence to this guiding principle can occur—in the doing of a specific evaluation and as a professional evaluation community.

In a particular evaluation, evaluators cannot be technicians serving the interests of decision-makers, but instead must accept responsibility for creating an evaluation process that is in the public interest. There are approaches to evaluation, like deliberative democratic evaluation, participatory evaluation, and utilization-focused evaluation, that seem to support this idea. And we as evaluators have agreed to a set of standards for doing evaluation that ought to be upheld—like using multiple indicators, like including stakeholders, like making our work useful, like focusing on improvement.

In general, what I am saying is that as evaluators we should take to heart Lee Cronbach’s call for evaluators to be educators too: to educate others about how evaluation can be done and how it can serve multiple interests and purposes, and to resist doing as we are told by those who hold the purse strings, sit in elected offices, or occupy positions of power.

But how evaluation is done is not just a matter of local practice; it is also a matter of public policy and, as such, evaluators’ participation in the public discourse about the matter is an obligation. As professionals, and even though we may not always speak with one voice, we must create opportunities to participate in discourses that define the nature of accountability. These are conversations about evaluation policy—the rules or principles that a group or organization or government uses to guide decisions and actions when doing evaluation.

The American Evaluation Association offers some examples of this engagement in discourse about evaluation policy in its statements on high stakes testing and on educational accountability. This is not, however, always a straightforward matter (as illustrated by the dissension within AEA over its response to the US Department of Education’s endorsement of randomized clinical trials as the gold standard for educational evaluation). This kind of engagement will inevitably create conflict and discomfort. But we need to learn to embrace the conflict within our own professional community and stay engaged. AEA has moved away from the strategy of making public statements that might influence evaluation policy, but the interest in participating in the discourse has not diminished. The forms of engagement are evolving.

As evaluators we might grumble about our practice within a regulatory accountability framework defined by the new public management, fearfully disengaging from the politics of accountability. And if all we do is grumble and comply, then history will show us to be a weak profession with little commitment to its own lofty guiding principles.

Instead, I encourage all evaluators to own our profession’s rhetoric of working in the name of the public good. I encourage all evaluators to excite debate and dialogue within our own professional community and in those communities in which we do evaluation. I encourage all evaluators to embrace and participate in the politics of accountability.

