Source: Public Administration Review, Vol. 64, No. 5 (Sep.-Oct. 2004), pp. 591-606. Published by Wiley on behalf of the American Society for Public Administration. Stable URL: http://www.jstor.org/stable/3542540

Measuring Accountability for Results in Interagency Collaboratives

Stephen Page, University of Washington

Stephen Page is an assistant professor at the Daniel J. Evans School of Public Affairs at the University of Washington. He has also served as a consultant to state and local governments and nonprofit organizations that serve children and families. His research focuses on the interorganizational design and management of social and health policies. E-mail: [email protected].

This article examines the intersection of two types of innovations that are increasingly common in public administration: accountability for results and interagency collaboration. Recent scholarship suggests four approaches that collaborators can use to increase their accountability for results. The article proposes measures of these four approaches to assess a collaborative's capacity for accountability, and uses them to compare the accountability of human services collaboratives in 10 states. The findings indicate that collaboratives tend to use the four approaches in combination. Together, the various approaches may help collaborators manage their stakeholders' expectations about their actions and accomplishments. Further research is needed to determine whether a collaborative's capacity for accountability for results actually correlates with improvements in outcomes.

Many of today's public policy goals, from welfare reform to environmental protection to national defense, require collaboration among public and nongovernmental agencies. Because they work across agency and program lines, collaborators benefit from having the discretion to solve public problems in creative ways, for example, by sharing critical information and resources with one another (Bardach 1998; O'Toole 1997). In exercising discretion, they may run afoul of regulations and controls designed to ensure their actions are accountable to their overseers, including elected officials and citizens. Thus, providing flexibility to permit collaboration while also ensuring accountability is a critical challenge for the field of public administration (Wondolleck and Yaffee 2000; Behn 2001b).

While collaboration offers the ability to increase government's responsiveness to diverse circumstances and changing conditions, it lends itself more to certain kinds of accountability relationships than to others. Because collaborators thrive on discretion, strong hierarchical or legal controls are likely to make their work considerably more difficult. Accountability relationships that rely on professional authority and political responsiveness can substitute, in part, for strong controls (Radin and Romzek 1996), but a certain degree of hierarchical or legal control remains desirable for many public tasks (Romzek and Dubnick 1987). In the delivery of human services, for example, due process rights for clients are a critical public value that is guaranteed by law and upheld by requiring service providers to follow certain procedures in working with clients. The discretion inherent in the practice of collaboration threatens these kinds of legal and hierarchical accountability relationships. In keeping with the logic of the New Public Management movement (Kettl 2000), many collaborators have proposed instead that their overseers hold them accountable for producing results rather than for complying with procedures (Behn 2001b).

Holding collaborators accountable for results is challenging in practice, however, for several reasons. First, reasonable people may disagree about which results to measure, and appropriate data can be difficult to track. Second, some collaborators may resist being held accountable for results, fearing they will not perform well, either because they doubt their own capacity, or because circumstances beyond their control may influence the results they are asked to achieve. Third, measuring particular results may focus implementation efforts so narrowly that desirable policy goals that are harder to measure are displaced, for instance, when teachers "teach to the test" and neglect other important educational aims. Fourth, accountability for results requires not just new management tools, but a "complete mental reorientation" on the part of public managers, their authorizers and stakeholders, their staff and collaborators, and citizens themselves (Behn 2001a). Finally, holding a collaborative accountable for results requires clarification as to who should be accountable to whom and for what results, both among the collaborative partners themselves, and between the collaborative and its external overseers and stakeholders (Bardach and Lesser 1996).

Despite these challenges, many policy makers have embraced the idea of accountability for results for collaboratives as well as single agencies, as the federal Government Performance and Results Act and recent state initiatives illustrate (Radin 1998; Liner et al. 2001). Some collaborators have even developed ways to demonstrate their joint capacity to achieve results and to induce each other to fulfill their mutual obligations. Recent scholarship suggests that collaborators can cultivate "partner accountability" (Bardach and Lesser 1996), "learning through experimentation" (Weber 1999), and "360-degree accountability for performance" (Behn 2001b).

To help assess and advance these emerging developments, this article explores ways to measure a collaborative's accountability for results.1 The first section scans the public administration literature to identify four approaches that collaborators can use to enhance their capacity for accountability. The second section proposes measures for each approach that practitioners and scholars can use to assess that capacity. The article goes on to use these measures to compare the accountability of collaboratives in 10 states that have promoted interagency collaboration to improve human services. After introducing those states' human services reforms and describing my research methods, I use the measures of accountability developed in the article to compare the accountability of the 10 states' human services collaboratives. I then analyze the findings from this comparison and conclude with a discussion of the implications for research and practice.

Accountability for Results and Interagency Collaboratives

Recent scholarship in public administration distinguishes four types of accountability relationships that any public agency or program, including a collaborative, may be subject to: legal, hierarchical, political, and professional. These relationships differ based on the source (internal or external) and degree (high or low) of control they involve. Legal accountability relationships entail a high degree of control that is external to an agency or program, such as constraints created by laws that mandate the agency or program to undertake certain activities. Hierarchical relationships entail a high degree of internal control, such as the guidance provided by administrative rules. Political relationships entail a low degree of external control, such as the pressures on an agency or program that come from outside stakeholders. Finally, professional relationships entail a low degree of internal control, as may be fostered by the peer expectations of colleagues on the job (Romzek and Dubnick 1987; Radin and Romzek 1996).

Every agency or program features a mix of these different types of accountability relationships, but some types are better suited to certain types of programs or agencies than others (Romzek and Dubnick 1987). Most interagency collaboratives, for example, lack hierarchy and formal authority (Bardach 1998), and individual "accountability holders" and "accountability holdees" (Behn 2001b) are difficult to pinpoint.2 Because collaborators often need discretion to do their work, collaboratives are likely to be more effective under low-control accountability relationships based on professional norms and politics than under high-control relationships that employ legal or hierarchical authority (Radin and Romzek 1996). An accountable collaborative, therefore, must work to uphold key political relationships and professional norms. To do so, it needs to manage expectations and respond to the demands of its internal and external stakeholders (Romzek 1996).

Given the heightened concerns of citizens and elected officials about the performance of public programs (Peters 1996; Osborne and Plastrik 1997), these expectations and demands are likely to center on the achievement of broad results that matter to the general public. In granting funding and authority to an agency or a collaborative, for example, administrative and elected overseers are likely to insist that it clarify its contributions to public life and document its progress toward those contributions. By improving the quality of information that is available to the public about its performance, an agency or collaborative can render itself more accountable to its overseers, as well as to the consumers of its services and products (Gormley and Weimer 1999). Any accountable agency or collaborative, therefore, will need to develop ways to measure and track the results it produces.

The literature on performance management, however, recommends that organizations do more than simply collect data about their performance. It also indicates they need to build robust processes for understanding and using data strategically to improve their performance. An accountable collaborative, therefore, needs a measurement system to document its results and how those results change over time. It also needs a "managing for results system" that links the data it measures to specific actors and interventions, that provides critical performance information to its stakeholders, and that uses the information to improve its operations in the future (Hatry 1999; Ingraham and Moynihan 2001).


In combination, the public administration research on accountability and performance management suggests that, to be accountable for results, collaboratives need strong relationships with key political and professional constituencies as well as the capacity to measure results and use that information strategically to improve performance. Using Bardach's concept of interagency collaborative capacity (1998, 2001), we might think of a collaborative's capacity for accountability as resting on key "platforms," each of which rests, in turn, on other collaborative capacities. The collaborators themselves construct these platforms, aided or hindered by opportunities and exigencies in their authorizing environment, and each contributes in different measure to the collaborative's capacity for accountability. Accountability platforms, finally, contribute to overall management capacity, which itself is a platform for strong performance (Ingraham and Donohue 2000).

This formulation raises critical questions: What core platforms of accountability enable a collaborative to manage expectations and respond to the demands of its external and internal stakeholders, measure the results it produces, and use information about results to improve its performance? Are some platforms of accountability more important than others for collaborative performance? Answering these questions requires clear definitions of the platforms and practical ways to measure them.

Accountability for Results in Collaboratives: Platforms and Measures

The literature reviewed in the previous section suggests that a collaborative's capacity for accountability rests on four platforms:
* External authorization: the capacity to manage expectations and to respond to the demands of political stakeholders
* Internal inclusion: the capacity to manage expectations and to respond to the demands of professional colleagues and collaborative partners
* Results measurement: the capacity to identify the collaborative's mission, goals, and indicators of progress, and to track data that document changes in progress over time
* Managing for results: the capacity to use data about results strategically to assess progress and to improve policies and operations in the future.3

This section defines these platforms, explains how each can enhance collaborators' accountability for results, and proposes different ways to measure their presence on the ground. Table 1 summarizes the measures of each platform. While the measures are exploratory and require further investigation and validation, the analysis that follows demonstrates how they provide a useful conceptual base for comparing the accountability of collaboratives.

Table 1 Platforms of Accountability for Results: Measures for Interagency Collaboratives

External authorization:
* Content and consistency of legislation, executive orders, and administrative rules affecting the collaborative.
* Increases in collaborative discretion, resources, and support secured through performance improvements and outreach.

Internal inclusion:
* Formal representation of diverse stakeholders in the membership.
* Consideration of alternative viewpoints and voices in collaborative deliberations.
* Strong working relationships among collaborating agencies.

Results measurement:
* Extent and sophistication of data tracking, including:
  - Alignment among mission, goals, and indicators
  - Broad consultation in designing goals and indicators
  - Measurement of a balanced array of results that reflect stretch goals
  - Tight links between indicators and the actors responsible for achieving them
  - Cross-year comparisons of results achieved.

Managing for results:
* Extent of publicity of results achieved, coupled with official praise or shame appropriate to collaborative achievements.
* Systematic use of measurement data to adjust operations to enhance future performance.
* Structure and enforcement of rewards and sanctions for results achieved, affecting collaboratives' finances, discretion, technical assistance, and external oversight.

External Authorization

Studies of public managers who successfully pursue policy or management innovations indicate they manage expectations and respond to the demands of their stakeholders by publicly declaring their aims and actively seeking support for their efforts. By including a variety of stakeholders in the planning and implementation of an innovation, entrepreneurial managers can help their overseers understand and endorse their endeavors and build a constituency for their agenda within the organization and among the citizenry (Moore 1995; Kearns 1996). Involving a range of affected interests in deliberations about policy development can improve the quality of decisions and build a broad base of support for the policies that eventually emerge (Reich 1990; Roberts 1997).

Collaborators can use similar techniques to secure formal authorization and informal support for their work from the public, their overseers, and other stakeholders, including potential partners and peer collaboratives. Authorization may take the form of "charter agency" status or a "performance partnership" agreement, in which the collaborative receives more discretion when it demonstrates improvements in results (Osborne and Plastrik 1997; Behn 2001b). Support may consist of active commitments by overseers and stakeholders to sustain the political legitimacy and resources that a collaborative needs to operate effectively.

One measure of a collaborative's external authorization is the nature and consistency of the legislation, executive orders, and administrative rules that affect it. For instance, do the statutes grant the collaborative legitimate public auspices? Do they provide the collaborative with clear purposes, flexibility in pursuing them, and opportunities to take on new responsibilities over time? If multiple statutes exist, do they build on one another over time, or are they contradictory?

Another measure is the extent to which collaborators secure recognition, support, and new partners through consistent improvements in performance coupled with attempts to garner additional discretion, resources, and participants (Behn 2001b; Bardach 1998). For example, do overseers allow successful collaboratives to expand the scope of their activities? Can collaboratives request regulatory waivers and flexible funding to support their efforts?

Internal Inclusion

Many effective organizational leaders manage expectations and respond to the demands of the professionals and other staff in their agencies by laying out a clear vision and concrete goals (Behn 1991; Moore 1995). Some also use participatory decision processes, consultation, empowerment, learning teams, and other strategies to engage their staff in helping to govern and improve their organizations (Senge 1990; Bolman and Deal 1997). Collaboratives in particular can benefit from such inclusive strategies because collaborative partners often bring with them political, financial, and in-kind resources, as well as ideas and expertise. By including a wide variety of participants in their planning and implementation efforts, collaboratives can supplement accountability to rules with accountability to multiple publics (Feldman and Khademian 2000).

A collaborative with many partners, though, faces severe challenges to collective action; making decisions and coordinating activities can become extremely complicated (Bardach 1998; Scharpf 1993). To overcome these challenges, the members of a collaborative can enforce "partner accountability" by inducing one another to contribute constructively to planning and implementation through the use of norms, voice, and threats to exit or to withhold key resources. When inclusion and partner accountability work effectively, they can reduce conflicts and the cost of coordinating decisions and actions among collaborators (Bardach and Lesser 1996).

To achieve these benefits, collaborators must be able to communicate quickly and easily with one another. Just as multiple communication channels are essential to the operating capacity of a hierarchical organization (Downs 1967), so collaborators need to develop ties to one another that enable them to transact business smoothly inside and outside their formal meetings. Knowing whom to contact in other partner organizations for ideas or information enhances collaborators' ability to solve problems, and hence their capacity to improve results.

Measures of the various dimensions of internal inclusion may assess the representation of diverse stakeholders and alternative perspectives in a collaborative's membership, as well as how often the collaborators heed alternative views or consider underrepresented voices during their deliberations.4 Another important measure might be the quality of communication among the collaborating organizations.

Results Measurement

As the performance-management literature suggests, an organization seeking to demonstrate its accountability will benefit from being able to track data about its progress toward its mission, and by making those data available to its external stakeholders, customers, and internal staff (Kaplan and Norton 1992; Gormley and Weimer 1999). Collaboratives can use the same approach to accountability, tracking the results they produce to allay fears they are exercising discretion for unintended purposes (Bardach and Lesser 1996).

Because measurement systems help to shape the definition and public understanding of an organization's or a collaborative's performance, they must be designed carefully, in consultation with staff, partners, and stakeholders. Research suggests that effective measurement systems feature quantifiable goals that reflect medium-term outputs and outcomes of critical elements of missions and strategic plans; are broken down into lower-level indicators; and are linked to a responsible actor or actors (Ingraham and Moynihan 2001; Hatry 1999).

Measurement may, of course, produce undesirable behavior if staff work exclusively toward performance targets at the expense of larger policy aims. For this reason, measuring a variety of results in different ways is often desirable (Gormley and Weimer 1999). Long lists of performance measures, however, may complicate data tracking and interpretation, so grouping measures into clusters or prioritizing key measures for immediate action is often helpful. Balanced scorecards, for example, track organizational progress on multiple dimensions of performance such as financial well-being, customer service, internal systems, and innovation and learning (Kaplan and Norton 1992).

Tracking results consistently over time can provide collaborators and their authorizers with valuable baseline information, and it can permit the design of ambitious but realistic targets (stretch goals) that take into account background and environmental conditions. Establishing target goals without reference to past performance and environmental conditions, by contrast, may make them too easy or too difficult to achieve and limit their legitimacy in the eyes of staff and stakeholders (Moynihan 2001). In inhospitable environments, for example, overall performance may be expected to decline severely without diligent effort, so even ambitious targets may need to be set below previous levels of performance (Friedman 1996; Hatry 1999; Moynihan 2001).

Measures of a collaborative's capacity to measure results may assess the extent and sophistication of data tracking using the criteria for effective measurement of results just outlined: alignment among missions, goals, and indicators; broad consultation in the selection of goals and indicators; measurement of a balanced array of results; tight links between measures and the actors responsible for achieving them; and the ability to compare data from one year to the next.

Managing for Results

An organization's results-measurement system is likely to have only a limited impact on its performance and accountability unless stakeholders, managers, staff, and customers are able to use the data it tracks to improve their decision making. In particular, "The effective use of information [about results] depends on the quality of its distribution, and participants' incentives to use it" (Ingraham and Moynihan 2001, 321).

When data about the results an organization or collaborative has achieved are broadly available, stakeholders can make informed decisions about whether and how to continue to authorize and support it, staff are better able to understand how managers view their performance, and customers have a reasonable basis for deciding whether to continue to patronize the organization. Perhaps most importantly, managers and staff know the outcomes they are producing are open to public scrutiny, and continued political and financial support for their efforts depends at least partly on their ability to demonstrate improvement. Transparent information about results, then, can motivate performance and improve the quality of policy, management, and purchasing decisions, even in the absence of formal incentive systems (Gormley and Weimer 1999).

When the achievement of indicators in a results-measurement system is directly linked to specific actors, the likelihood of improvement in performance increases further. Managers can gain a clearer understanding of which divisions, work processes, or staff are performing effectively and which require attention or intervention, and they can adjust responsibilities and operations accordingly (Hatry 1999; Ingraham and Moynihan 2001). The partners and overseers of a collaborative can assess each other's performance more readily and exchange information about challenges and promising practices, thereby increasing the collaborative's capacity to learn and improve performance continuously (Bardach and Lesser 1996; Bardach 1998).

Formal rewards for actors who achieve their target goals can create even more concrete motivation for managers and staff to strive for improvement (Behn 1991; Osborne and Plastrik 1997). A common way to create formal incentives is through performance contracts for top managers that specify targets for improvement and offer financial rewards for their achievement (Kettl 2000). Managers, in turn, can design strategies for identifying problematic and successful performance within their agencies and penalize and reward their staff accordingly. Following this logic, the overseers of a collaborative might create a performance contract for the collaborative, and the collaborators could design similar, albeit elaborate, systems for allocating and sharing consequences for weak and strong performance among themselves (Behn 2001b).

In light of this discussion, one measure of a collaborative's capacity to manage for results may be the extent to which results are published, coupled with praise and nurturance for improvements in performance or shame and humiliation for poor performance (Behn 1991; Buntin 1999). Another key measure may be how effectively the collaborators use measurement data to adjust their strategies and operations to enhance their joint performance. Additional measures may focus on the establishment of financial incentives for improved results and the imposition of financial penalties for failure to improve (for instance, the loss of a contract to provide public services). Still other measures might track the availability of softer incentives, such as customized technical assistance or grants of increased discretion and authority from overseers, as well as softer sanctions, such as closer oversight.

State-Sponsored Human Services Collaboratives

Examining how these four platforms have emerged in specific policy domains can illuminate how they might contribute in practice to collaboratives' capacity for accountability. The article, therefore, turns to look at emerging state-sponsored human services reforms that encourage interagency collaboration as well as accountability for results.

Since the late 1980s, a number of states have sought to improve the effectiveness of human services by fostering collaboration among public and nongovernmental agencies that offer children's mental health, child welfare, juvenile justice, child care, job training and placement, and other services for children and families (Knitzer 1997; Schorr 1997; Waldfogel 1997). Traditionally, state and local human services systems have suffered from fragmented, disjointed programming; uncoordinated services offered by narrowly specialized providers; and rigid organizational structures and procedures. All of these features hinder responsiveness to the needs of individual families (Schorr 1988; Gardner 1994; Yessian 1995). By co-locating staff, creating teams that represent multiple programs, and encouraging staff to work with families as partners rather than as clients, however, a number of pioneering states have made services more comprehensive and accessible for families in recent years (O'Looney 1996; Ragan 2002).

To support these changes at the front lines, the innovating states have also made changes in programs and policies. Specifically, they have authorized local collaboratives of public agency directors, nongovernmental service providers, service consumers, civic leaders, and other community members to design and implement comprehensive service plans that reflect community priorities. To support these collaboratives' efforts, interagency teams of state officials offer flexible funding, regulatory relief, and technical assistance. In exchange, the local collaboratives commit to measuring and improving outcomes for children and families (for instance, the percentages of parents working, children entering school ready to learn, youth graduating from high school, etc.).

These reforms reflect New Public Management principles in that they grant local collaborators the discretion to craft flexible interventions for families while holding the collaborators accountable for the results they achieve. In the language favored by some human services analysts, states are encouraging collaboration around clients, programs, and policies, while organizational structures remain more or less the same (Morrill 1996; Waldfogel 1997).5

Research Methods

The remainder of the article uses the measures previously identified to compare local human services collaboratives' capacity for accountability in a sample of 10 states over the past decade: Georgia, Iowa, Maryland, Minnesota, Missouri, North Carolina, Ohio, Oregon, Vermont, and Washington. Table 2 identifies these states' collaborative initiatives.

While a number of other states have sponsored collaborative reforms to improve human services in recent years, the initiatives in these 10 states are especially ambitious. They seek to change service delivery, governance, financing, and accountability across a variety of agencies and programs that serve children and families. They have existed for close to a decade or more, which permits a developmental assessment of their efforts to develop local collaboratives' capacity for accountability over time.6

I have conducted ongoing research on the design and implementation of these states' reforms since 1995. I elected to study them based on the results of a national survey of state initiatives for children and families, in which officials in these states articulated elaborate visions for collaborative reforms of their systems of human services (Knitzer and Page 1996).

Table 2 Selected State-Sponsored Human Services Collaboratives

Georgia: Family Connection/Community Partnerships. Origin: 1991 legislation. Target population: all children and families.

Iowa: (1) Decategorization; (2) Community Empowerment Boards. Origins: (1) 1987 legislation; (2) 1998 legislation. Target populations: (1) "deep end" children and families; (2) young children and families.

Maryland: Community Partnerships (formerly Local Management Boards). Origin: 1989 legislation. Target population: began with "deep end" children and families; now focuses on all children and families.

Minnesota: Family Service Collaboratives. Origin: 1993 legislation. Target population: all children and families.

Missouri: Caring Communities/Community Partnerships. Origin: 1989 state agency discussions; 1993 executive order. Target population: children and families in disadvantaged school neighborhoods.

North Carolina: Smart Start. Origin: 1993 legislation. Target population: young children and families.

Ohio: Family and Children First. Origin: 1992 executive order; various legislative authorizations. Target population: all children and families.

Oregon: (1) Commissions on Children and Families; (2) DHS services integration projects. Origins: (1) 1994 legislation; (2) 1991 state agency initiative. Target populations: (1) all children and families; (2) DHS clients ("deep end" children and families; elderly and disabled adults).

Vermont: Community Partnerships/Success by Six. Origin: 1990 state agency discussions; various subsequent legislative authorizations. Target population: all children and families.

Washington: Community Health and Safety Networks. Origin: 1989 state agency discussions; various subsequent legislative authorizations. Target population: all children and families.

Since its inception, my research has entailed more than 170 semistructured interviews with the practitioners and analysts responsible for the initiatives in the 10 states in the sample. My informants in each state included the following:
* Governor's policy advisor on issues related to children and families
* Commissioners, assistant commissioners, or collaborative liaisons in key state agencies involved in the state's collaborative initiative (social services, education, public health, and sometimes labor and economic development)
* Staff of the state interagency team
* Staff of local collaboratives in three to five communities
* Participants from key collaborating agencies in those same communities.

I also gained insights into individual states' initiatives as well as a comparative perspective on them by interviewing program directors and researchers in intermediary organizations that help states and communities foster collaboration for children and families (such as the Annie E. Casey Foundation, Center for Child and Family Policy, Center for the Study of Social Policy, Council of Governors' Policy Advisors at the National Governors Association, Finance Project, Institute for Educational Leadership, National Civic League's Program for Community Problem Solving, National Center for Service Integration, and the David and Lucile Packard Foundation). To supplement the interviews, I analyzed the contents of hundreds of planning documents, process and outcome evaluations, and other reports from the states, their consultants, and the intermediary organizations that support their efforts.7

The interviews and content analyses examined the aims and accomplishments of each state's reforms. Specifically, they focused on the following:
* The origins and focus of collaboration for children and families in the state: Where did the idea come from? What services, populations, and agencies are included? What do the participants hope to accomplish through collaboration?
* The institutional architecture of collaboration: Who participates? Under what ground rules? How much authority do the local collaboratives have over programming and funding decisions? How much guidance and oversight does the state provide? How have these elements changed over time?
* The processes used to build and sustain this architecture: Which participants, communities, and provisions emerged first? When, how, and why did others join them?
* The overall capacity of the collaborative initiative: How robust is it in terms of participants' commitment and institutional staying power? What changes in services and outcomes has it achieved?
* Questions related to the measures of external authorization, internal inclusion, results measurement, and managing for results outlined previously and in table 1.

Using data from the interviews and content analyses, I developed indicators and criteria of accountability for results and used them to construct preliminary rankings of the collaboratives in the sample states. I shared these rankings with five experienced analysts who have tracked the states' initiatives from the beginning and then revised the indicators, criteria, and rankings based on their comments.

Accountability in Human Services Collaboratives: Indicators and Findings

To measure the development of the four accountability platforms identified in the 10 sample states, this section proposes indicators that capture the presence of each platform in human services collaboratives. The aim here is to enable outside analysts who cannot spend extensive time observing human services collaborators in action to assess and compare different collaboratives' capacities for accountability. The list of indicators for each platform is followed by a description of how those indicators cluster in the sample states, permitting me to rank each state's construction of each platform on a high/medium/low scale. Table 3 summarizes the indicators, the criteria for ranking the states, and the rankings themselves.
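For readers who find it helpful to see the ranking logic spelled out, the short sketch below illustrates how the high/medium/low scale can be computed from indicator counts. It is only an illustration of the scheme as summarized in table 3, using the external authorization thresholds; the function name and the sample counts are hypothetical, not part of the article's instrument.

```python
# Illustrative sketch of the high/medium/low ranking scheme in table 3,
# using the external authorization platform (four indicators; high = all
# four present, medium = two or three, low = one). Names and sample data
# are hypothetical, for illustration only.

def rank_external_authorization(indicators_present: int) -> str:
    """Map the number of indicators a state exhibits to a ranking."""
    if indicators_present == 4:
        return "high"
    if indicators_present in (2, 3):
        return "medium"
    return "low"

# Example counts consistent with the findings reported below:
# Georgia exhibits all four indicators, Iowa three, Minnesota one.
for state, count in [("Georgia", 4), ("Iowa", 3), ("Minnesota", 1)]:
    print(f"{state}: {rank_external_authorization(count)}")
```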

External Authorization

A human services collaborative enjoys strong external authorization when public policies clearly outline the types of links that are desired among the partners (Yessian 1995). External authorization is strengthened when the collaborative partners and their overseers actively support the collaborative's undertakings. It can also increase over time as the partners and their overseers gain experience working with one another by sharing information, exchanging views, and solving problems jointly (Kagan et al. 1995; Yessian 1995; Bardach and Lesser 1996). State-sponsored human services collaboratives, therefore, are likely to be more accountable for results if the state takes the following actions:
1. Develops internally consistent and supportive statutory frameworks that designate a single local collaborative in each community, as well as a core state team to set policy directions, oversee local collaborative efforts, and coordinate state agency activities and resources to support them
2. Expands the scale and scope of collaboration over time (for instance, by authorizing a few local collaboratives at first and then expanding to new sites over time, and by gradually adding new partners, resources, service interventions, and target populations to their efforts)
3. Allows local collaboratives to request waivers of funding requirements, regulations, and other operating procedures that constrain their ability to achieve their aims
4. Seriously considers and adopts those waiver requests that appear justified.

Table 3 State-Sponsored Human Services Collaboratives: Indicators and Rankings of the Construction of Accountability Platforms

External authorization
Indicators:
1. Consistent and supportive statutory framework for interagency collaboration at the state and local levels.
2. Expansion of the scale and scope of state and local collaboration over time.
3. Local collaboratives can request fiscal, regulatory, and other changes in state policies and operations.
4. State officials seriously consider and adopt justified local requests.
Ranking criteria: High: all four indicators are present (GA, VT). Medium: two or three of the indicators are present (IA, MD, MO, NC, OH). Low: one of the indicators is present (MN, OR, WA).

Internal inclusion
Indicators:
1. Formal partnership at the state level among a broad range of public agencies, representatives of local collaboratives, and civic leaders.
2. Local collaboratives include public agencies, nongovernmental service providers, service recipients, business representatives, and community organizations.
3. Collaborators have strong working relationships across agencies.
Ranking criteria: High: all three indicators are present (MO, VT). Medium: two of the indicators are present (GA, IA, MD, NC, OH, OR). Low: one or none of the indicators is present (MN, WA).

Results measurement
Indicators:
1. State and local collaboratives have agreed on a short list of core results and measurable indicators that reflect multiple dimensions of the well-being of children and families.
2. Regular reports track changes in indicators of child and family well-being.
3. Local collaboratives prioritize particular indicators for improvement in light of recent trends, and devise joint plans to achieve them.
Ranking criteria: High: all three indicators are present (GA, MD, MO, VT). Medium: indicators 2 and 3 are present; reports focus on specific indicators (IA, NC, OR). Low: only one indicator is present (MN, OH, WA).

Managing for results
Indicators:
1. Data linked to local collaboratives are publicized and compared across the state.
2. State officials use data to identify promising local practices, and offer customized assistance to improve local collaboratives' performance.
3. Share-in-savings reinvestment opportunities are available to local collaboratives.
4. Partnership agreements permit local collaboratives to assume increased responsibility if they improve performance.
Ranking criteria: High: two or three of the indicators are present (GA, IA, MD, VT). Medium: only indicator 2 is present (MO, NC, OH, OR). Low: no indicators are present (MN, WA).

Two of the states in the sample, Georgia and Vermont, feature all four of these indicators. These states have designated a single local collaborative in each community that is accountable for improving results, and they have established a team at the state level that can negotiate and execute the terms of an accountability agreement with the local collaboratives. Since Georgia and Vermont began promoting community collaboration, the number of local collaboratives has increased to cover the entire state, and the collaboratives have expanded their work to include new partners, interventions, and target populations. Both states also permit, consider, and adopt local requests for waivers and informal changes in state policies and practices.8 As a result, these two states rank high in their construction of the platform of external authorization.

Five states (Iowa, Maryland, Missouri, North Carolina, and Ohio) each feature two or three of the four indicators of external authorization. Iowa permits and actively responds to local waiver requests (indicators 3 and 4) and has gradually expanded the scale and scope of collaboration (indicator 2), but its statutory framework regarding collaboration (indicator 1) is ambiguous: State policy authorizes multiple local collaboratives to serve different populations of children and families, and local efforts to link the different collaboratives are still unfolding. The collaborative initiatives in Maryland and Ohio enjoy clear and consistent statutory frameworks, have gradually expanded their scale and scope, and formally permit local waivers. Both states have received and adopted relatively few requests, however, suggesting that communication and cooperation between the state and local levels may not be strong. Missouri permits waivers and has gradually expanded the number of local collaboratives, but its statutory framework is established only by executive order, and several attempts to codify it in legislation have failed. North Carolina permits local waivers and has codified collaboration in clear and consistent legislation, but the scope of its collaborative efforts has not expanded beyond the initial target population, young children ages birth to six and their families. For these reasons, these five states receive a medium ranking for their construction of external authorization.

The remaining three states in the sample (Minnesota, Oregon, and Washington) feature only one of the four indicators of external authorization. The collaborative initiatives in these states have expanded in only limited ways since their inception, and a variety of statutes authorize separate collaboratives to serve children and families in ways that overlap and sometimes conflict. While all three states formally permit local requests for waivers, they receive and adopt few, if any.9 As a result, these states rank low in their construction of external authorization.

Internal Inclusion

An internally inclusive human services collaborative is likely to be characterized by a broad-based membership and a commitment among the partners to communicate and work together effectively (Bardach and Lesser 1996; Bardach 1998). These characteristics can enhance efforts to track and improve results, for example, by helping collaborators agree on shared operational goals and measurable results that are important to a wide range of stakeholders (Weiner 1990; Agranoff 1991; Yessian 1995). Specifically, state-sponsored human services collaboratives are likely to be more accountable for results if the following occur:
1. A formal state-level partnership exists that includes a wide array of agencies that serve (or affect the well-being of) children and families, along with civic leaders and representatives from the local collaboratives.
2. Local collaboratives include a broad range of participants, including officials from multiple public agencies, representatives from nongovernmental human services organizations, service recipients, and business and community leaders.
3. The collaborating agencies have created strong working relationships with one another.10

Only two of the states in the sample, Missouri and Vermont, feature all three of these indicators. Both have established broadly inclusive state teams and local collaboratives whose members enjoy strong working relationships. As a consequence, they rank high in their construction of the platform of internal inclusion.

Georgia, Iowa, Maryland, North Carolina, Ohio, and Oregon each feature two of the three indicators of internal inclusion, though they vary in terms of which ones. Maryland, Ohio, and Oregon have broad-based state teams and local collaboratives, but they face difficulties coordinating action across agencies at the local and especially the state levels because of interagency tensions and divergent priorities. Georgia and North Carolina enjoy inclusive local collaboratives with strong working relationships, offset by state governance arrangements that rely primarily on a single agency and a public-private partnership, with limited collaboration across agencies at the state level. Iowa enjoys reasonably solid working relationships among its partner agencies, but it is still working to coordinate the memberships and activities of two distinct collaborative initiatives that target different populations of children and families. Because of the mix of characteristics of internal inclusion present in this group of states, they all receive a medium ranking for their construction of this platform.

The remaining two states, Minnesota and Washington, feature only one of the indicators of internal inclusion. Minnesota's state governance arrangements rely on a single agency, and the inclusiveness and working relationships of its local collaboratives vary widely. In Washington, meanwhile, the state and local collaboratives are somewhat inclusive, but the relationships among the participating public agencies are tenuous at best. These two states rank low in their construction of internal inclusion.

Results Measurement

A human services collaborative's capacity to measure results depends on agreement among the collaborators and their authorizers about what to measure, as well as its ability to track changes in the results and to use the findings to identify priorities for improvement in the future. Joint agreement on measurable results can foster common ground among collaborators, enhance the credibility of their initiatives, and enable them to refine their aims and strategies as performance data become available (Yessian 1995, 38). Gathering data on client outcomes is especially important for understanding the performance of a human services collaborative (Kagan 1993; Yessian 1995). State-sponsored human services collaboratives, therefore, are likely to be more accountable for results if the state has the following characteristics:
1. The state has reached agreement with the local collaboratives on a short list of core results and data indicators that reflect multiple dimensions of the well-being of children and families (that is, a balanced scorecard of sorts).
2. The state has generated thorough and credible data reports that measure changes in those indicators in local communities over time.
3. The state has asked the local collaboratives to target particular indicators for improvement based on recent trends in the data and to devise joint projects to achieve them.

Of the states in the sample, Georgia, Maryland, Missouri, and Vermont feature all three of these indicators. These states have identified broad outcomes to strive for and corresponding data points to measure; local collaborators prioritize specific outcomes and data points for improvement; and the states regularly report changes in the data at the community level.11 Hence, these four states rank high in their construction of the platform of results measurement.

Iowa, North Carolina, and Oregon feature the second and third indicators, but their data reports focus on specific indicators that do not directly capture broader core results for children and families. Iowa tracks the number of out-of-home placements of children in foster and institutional care. North Carolina measures young children's immunization rates and readiness to enter kindergarten. Oregon's benchmarks system tracks multiple indicators that measure the well-being of children and families in each county, but the state and the local collaboratives have had difficulty agreeing which indicators capture broad improvements in child and family outcomes. These states, therefore, receive a medium ranking for their construction of results measurement.

Minnesota, Ohio, and Washington feature only one of the indicators of the results measurement platform. Minnesota's "milestones" system functions much like Oregon's system, but state officials and local collaborators have made limited efforts to target specific indicators for improvement. Washington's measurement attempts have suffered from disputes about which indicators to measure, as well as weak efforts to track and analyze data. Ohio has only recently agreed on core results to track. As a result, these states rank low in their construction of results measurement.

Managing for Results

A human services collaborative's capacity to "manage for results" depends on its ability to use data from its results-measurement system to adjust its strategies and operations to improve performance. A state that sponsors human services collaboratives can foster this capacity by providing the collaboratives with three things: transparent data about their own performance, technical support, and incentives to use the performance data strategically. Transparent data can enable human services collaborators to gauge each other's performance and generate peer pressure for improvements (Behn 1991). Technical support from the state may take the form of benchmarking best practices, troubleshooting and other forms of customized assistance, and creating processes for sharing information across local collaboratives (Waldfogel 1997). Incentives may include rewards for collaboratives that achieve measurable improvements, both "hard" financial bonuses and "soft" grants of increased discretion (Behn 2001b). Hence, state-sponsored human services collaboratives are likely to be more accountable for results if the state offers the following to local collaboratives:
1. Extensive publicity of data on results that are directly linked to individual local collaboratives
2. Benchmarked information about best practices in collaborative processes and service delivery, and technical assistance on issues of particular local concern
3. Financial incentives in the form of opportunities to reinvest at least some of the money they save by designing more effective services to assist children and families
4. Opportunities to take on more responsibility commensurate with their experience and capacity through partnership agreements that grant increased discretion in exchange for commitments to improve results.

Four of the states in the sample (Georgia, Iowa, Maryland, and Vermont) have developed two or three of these indicators. Georgia and Vermont transparently compare changes in the core results in communities across the state, through statewide reports, at meetings and conferences, on television, and on the Internet. The aim is to encourage local collaborators to ask themselves why their results and indicators look the way they do, and then to figure out how to improve them. Both states also offer benchmarking, technical assistance, and partnership agreements to aid and induce the local collaboratives to improve their performance. Iowa and Maryland provide benchmarking and technical assistance, and grant local collaboratives a share of the savings they generate by serving children and families effectively. Iowa allows its local collaboratives to keep the funds they save by redirecting children from out-of-state foster care or juvenile institutions to community-based placements or intensive family therapy. The collaboratives can use their savings to fund preventive services for children and families. Maryland has negotiated and signed customized partnership agreements with each of its local collaboratives which outline clear performance targets and the financial benefits the collaborative will reap if it achieves the targets. These four states rank high in their construction of the platform of managing for results.

Missouri, North Carolina, Ohio, and Oregon each feature only the second indicator of managing for results, benchmarking and technical assistance, so they receive a medium ranking. Some of these states show signs of developing some of the other indicators, but they have not yet realized them fully. The Oregon benchmarks reports, for example, offer a vehicle for publicizing transparent data about performance, but they do not link data directly to the activities of the local collaboratives. As a result, the local collaboratives have had difficulty determining which of their projects are influencing specific indicators in their particular communities, and attempts to report aggregate results in each county have led to disagreements about the credibility of the data. In North Carolina, meanwhile, the Partnership for Children aggressively publicizes the aggregate improvements associated with collaboration across the state in order to increase support for collaboration among policy makers and the public. However, the state has only recently begun to contemplate incentives for improvements in the performance of individual local collaboratives through devices such as performance contracts or transparent comparisons of local data.

The collaborative initiatives in Minnesota and Washington, finally, feature none of the indicators of managing for results. At most, they offer a limited menu of promising practices and technical assistance to their local collaboratives. Therefore, they rank low in their construction of managing for results.
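Read together, these rankings imply a simple scoring rubric for the managing-for-results platform. The article never states numeric cutoffs, so the sketch below infers them from the reported rankings (two or more indicators earns a high, exactly one a medium, none a low); the per-state indicator assignments are likewise my reading of the preceding paragraphs, not a coding supplied by the author.

```python
# Inferred rubric for the managing-for-results platform. Indicator numbers
# follow the list above: 1 = transparent publicity of linked results data,
# 2 = benchmarking and technical assistance, 3 = reinvestment of savings,
# 4 = partnership agreements granting increased discretion.
# Both the cutoffs and the per-state indicator sets are inferences.

STATE_INDICATORS = {
    "Georgia": {1, 2, 4}, "Vermont": {1, 2, 4},
    "Iowa": {2, 3}, "Maryland": {2, 3, 4},
    "Missouri": {2}, "North Carolina": {2}, "Ohio": {2}, "Oregon": {2},
    "Minnesota": set(), "Washington": set(),
}

def rank(indicators: set) -> str:
    """Map a state's set of indicators to a high/medium/low ranking."""
    if len(indicators) >= 2:
        return "High"
    if len(indicators) == 1:
        return "Medium"
    return "Low"

for state, inds in sorted(STATE_INDICATORS.items()):
    print(f"{state}: {rank(inds)}")  # reproduces the rankings in the text
```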

Discussion

These findings demonstrate that the measures and indicators developed previously can help analysts to compare the capacity of different interagency collaboratives to be accountable for results. In particular, the findings indicate that analysts can measure a collaborative's capacity to be accountable for results on a continuum. By distinguishing different platforms of accountability and proposing discrete indicators of their construction, the measurement scheme developed here permits distinctions among collaboratives based on the extent to which they have constructed each platform (the high/medium/low rankings). Some discussions of accountability, in contrast, treat it as an all-or-nothing condition, measurable in terms of its presence or absence (Weber 1999). The approach taken here implies instead that collaboratives can achieve different degrees of accountability for results.

If my research sample is any indication, collaboratives differ in the extent to which they have constructed the four accountability platforms. Table 4 lists the states in the sample in descending order of their construction of the four platforms in combination. At the top of table 4 is Vermont, which ranks high in the construction of all of the platforms. At the bottom are Minnesota and Washington, which rank low in their construction of all of the platforms. In between these extremes are states that present a mixed picture, ranging from Georgia, which has three high rankings and one medium ranking, to Ohio and Oregon, which have one low ranking and three medium rankings.

Table 4 State Rankings: Construction of the Four Accountability Platforms

State            External        Internal    Results        Managing
                 authorization   inclusion   measurement    for results
Vermont          High            High        High           High
Georgia          High            Medium      High           High
Maryland         Medium          Medium      High           High
Missouri         Medium          High        High           Medium
Iowa             Medium          Medium      Medium         High
North Carolina   Medium          Medium      Medium         Medium
Ohio             Medium          Medium      Low            Medium
Oregon           Low             Medium      Medium         Medium
Minnesota        Low             Low         Low            Low
Washington       Low             Low         Low            Low

Table 4 also shows that the accountability platforms cluster in the states in the sample. States that rank high in the construction of one or two platforms also rank high or medium in the construction of others, and states that rank low in the construction of some platforms also rank medium or low in the construction of others. No states rank high in the construction of some platforms and low in the construction of others.

To illustrate this clustering another way, figure 1 plots the correlations between all of the states' rankings on each platform and their rankings on each of the other platforms. The trend lines demonstrate that positive associations may exist among the construction of the different platforms in the states. The clustering of similar rankings in individual states raises two possibilities: (1) the construction of some accountability platforms may depend on the construction of others, and (2) collaboratives may use multiple platforms to hold themselves accountable.
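As a rough check on figure 1's message, the rankings in table 4 can be coded ordinally and correlated pairwise. The sketch below shows one plausible computation, not the author's actual procedure: the coding Low=0, Medium=1, High=2 and the choice of Spearman rank correlation are my assumptions; the rankings themselves are transcribed from table 4.

```python
# Sketch: pairwise rank correlations among the four platforms, using the
# rankings transcribed from table 4. The ordinal coding (Low=0, Medium=1,
# High=2) and the use of Spearman correlation are assumptions.
import pandas as pd

CODES = {"Low": 0, "Medium": 1, "High": 2}
PLATFORMS = ["External authorization", "Internal inclusion",
             "Results measurement", "Managing for results"]

table4 = {
    "Vermont":        ["High", "High", "High", "High"],
    "Georgia":        ["High", "Medium", "High", "High"],
    "Maryland":       ["Medium", "Medium", "High", "High"],
    "Missouri":       ["Medium", "High", "High", "Medium"],
    "Iowa":           ["Medium", "Medium", "Medium", "High"],
    "North Carolina": ["Medium", "Medium", "Medium", "Medium"],
    "Ohio":           ["Medium", "Medium", "Low", "Medium"],
    "Oregon":         ["Low", "Medium", "Medium", "Medium"],
    "Minnesota":      ["Low", "Low", "Low", "Low"],
    "Washington":     ["Low", "Low", "Low", "Low"],
}

df = pd.DataFrame.from_dict(table4, orient="index", columns=PLATFORMS)
coded = df.replace(CODES)  # map High/Medium/Low rankings to 2/1/0

# Positive off-diagonal entries would echo the upward trend lines in figure 1.
print(coded.corr(method="spearman").round(2))
```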

The first possibility, mutual dependence among the platforms, implies that closer study of the processes by which collaboratives construct the various platforms may reveal important insights. To manage for results by using data to guide strategy and improve performance, for example, collaborators first need a capacity for results measurement. Results measurement, in turn, may rest partly on external authorization and internal inclusion. The clear relationships and strong communication among overseers, stakeholders, and collaborators that characterize the latter two platforms may help collaborators to identify results and indicators to measure and to develop techniques for gathering and interpreting the data. Alternately or in addition, the processes involved in developing the capacity to measure results, such as agreeing on results and indicators to measure, may enhance external authorization and internal inclusion. In any case, given the interdependence of the four accountability platforms, a promising line of inquiry might examine questions of sequencing and synergy in their construction.

Figure 1 Combined Plot of All States' Ranks on All Platforms (scatter of rankings; both axes run from Low to High)

The second possibility, the deliberate use of multiple platforms, goes to the heart of what makes accountability distinctive in collaboratives compared to single agencies. Because of the limited impact of legal and hierarchical authority in collaborative settings, collaborators cannot demonstrate accountability by promising to abide strictly by formal laws or rules, because almost by definition they need discretion to carry out their missions (Radin and Romzek 1996; Behn 2001). To be accountable, nevertheless, they need some other way to manage the expectations of key internal and external stakeholders (Romzek 1996). In combination, the various platforms of accountability may provide multiple ways to manage these expectations in the absence of formal controls.

In fact, the four platforms may complement one another as tools for managing stakeholders' expectations. The transparent information provided by results measurement, for example, may make possible a retrospective accounting of accomplishments in exchange for the prospective grant of discretion that collaborators secure through external authorization. Internal inclusion, meanwhile, can help to build the relationships and communication necessary for collaborators to manage for results by enabling them to draw on a variety of perspectives and resources in interpreting and using data about their accomplishments to improve their performance.

If the foregoing speculations are on target, collaboratives that adopt all four accountability platforms are likely to be able to manage their stakeholders' expectations better than those that adopt only some of the platforms. Moreover, the concept of accountability for results implies that developing the capacity for accountability should correlate with improvements in results. If the four platforms examined here capture important dimensions of the capacity for accountability, then states with high rankings on all of the platforms should witness improvements in their outcomes, while the low-ranking states should not. Further research on recent changes in the states' outcomes, therefore, is critical. Key questions include the following: Does the construction of all four accountability platforms correlate with improvements in outcomes? Or is the construction of some platforms more closely associated with improved outcomes than the construction of others? Which platforms are most closely associated with improved outcomes?
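One way such a test might be set up, sketched under strong assumptions: code each platform's rankings ordinally (as above) and correlate them with a future measure of change in each state's outcomes. The helper function below and every outcome figure in the usage example are hypothetical placeholders; no outcome data are reported in this article.

```python
# Sketch of a future test: does a platform's ranking correlate with outcome
# change? spearmanr is SciPy's rank-correlation routine; the outcome_change
# values in the example are invented placeholders, not real data.
from scipy.stats import spearmanr

def platform_outcome_association(platform_ranks: dict,
                                 outcome_change: dict):
    """Spearman correlation between one platform's ordinal rankings
    (0=Low, 1=Medium, 2=High) and measured change in outcomes."""
    states = sorted(platform_ranks)
    rho, p = spearmanr([platform_ranks[s] for s in states],
                       [outcome_change[s] for s in states])
    return rho, p

# Hypothetical usage: managing-for-results ranks from table 4 for six
# states, paired with invented outcome changes for illustration only.
ranks = {"Vermont": 2, "Georgia": 2, "Iowa": 2, "Oregon": 1,
         "Minnesota": 0, "Washington": 0}
changes = {"Vermont": 0.05, "Georgia": 0.04, "Iowa": 0.03, "Oregon": 0.02,
           "Minnesota": 0.01, "Washington": 0.00}
print(platform_outcome_association(ranks, changes))
```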

Conclusion

A constructive approach to accountability in the public sector needs to focus on real results that are important to the public, not on requiring new processes or bureaucratic "hoop jumping" for their own sake (Behn 2001a). Interagency collaboratives are likely to be central to the current trend toward accountability for results because most results of any public significance are beyond the capacity of any single agency, whether public or nongovernmental, to achieve on its own (Behn 2001b). Attempts, however well-meaning, to hold individual agencies exclusively accountable for achieving broad outcomes (such as enhancing children's learning and development, protecting the environment, or improving national security) risk setting those agencies up for failure. Public expectations of government's ability to achieve such worthy purposes will plummet further, and calls may emerge for a retreat to a more traditional, process-based approach to accountability. Thus, collaboratives have a comparative advantage over even the most capable individual agencies in producing results that matter to the public.12

Given their key role in increasing government's accountability for results, collaborators need to develop their capacity to track and improve results appropriate to their work. The four platforms identified here (external authorization, internal inclusion, results measurement, and managing for results) can help them do so. Collaborators who construct the platforms carefully and systematically may be able to claim the discretion they need to do their work effectively and demonstrate the benefits of their efforts in the form of measurable improvements in results. Without forethought and consideration, however, attempts to hold collaborators accountable for results may impose new measurement and reporting demands that simply complicate their work and detract from their performance.

This article's comparison of the efforts of state-sponsored human services collaboratives to construct the four platforms indicates that collaboratives in different states have developed different capacities to be accountable for results. Additional research is necessary to determine whether these differences in capacity correlate with differential improvements in outcomes, or whether the platforms identified here reflect concepts in the literature on accountability without necessarily contributing to performance. Further study is also needed to validate the findings in the article and to assess the applicability of the four platforms and their indicators to collaboratives in other policy areas beyond human services.

Notes

1. I use the phrase "accountability for results" in this article instead of the more common term "performance management." Accountability for results focuses implementers on tracking and improving a few big-picture outcome measures, such as ensuring that all children succeed in school, rather than an array of narrower performance targets, such as student dropout rates, test scores, and the like. Clearly, effective performance management is integral to any system of accountability, but the latter seeks to produce broad outcomes that matter to the public, whereas the former often focuses on specific goals that are important to an agency and its immediate legislative and administrative authorizers.

2. Here I want to distinguish interagency collaboratives, in which authority is shared, from interorganizational contracting networks, which entail more straightforward accountability relationships. Much recent research on interorganizational networks by students of public administration has analyzed the challenges of coordination and accountability in contracting arrangements (Smith and Lipsky 1993; Milward and Provan 1998, 2000). Because the agency that issues a contract formally controls its design and can (at least in theory) cancel it, analysts can treat that agency as the accountability holder and the contractors as the accountability holdee (Behn 2001b). In an interagency collaborative, in contrast, no single agency is in charge and the very idea of central leadership is often suspect (Bardach 1998), making collective action and accountability more problematic (Bardach and Lesser 1996; Weber 1999).

3. As the discussion that follows makes clear, this formulation draws on Moore (1995), Romzek (1996), Bardach and Lesser (1996), Friedman (1996), Hatry (1999), Feldman and Khademian (2000), Ingraham and Donohue (2000), and Ingraham and Moynihan (2001), among others.

4. Human services collaborators, for example, might make it a point to inquire regularly, "[I]f the clients were sitting here ... what concerns do you think they'd be voicing?" rather than offering clients formal representation in their collaborative. Similarly, state officials need to consider the views of local actors whose interests their initiatives affect, and administrators need to consider the perspectives of line staff, even when the latter's views are not formally represented in collaborative discussions (Bardach and Lesser 1996, 210-11).

5. Hence, the reforms focus on three of the four domains of interagency links identified in past studies of human services integration. The four domains are clients, programs, policies, and organizations (Agranoff and Pattakos 1979; Kagan 1993; Yessian 1995).

6. In recent years welfare reform has generated a variety of collaborative relationships among human services agencies in many states, including noteworthy innovations in Oregon and Washington. This study focuses on an earlier generation of initiatives with longer track records that provide a rich basis for analysis and comparison.


7. Space constraints preclude a complete list of sources, but my research has benefited a great deal from the work of the Center for the Study of Social Policy (1995, 1996a, 1996b, 1998), as well as reports from the states in the sample, such as Bloomberg et al. (1996), Bloomberg, Ingram, and Seppanen (1996), Georgia Family Connection (1996), Georgia Policy Council (1994, 1996a, 1996b), Jewiss and Hasazi (1999), Kimmich et al. (1995), Potapchuk (1997), Rozansky (1997a, 1997b), and Swanson Gribskov (1995).

8. Local informants expressed frustration that state officials did not respond more readily to their concerns, but they admitted the state had become more responsive since the inception of the collaborative reforms. State informants, for their part, noted that some local collaboratives have been slow to request changes, while others have sought changes in rules or practices that are under local rather than state control. Despite these misgivings, Georgia and Vermont (along with Iowa) lead the others in the sample in terms of the number and extent of local requests and the commitment and goodwill that state officials have shown in responding to them.

9. Some local collaborators in Washington, for example, reported they do not bother to request changes from the state because they do not expect constructive responses.

10. Because the data come from indirect sources (interviews and public documents), it is difficult to assess how regularly and actively collaborators consider alternative points of view in their discussions. The interviews suggest that some collaboratives in some states do so some of the time (for instance, Missouri and Vermont). Without detailed and repeated observations of a reasonable cross-section of collaborators' interactions, however, I cannot incorporate this criterion into my rankings in a way that applies reliably across all the states in the sample.

11. Collaborators in Vermont, for example, jointly seek to achieve 10 outcomes: families, youth, and individuals are engaged in and contribute to their community's decisions and activities; pregnant women and newborns thrive; infants and children thrive; children are ready for school; children succeed in school; children live in stable, supported families; youth choose healthy behaviors; youth successfully transition to adulthood; elders and people with disabilities live with dignity and independence in settings they prefer; families and individuals live in safe and supportive communities. The state tracks each of these outcomes by measuring key data points in each community. The extent to which infants and children thrive, for example, is measured by the infant mortality rate, the rate of injuries of children ages 0-9 that result in hospitalization, and the child mortality rate (Murphey 1999). Georgia has a more compact but equally ambitious list of five outcomes and 26 data points that correspond to them (Georgia Policy Council 1994). Maryland and Missouri have similar lists of outcomes and data points, though they have less experience measuring and comparing changes in the data at the community level than do Georgia and Vermont.

12. The comparative advantage that collaboratives have in producing big-picture results does not mean that individual agencies cannot devise effective performance management schemes that focus staff on critical goals (Osborne and Plastrik 1997; Ingraham and Moynihan 2001). For further clarification of the distinction between accountability for results and performance management, see footnote 1.

References

Agranoff, Robert. 1991. Human Services Integration: Past and Present Challenges in Public Administration. Public Administration Review 51(6): 533-42.
Agranoff, Robert, and Alex Pattakos. 1979. Dimensions of Services Integration: Service Delivery, Program Linkages, Policy Management, Organizational Structure. Washington, DC: Project SHARE.
Bardach, Eugene. 1998. Getting Agencies to Work Together. Washington, DC: Brookings Institution.
Bardach, Eugene. 2001. Developmental Dynamics: Interagency Collaboration as an Emergent Phenomenon. Journal of Public Administration Research and Theory 11(2): 149-64.
Bardach, Eugene, and Cara Lesser. 1996. Accountability in Human Services Collaboratives-For What? And to Whom? Journal of Public Administration Research and Theory 6(2): 197-224.
Behn, Robert. 1991. Leadership Counts. Cambridge, MA: Harvard University Press.
Behn, Robert. 2001a. The Psychological Barriers to Performance Management. Paper presented at the Sixth National Public Management Research Conference, October 13, Bloomington, IN.
Behn, Robert. 2001b. Rethinking Democratic Accountability. Washington, DC: Brookings Institution.
Bloomberg, Laura, Jeanette Colby, Deb Ingram, and Pat Seppanen. 1996. Minnesota's Family Services Collaboratives: Barriers to Collaboration and Service Integration. Minneapolis, MN: Center for Applied Research and Educational Improvement, University of Minnesota.
Bloomberg, Laura, Deb Ingram, and Pat Seppanen. 1996. Minnesota's Family Services Collaboratives: A Summary of Outcome Evaluation Plans and Progress Reports. Minneapolis, MN: Center for Applied Research and Educational Improvement, University of Minnesota.
Bolman, Lee, and Terrence Deal. 1997. Reframing Organizations. San Francisco: Jossey-Bass.
Buntin, John. 1999. Assertive Policing, Plummeting Crime: The NYPD Takes on Crime in New York City. Cambridge, MA: Harvard Kennedy School of Government.
Center for the Study of Social Policy. 1995. Trading Outcome Accountability for Fund Flexibility. Draft. Washington, DC: Center for the Study of Social Policy.
Center for the Study of Social Policy. 1996a. Systems Change at the Neighborhood Level. Washington, DC: Center for the Study of Social Policy.
Center for the Study of Social Policy. 1996b. Toward New Forms of Local Governance: A Progress Report from the Field. Washington, DC: Center for the Study of Social Policy.
Center for the Study of Social Policy. 1998. Creating a Community Agenda: How Governance Partnerships Can Improve Results for Children, Youth, and Families. Washington, DC: Center for the Study of Social Policy.
Downs, Anthony. 1967. Inside Bureaucracy. New York: Little, Brown.
Feldman, Martha, and Anne Khademian. 2000. Managing for Inclusion: Balancing Control and Participation. International Journal of Public Management 3(2): 149-68.
Friedman, Mark. 1996. A Strategy Map for Results-Based Budgeting: Moving from Theory to Practice. Washington, DC: The Finance Project.
Gardner, Sid. 1994. Reform Options for the Intergovernmental Funding System: Decategorization Policy Issues. Washington, DC: The Finance Project.
Georgia Family Connection. 1996. Aiming for Results: Stronger Families and Healthier Children in Georgia. A Report About the Family Connection. Atlanta: Georgia Family Connection.
Georgia Policy Council. 1994. A Framework for Improving Results. Atlanta: Georgia Policy Council.
Georgia Policy Council. 1996a. Aiming for Results: A Guide to Georgia's Benchmarks for Children and Families. Atlanta: Georgia Policy Council.
Georgia Policy Council. 1996b. On Behalf of Our Children. Atlanta: Georgia Policy Council.
Gormley, William, and David Weimer. 1999. Organizational Report Cards. Cambridge, MA: Harvard University Press.
Hatry, Harry. 1999. Performance Measurement: Getting Results. Washington, DC: Urban Institute.
Ingraham, Patricia, and Amy Kneedler Donohue. 2000. Dissecting the Black Box Revisited: Characterizing Government Management Capacity. In Governance and Performance: New Perspectives, edited by Carolyn Heinrich and Laurence Lynn, 292-318. Washington, DC: Georgetown University Press.
Ingraham, Patricia, and Donald Moynihan. 2001. Beyond Measurement: Managing for Results in State Government. In Quicker, Better, Cheaper? Managing Performance in American Government, edited by D.W. Forsythe, 309-33. Albany, NY: Rockefeller Institute Press.
Jewiss, Jennifer, and Susan Hasazi. 1999. Advancing Community Well-Being: A Developmental Perspective of Two Community Partnerships in Vermont. http://www.ahs.state.vt.us/pdffiles/9909AdvancingCommunityWellBeing.pdf.
Kagan, Sharon Lynn. 1993. Integrating Services for Children and Families: Understanding the Past to Shape the Future. New Haven, CT: Yale University Press.
Kagan, Sharon Lynn, Stacie Goffin, Sarit Golub, and Eliza Pritchard. 1995. Toward Systemic Reform: Service Integration for Young Children and Their Families. Falls Church, VA: National Center for Service Integration.
Kaplan, Robert, and David Norton. 1992. The Balanced Scorecard-Measures that Drive Performance. Harvard Business Review 70(1): 71-79.
Kearns, Kevin. 1996. Managing for Accountability. San Francisco: Jossey-Bass.
Kettl, Donald. 2000. The Global Public Management Revolution: A Report on the Transformation of Governance. Washington, DC: Brookings Institution.
Kimmich, Madeline, Mary Coacher, Binnie LeHew, and Mark Robbins. 1995. Iowa Decategorization and Statewide Child Welfare Reform: An Outcome Evaluation. Des Moines, IA: Iowa Department of Human Services.
Knitzer, Jane. 1997. Service Integration for Children and Families: Lessons and Questions. In Integrated Services for Children and Families: Opportunities for Psychological Practice, edited by Robert J. Illback, Carolyn T. Cobb, and Herbert M. Joseph, 3-21. Washington, DC: American Psychological Association.
Knitzer, Jane, and Stephen Page. 1996. Map and Track: State Initiatives for Young Children and Families. New York: National Center for Children in Poverty, Columbia University School of Public Health.
Liner, Blaine, Harry Hatry, Elisa Vinson, Ryan Allen, Pat Dusenbury, Scott Bryant, and Ron Snell. 2001. Making Results-Based State Government Work. Washington, DC: Urban Institute.
Milward, H. Brinton, and Keith Provan. 1998. Principles for Controlling Agents: The Political Economy of Network Structure. Journal of Public Administration Research and Theory 8(2): 203-21.
Milward, H. Brinton, and Keith Provan. 2000. Governing the Hollow State. Journal of Public Administration Research and Theory 10(2): 359-79.
Moore, Mark. 1995. Creating Public Value. Cambridge, MA: Harvard University Press.
Morrill, William. 1996. Implications for the Future of Service Delivery System Reform. In Evaluating Initiatives to Integrate Human Services, New Directions for Evaluation 69, edited by Jules Marquart and Ellen Konrad, 85-95. San Francisco: Jossey-Bass.
Moynihan, Donald. 2001. The State of the States in Managing for Results: A Report of the Government Performance Project. Syracuse, NY: Alan K. Campbell Public Affairs Institute.
Murphey, David. 1999. Presenting Community-Level Data in an "Outcomes and Indicators" Framework: Lessons from Vermont's Experience. Public Administration Review 59(1): 76-82.
O'Looney, John. 1996. Redesigning the Work of Human Services. Westport, CT: Quorum.
Osborne, David, and Peter Plastrik. 1997. Banishing Bureaucracy. Reading, MA: Addison-Wesley.
O'Toole, Laurence. 1997. Taking Networks Seriously. Public Administration Review 57(1): 45-52.
Peters, B. Guy. 1996. The Future of Governing: Four Emerging Models. Lawrence: University Press of Kansas.
Potapchuk, William. 1997. Managing the State-Local Negotiations on "Vision to Scale" in the State of Maryland. Baltimore, MD: Governor's Office of Children, Youth, and Families.
Radin, Beryl. 1998. The Government Performance and Results Act (GPRA): Hydra-headed Monster or Flexible Management Tool? Public Administration Review 58(4): 307-16.
Radin, Beryl, and Barbara Romzek. 1996. Accountability Expectations in an Intergovernmental Arena: The National Rural Development Partnership. Publius 26(2): 59-81.
Ragan, Mark. 2002. Service Integration in Colorado: Report for the Casey Strategic Consulting Group. Albany, NY: Rockefeller Institute of Government.
Reich, Robert. 1990. Policy Making in a Democracy. In The Power of Public Ideas, edited by Robert Reich, 123-56. Cambridge, MA: Harvard University Press.
Roberts, Nancy. 1997. Public Deliberation: An Alternative Approach to Crafting Policy and Setting Direction. Public Administration Review 57(2): 124-32.
Romzek, Barbara. 1996. Enhancing Accountability. In Handbook of Public Administration, edited by James L. Perry, 97-114. San Francisco: Jossey-Bass.
Romzek, Barbara, and Melvin Dubnick. 1987. Accountability in the Public Sector: Lessons from the Challenger Tragedy. Public Administration Review 47(3): 227-38.
Rozansky, Phyllis. 1997a. Missourians Working Together: A Progress Report. St. Louis, MO: Family Investment Trust.
Rozansky, Phyllis. 1997b. Navigating the River of Change: The Course of Missouri's Community Partnerships. St. Louis, MO: Missouri Family Investment Trust.
Scharpf, Fritz. 1993. Coordination in Hierarchies and Networks. In Games in Hierarchies and Networks, edited by Fritz Scharpf, 125-65. Boulder, CO: Westview Press.
Schorr, Lisbeth. 1988. Within Our Reach. New York: Anchor/Doubleday.
Schorr, Lisbeth. 1997. Common Purpose. New York: Anchor/Doubleday.
Senge, Peter. 1990. The Fifth Discipline. New York: Currency/Doubleday.
Smith, Steven Rathgeb, and Michael Lipsky. 1993. Nonprofits for Hire. Cambridge, MA: Harvard University Press.
Swanson Gribskov, Laurie. 1995. Policy Implementation and Organizational Response: The Case of Title XX in Oregon-Funding Programs for At Risk Youth and Families. PhD diss., University of Oregon.
Waldfogel, Jane. 1997. The New Wave of Service Integration. Social Service Review 71(3): 463-84.
Weber, Edward. 1999. The Question of Accountability in Historical Perspective: From Jackson to Contemporary Grassroots Ecosystem Management. Administration and Society 31(4): 451-94.
Weiner, Myron. 1990. Human Services Management: Analysis and Applications. Belmont, CA: Wadsworth Publishing.
Wondolleck, Julia, and Steven Yaffee. 2000. Making Collaboration Work. Washington, DC: Island Press.
Yessian, Mark. 1995. Learning from Experience: Integrating Human Services. Public Welfare 53(3): 34-42.
