
Chapter 11

Date post: 03-Jan-2016
Upload: brielle-mendoza
Page 1: Chapter 11


Chapter 11

Evaluation research

Page 2: Chapter 11


•Evaluation research is not a method of data collection, like survey research or experiments, nor is it a unique component of research designs, like sampling or measurement.

• Instead, evaluation research is social research that is conducted for a distinctive purpose: to investigate social programs (e.g., substance abuse treatment programs, welfare programs, criminal justice programs, or employment and training programs).

Page 3: Chapter 11


•For each project, an evaluation researcher must select a research design and method of data collection that are useful for answering the particular research questions posed and appropriate for the particular program investigated.

•The development of evaluation research as a major enterprise followed on the heels of the expansion of the federal government during the Great Depression and World War II.

Page 4: Chapter 11


•Large Depression-era government outlays for social programs stimulated interest in monitoring program output, and the military effort in World War II led to some of the necessary review and contracting procedures for sponsoring evaluation research.

•In the 1960s, criminal justice researchers began to use experiments to test the value of different policies (Orr 1999:24).

Page 5: Chapter 11


•In the early 1980s, after this period of rapid growth, many evaluation research firms closed in tandem with the decline of many Great Society programs.

•However, the demand for evaluation research continues, due, in part, to government requirements.

• The growth of evaluation research is also reflected in the social science community. The American Evaluation Association was founded in 1986 as a professional organization for evaluation researchers (merging two previous associations) and the publisher of an evaluation research journal.

Page 6: Chapter 11


•The process of evaluation research can be viewed as a simple systems model.

•First, clients, customers, students, or some other persons or units—cases—enter the program as inputs. (You’ll notice that this model treats programs like machines, with people functioning as raw materials to be processed.)

•Resources and staff required by a program are also program inputs.

Page 7: Chapter 11


[Exhibit 11.1: the evaluation process as a systems model]

Page 8: Chapter 11


•Next some service or treatment is provided to the cases. This may be attendance in a class, assistance with a health problem, residence in new housing, or receipt of special cash benefits.

•The program process may be simple or complicated, short or long, but it is designed to have some impact on the cases.

Page 9: Chapter 11


•The direct product of the program’s service delivery process is its output.

•Program outputs may include clients served, case managers trained, food parcels delivered, or arrests made.

•The program outputs may be desirable in themselves, but they primarily serve to indicate that the program is operating.

Page 10: Chapter 11


•Program outcomes indicate the impact of the program on the cases that have been processed.

•Outcomes can range from improved test scores or higher rates of job retention to fewer criminal offenses and lower rates of poverty.

•Any social program is likely to have multiple outcomes, some intended and some unintended, some positive and others that are viewed as negative.

Page 11: Chapter 11


•Variation in both outputs and outcomes, in turn, influences the inputs to the program through a feedback process.

•If not enough clients are being served, recruitment of new clients may increase.

•If too many negative side effects result from a trial medication, the trials may be limited or terminated.

•If a program does not appear to lead to improved outcomes, clients may go elsewhere.
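The input → process → output → outcome → feedback loop described on the preceding slides can be sketched in code. This is a minimal illustration only; the treatment, the score measure, and the thresholds are all hypothetical, not part of the chapter.

```python
# A sketch of the evaluation systems model: cases enter as inputs,
# pass through the program process, yield outputs (units of service
# delivered) and outcomes (changes in the cases), and feedback on
# both adjusts the next cycle's inputs. All names and numbers here
# are hypothetical illustrations.

def program_process(case):
    """Hypothetical treatment: raise a client's test score."""
    case["score"] += 10                  # the program's intended effect
    return case

def run_program(cases):
    outputs = [program_process(c) for c in cases]   # clients served
    outcomes = [c["score"] for c in outputs]        # impact on the cases
    return outputs, outcomes

def feedback(outputs, outcomes, target_served=5, target_mean=60):
    """Adjust next cycle's inputs based on outputs and outcomes."""
    actions = []
    if len(outputs) < target_served:
        actions.append("recruit more clients")      # too few served
    if sum(outcomes) / len(outcomes) < target_mean:
        actions.append("redesign program")          # weak outcomes
    return actions

cases = [{"id": i, "score": 50} for i in range(3)]
outputs, outcomes = run_program(cases)
print(feedback(outputs, outcomes))   # ['recruit more clients']
```

As the slides note, the model treats programs like machines: the code makes that mechanical quality explicit, which is exactly what the "black box" debate later in the chapter is about.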

Page 12: Chapter 11


•The evaluation process as a whole, and feedback in particular, can be understood only in relation to the interests and perspective of program stakeholders.

•Stakeholders are those individuals and groups who have some basis of concern with the program.

•They might be clients, staff, managers, funders, or the public.

• Who the program stakeholders are and what role they play in the program evaluation will have tremendous consequences for the research.

Page 13: Chapter 11


Alternatives in evaluation designs

Evaluation research tries to learn if, and how, real-world programs produce results. But that simple statement covers a number of important alternatives in research design, including the following:

•Black box or program theory—Do we care how the program gets results?

•Researcher or stakeholder orientation—Whose goals matter most?

•Quantitative or qualitative methods—Which methods provide the best answers?

•Simple or complex outcomes—How complicated should the findings be?

Page 14: Chapter 11


Black box or program theory

•Most evaluation research tries to determine whether a program has the intended effect.

•If the effect occurred, the program “worked”; if the effect didn’t occur, then, some would say, the program should be abandoned or redesigned.

•In this simple approach, the process by which a program produces outcomes is often treated as a “black box,” in which the “inside” of the program is unknown.

Page 15: Chapter 11


•The focus of such research is whether cases have changed as a result of their exposure to the program, between the time they entered as inputs and when they exited as outputs (Chen, 1990).

•The assumption is that program evaluation requires only the test of a simple input/output model, like that in Exhibit 11.1.

•There may be no attempt to “open the black box” of the program process.

Page 16: Chapter 11


•If an investigation of program process had been conducted, though, a program theory could have been developed.

•A program theory describes what has been learned about how the program has its effect.

•When a researcher has sufficient knowledge before the investigation begins, outlining a program theory can help to guide the investigation of program process in the most productive directions.

•This is termed a theory-driven evaluation.

Page 17: Chapter 11


•Program theory can be either descriptive or prescriptive (Chen, 1990).

•Descriptive theory specifies impacts that are generated and how this occurs.

•It suggests a causal mechanism, including intervening factors, and the necessary context for the effects.

•Descriptive theories are generally empirically based.

Page 18: Chapter 11


•Prescriptive theory specifies what ought to be done by the program, and is not actually tested.

•Prescriptive theory specifies how to design or implement the treatment, what outcomes should be expected, and how performance should be judged.

•Comparison of the program’s descriptive and prescriptive theories can help to identify implementation difficulties and incorrect understandings that can be fixed (Patton, 2002:162–164).

Page 19: Chapter 11


Researcher or stakeholder orientation

•Stakeholder approaches encourage researchers to be responsive to program stakeholders.

•Issues for study are to be based on the views of people involved with the program and reports are to be made to program participants (Stake, 1975).

•The stakeholders and others who may be drawn into the evaluation are welcomed as equal partners in every aspect of design, implementation, interpretation, and resulting action of an evaluation—that is, they are accorded a full measure of political parity and control . . . determining what questions are to be asked and what information is to be collected on the basis of stakeholder inputs.

Page 20: Chapter 11


•Social science approaches, in contrast, emphasize researcher expertise and autonomy in order to develop the most trustworthy, unbiased program evaluation.

•A program theory is derived from information on how the program operates and current social science theory, not from the views of stakeholders.

Page 21: Chapter 11


•Integrative approaches attempt to cover issues of concern to both stakeholders and evaluators.

•The emphasis given to either stakeholder or scientific concerns varies with the specific circumstances.

•Integrative approaches seek to balance responsiveness to stakeholders with objectivity and scientific validity.

Page 22: Chapter 11


•Quantitative and qualitative approaches to evaluation each have their strengths and appropriate uses.

• Quantitative research, with its clear percentages and numerical scores, allows quick comparisons over time and categories, and thus is typically used in attempts to identify the effects of a social program.

•Qualitative methods can add depth, detail, and nuance; they can clarify the meaning of survey responses, and reveal more complex emotions and judgments people may have.

Page 23: Chapter 11


Simple or complex outcomes

•Few programs have only one outcome.

•Sometimes a single policy outcome is sought, but is found not to be sufficient, either methodologically or substantively.

•In spite of the difficulties, most evaluation researchers attempt to measure multiple outcomes.

• Collection of multiple outcomes gives a better picture of program impact.

Page 24: Chapter 11


Focus of evaluation studies

•Evaluation projects can focus on a variety of different questions related to social programs and their impact.

• Which question is asked will determine what research methods are used.

•What is the level of need for the program?

•Can the program be evaluated?

•How does the program operate?

•What is the program’s impact?

•How efficient is the program?

Page 25: Chapter 11


Needs assessment

•A needs assessment attempts, with systematic, credible evidence, to evaluate what needs exist in a population.

•Need may be assessed by social indicators such as the poverty rate or the level of home ownership, interviews with local experts such as school board members or team captains, surveys of populations potentially in need, or focus groups with community residents.

• In general, it is a good idea to use multiple indicators of need.

•There is no absolute definition of need in most projects.

Page 26: Chapter 11


Evaluability assessment

•Some type of study is always possible, but to specifically identify the effects of a program may not be possible within the available time and resources.

•So researchers may conduct an evaluability assessment to learn this in advance, rather than expend time and effort on a fruitless project.

•Because they are preliminary studies to “check things out,” evaluability assessments often rely on qualitative methods.

• The knowledge gained can be used to refine evaluation plans.

Page 27: Chapter 11


Process evaluation

•Process evaluation: Evaluation research that investigates the process of service delivery.

•Process evaluation is more important when more complex programs are evaluated.

•Many social programs comprise multiple elements and are delivered over an extended period of time, often by different providers in different areas.

Page 28: Chapter 11


Formative evaluation

•Formative evaluation: Process evaluation that is used to shape and refine program operations.

•Evaluation may then lead to changes in recruitment procedures, program delivery, or measurement tools.

Page 29: Chapter 11


Impact analysis

•The core questions of evaluation research are: Did the program work? Did it have the intended result? This kind of research is variously called impact analysis, impact evaluation, or summative evaluation.

•Impact analysis (also called summative evaluation) compares what happened after a program was implemented with what would have happened had there been no program at all.
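The counterfactual comparison at the heart of impact analysis can be sketched as a simple difference in mean outcomes between cases that received the program and comparable cases that did not. The scores below are hypothetical, and a real impact analysis would also need a defensible design (such as random assignment) to justify the comparison group as a stand-in for "no program at all."

```python
# A minimal sketch of the impact-analysis comparison: outcomes for
# cases exposed to the program versus a comparison group standing in
# for the "no program" counterfactual. All scores are hypothetical.

treatment = [68, 72, 70, 74]    # outcome scores, program group
control = [61, 63, 60, 64]      # outcome scores, comparison group

def mean(xs):
    return sum(xs) / len(xs)

# The estimated program impact is the difference in mean outcomes.
impact = mean(treatment) - mean(control)
print(impact)   # 9.0
```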

Page 30: Chapter 11


Efficiency analysis

•Finally, a program may be evaluated for how efficiently it provides its benefit; typically, financial measures are used.

•Cost-benefit analysis: a type of evaluation that identifies the specific program costs and the procedures for estimating the economic value of specific program benefits.

•Cost-effectiveness analysis: a type of evaluation research that focuses attention directly on the program’s outcomes rather than on the economic value of those outcomes.
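The contrast between the two efficiency measures comes down to arithmetic: cost-benefit analysis monetizes the outcome and nets it against cost, while cost-effectiveness analysis stops at cost per unit of outcome. The figures below (a hypothetical job-training program) are illustrative only, not from the chapter.

```python
# Worked contrast between cost-benefit and cost-effectiveness
# analysis, using hypothetical figures for a job-training program.

program_cost = 200_000     # total program cost, in dollars
clients_employed = 80      # program outcome: jobs obtained
value_per_job = 3_500      # estimated economic value of each job

# Cost-benefit analysis: translate the outcome into dollars and
# compare total benefits with total costs.
total_benefit = clients_employed * value_per_job   # $280,000
net_benefit = total_benefit - program_cost         # $80,000

# Cost-effectiveness analysis: report cost per unit of outcome,
# without placing a dollar value on the outcome itself.
cost_per_job = program_cost / clients_employed     # $2,500 per job

print(net_benefit, cost_per_job)
```

Note that the cost-benefit result depends entirely on the contestable step of valuing a job at a dollar figure; cost-effectiveness analysis avoids that step, which is why the chapter distinguishes the two.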

Page 31: Chapter 11


Ethics in evaluation

•Evaluation research can make a difference in people’s lives while the research is being conducted, as well as after the results are reported.

•Job opportunities, welfare requirements, housing options,treatment for substance abuse, and training programs are each potentially important benefits, and an evaluation research project can change both the type and availability of such benefits.

•This direct impact on research participants and, potentially, their families, heightens the attention that evaluation researchers have to give to human subjects concerns.

Page 32: Chapter 11


There are many specific ethical challenges in evaluation research:

•How can confidentiality be preserved when the data are owned by a government agency or are subject to discovery in a legal proceeding?

•Who decides what burden an evaluation project can impose upon participants?

•Can a research decision legitimately be shaped by political considerations?

•Must findings be shared with all stakeholders, or only with policymakers?

•Will a randomized experiment yield more defensible evidence than the alternatives?

•Will the results actually be used?

Page 33: Chapter 11


•Hopes for evaluation research are high: Society could benefit from the development of programs that work well, accomplish their policy goals, and serve people who genuinely need them.

•Evaluation research can provide social scientists with rare opportunities to study complex social processes, with real consequences, and to contribute to the public good.

•Although they may face unusual constraints on their research designs, most evaluation projects can result in high-quality analysis and publications in reputable social science journals.
