
June 20 2010 bsi christie

Transcript
Page 1: June 20 2010 bsi christie

June 20, 2010

Christina A. Christie, Ph.D.

University of California, Los Angeles

Michael Harnar, MA

Claremont Graduate University

Page 2: June 20 2010 bsi christie

Understand and consider the various ways evaluation can influence decisions and activities

Examine the use of logic models in the evaluation process

Identify strategies for developing logic models and promoting evaluation use

Page 3: June 20 2010 bsi christie

• Evaluation refers to the process of determining the merit, worth, or value of something, or the product of that process. (Scriven, 1991, p. 139)

• Program evaluation is the use of social research methods to systematically investigate the effectiveness of social intervention programs. (Rossi, Lipsey, Freeman, 2004, p. 28)

• Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming. (Patton, 1997, p. 23)

Page 4: June 20 2010 bsi christie

Evaluation:
• Intended for: program decision making; rendering judgments
• Stakeholders set the agenda
• Primary audience for the study: program staff
• Findings are: program & context specific; shared on an ongoing basis

Research:
• Intended for: adding to the existing knowledge base
• Researcher sets the agenda
• Primary audience for the study: scientific community & the public
• Findings are: intended to be broadly applicable or generalizable; shared at the end of the study

Page 5: June 20 2010 bsi christie

Evaluation is carried out for different reasons. The unique purpose of each evaluation impacts its design. Some of the more general reasons for evaluation include:

To gain insight about a program and its operations – To learn how a program has been operating and how it is evolving, and to establish what works and what doesn’t

To improve practice – to modify or adapt practice to enhance the success of activities

To assess effects – to see how well a program is meeting its objectives and goals, how it benefits the community, and what evidence there is for its effectiveness

To build capacity - increase funding, enhance skills, strengthen accountability

Page 6: June 20 2010 bsi christie

Some more specific reasons for conducting evaluation:

• A program evaluation can find out “what works” and “what does not work.”

• A program evaluation can showcase the effectiveness of a program to the campus community and to funders.

• A program evaluation can improve faculty practice and administrators’ leadership, and promote organizational development.

• A program evaluation can increase a program’s or college’s capacity to conduct a critical self-assessment and plan for the future.

• A program evaluation can build knowledge for more effective teaching and learning.

Page 7: June 20 2010 bsi christie

The purpose and design of a program evaluation can answer different questions and provide a broad range of information. Some examples include:

• Learning whether proposed program materials are suitable
• Learning whether program plans are feasible
• Providing an early warning system for problem identification
• Learning whether programs are producing the desired results
• Learning whether programs have unexpected benefits or problems
• Enabling managers to improve program processes
• Monitoring progress toward the program’s goals
• Producing data on which to base future programs
• Demonstrating the effectiveness of the program
• Identifying the most effective parts of the program for refinement
• Gathering valuable information that can be shared

Page 8: June 20 2010 bsi christie

Evaluation, like anything else, can be done well and produce benefits, or it can be done poorly, adding little value and sometimes having negative impact. The following table contrasts characteristics of effective and ineffective evaluation.

Evaluation is…                   | Evaluation is not…
Done with you                    | Done TO you
Able to provide rich information | Simply program monitoring
Intended to be used              | Intended to sit on a shelf or to check a box
For the program stakeholders     | For the evaluator or only for management
Systematic                       | Haphazard
FUN!                             | Scary (really, it isn’t – you’ll see)

Page 9: June 20 2010 bsi christie

There are many ways to approach an evaluation of a program, and so there are “theories” (also called “models,” “approaches,” or “frameworks”) about how best to conduct an evaluation study.

There are many evaluation theories, and there is no one “right” theory or way of doing an evaluation.

Page 10: June 20 2010 bsi christie

An evaluation theory offers evaluators a conceptual framework that helps to organize the procedures

Theories are distinguished by what is believed to be the primary purpose of evaluation

Page 11: June 20 2010 bsi christie

• The theoretical framework that guides this module focuses on practices that increase the likelihood that the information generated from the evaluation is used and on those who will use the information

• Specifically, this module will highlight the Centers for Disease Control and Prevention (CDC) Evaluation Framework. The CDC framework merges several approaches designed to promote use and includes steps that are common to most evaluation models

Page 12: June 20 2010 bsi christie

www.cdc.gov/eval

Page 13: June 20 2010 bsi christie

The six steps of the CDC Evaluation Model are representative of components of most program evaluation models.

There are six basic steps to program evaluation, all of which are related.

The first steps provide a foundation for the later ones.

Page 14: June 20 2010 bsi christie

1. Engage stakeholders
2. Describe the program
3. Evaluation purpose & questions (focus the evaluation)
4. Design & data collection
5. Analyze & draw evaluative conclusions
6. Communicate progress & findings

Page 15: June 20 2010 bsi christie

Most program efforts involve partners, all of whom are stakeholders in the effort.

Stakeholders must be involved in the evaluation; without their involvement, an evaluation may lack important information or be less useful.

Page 16: June 20 2010 bsi christie

Engagement of Stakeholders is essential to successful evaluation.

Engagement assures that evaluation is useful and credible. It clarifies roles and responsibilities and avoids real or perceived conflicts of interest.

Successful engagement further enhances cultural competence and increases protection for human subjects who may be involved.

Page 17: June 20 2010 bsi christie

Stakeholders…
• Bring a variety of valuable & informative perspectives
• Add credibility to the evaluation
• May have resources to help
• May be critical to implementing or advocating for action based on findings

Also, engaging stakeholders can help…
• Everyone learn more about evaluation and the program
• Build trust and understanding among program constituents

Page 18: June 20 2010 bsi christie

There are strategies common to successful stakeholder engagement. These include but are not limited to:

Consulting program insiders (e.g., leaders, staff, participants, community members, and program funding sources) and outsiders (e.g., skeptics).

Making a special effort to promote the inclusion of less powerful groups or individuals.

Effective engagement requires that evaluators foster input, participation, and power-sharing among those who have an investment in the evaluation and the findings.

Page 19: June 20 2010 bsi christie

• People who have a vested interest in evaluation findings can be categorized into three levels:

– Primary stakeholders are direct users of evaluation findings and may use them to alter a program’s course.

– Secondary stakeholders will likely be affected by programmatic changes that result from the evaluation.

– Tertiary stakeholders may have an interest in the evaluation findings, but will not be directly impacted.

Page 20: June 20 2010 bsi christie

Who are typical stakeholders in program evaluation?

• Those involved in program operations (e.g., sponsors, collaborators, partners, funding officials, administrators, managers, and staff).

• Those served or affected by the program (e.g., students, faculty, neighborhood organizations, other academic institutions, elected officials, advocacy groups, professional associations, skeptics).

• Those who will be the primary consumers of information produced by the evaluation

Page 21: June 20 2010 bsi christie

To identify stakeholders, we should ask the following questions:

Who is…
• Affected by the program?
• Involved in program operations?
• An intended user of evaluation findings?

Who do we need to…
• Enhance credibility?
• Implement program changes?
• Advocate for changes?
• Fund, authorize, or expand the program?

Page 22: June 20 2010 bsi christie

1. Engage stakeholders
2. Describe the program
3. Evaluation purpose & questions (focus the evaluation)
4. Design & data collection
5. Analyze & draw evaluative conclusions
6. Communicate progress & findings

Page 23: June 20 2010 bsi christie

The second step in prevention program evaluation is to Describe the Program in clear measurable terms. A strong prevention program description is important because it can:

• Establish clarity & consensus about program activities & intended effects

• Identify “holes” or problems in the program early on

• Document agreement about how smaller components of the program fit into the larger picture

• Articulate & document common vision

Page 24: June 20 2010 bsi christie

The program description details the mission and objectives of the program.

Descriptions should help evaluators understand the program goals and strategies.

Stakeholders should review and agree with the program description.

Program descriptions will vary for each evaluation.

Page 25: June 20 2010 bsi christie

An effective program description should include a strong statement of need that describes the problem the program addresses, along with the following elements:

– Expected effects are what the program must do to be successful.

– Program activities are what the program does to effect change.

– Resources include the time, talent, technology, equipment, information, money, and other assets available to conduct program activities.

– The program’s stage of development reflects its maturity.

– The context should describe the setting within which the program operates.

– Some programs have used a logic model as a planning tool.

Page 26: June 20 2010 bsi christie

To develop a program description, ask the following questions:

• Why does your program exist? What is the need?

• Who is engaged in and affected by the program?

• What do you want to change as a result of your efforts?

• What activities do you perform as part of your program?

• Who develops & performs these activities?

• Who funds these activities?

• What is the context in which your program operates?

Program theories can help with this…

Page 27: June 20 2010 bsi christie

Program Theory

A systematic configuration of stakeholders’ prescriptive assumptions (what actions must be taken) and descriptive assumptions (what causal processes are expected to happen) underlying programs, whether explicit or implicit assumptions.

- Chen, p. 136, Evaluation Roots, 2004

Page 28: June 20 2010 bsi christie

Why Develop a Program Theory

Theory-based evaluations help to get at the why and how of program success or failure.

- Weiss, p. 158, Evaluation Roots, 2004

Why do people expect the program to work?

How do they expect it to accomplish its ends?

What do they expect to happen and through what sequence of microsteps?

Page 29: June 20 2010 bsi christie

Theory of Change

Identifies the causal processes generated by a program and through which a given type of social change is expected to occur

Goals and outcomes (needs and measurable results)

Determinants (leverage to meet a need)

Intervention or treatment (agent of change)

Page 30: June 20 2010 bsi christie

Theory of Action

Maps out a systematic plan or specific pathway in a theory of change to deal with the causal processes that are expected to happen to attain program goals

Arranging staff

Resources

Settings

Support organizations

Page 31: June 20 2010 bsi christie

Logic Model

Systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan, and the changes or results you hope to achieve.

- The W.K. Kellogg Foundation Logic Model Development Guide

Graphical depiction of a program theory

Page 32: June 20 2010 bsi christie

Definition of a Logic Model

The program logic model is defined as a picture of how your organization does its work – the theory and assumptions underlying the program. A program logic model links outcomes (both short- and long-term) with program activities/processes and the theoretical assumptions/principles of the program.

- The W.K. Kellogg Foundation Logic Model Development Guide

Page 33: June 20 2010 bsi christie

Logic models are common tools used to describe programs.

Logic models provide a “road map” of a program:
– Drawing a picture of expected program achievements and how the achievements will be realized
– Creating a visual of relationships hypothesized to exist between the program activities and the intended program effects

Logic models describe the expectations/intentions of a program.

Page 34: June 20 2010 bsi christie

Why Use a Logic Model

Identify what a program does and intends to do

Indicate if there is a disjuncture between what a program intends to do and what a program is actually doing

Point out strengths and weaknesses in a program

Allow stakeholders to try out possible scenarios

The visual representation allows flexibility and provides the opportunity to adjust approaches and change course as plans develop

Page 35: June 20 2010 bsi christie

Components of a Logic Model

Two components:

Your Planned Work: Resources (inputs) and Activities

Your Intended Results: Outputs, Outcomes, and Impact

Page 36: June 20 2010 bsi christie

Resources/Inputs: the resources needed to achieve the program’s objectives

Activities: what the program does with its resources to meet objectives

Outputs: the direct products of program activities

Outcomes: the changes that result from the program’s activities and outputs (short-term, intermediate, and long-term)

External Factors/Context: a description of the environment in which the program takes place

Assumptions: the underlying assumptions that influence the program’s design, implementation, or goals
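To make these components concrete for teams that keep their planning records electronically, here is a minimal sketch of a logic model captured as a simple data structure. This is purely illustrative and not part of the CDC framework or the Kellogg guide; the class, field names, and example program are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """A minimal, illustrative record of a program logic model."""
    inputs: List[str] = field(default_factory=list)        # resources needed to achieve objectives
    activities: List[str] = field(default_factory=list)    # what the program does with its resources
    outputs: List[str] = field(default_factory=list)       # direct products of program activities
    short_term_outcomes: List[str] = field(default_factory=list)
    intermediate_outcomes: List[str] = field(default_factory=list)
    long_term_outcomes: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)   # beliefs underlying the program design
    context: str = ""                                       # external factors / environment

# Hypothetical example: a basic skills tutoring program
tutoring = LogicModel(
    inputs=["two part-time tutors", "learning center space"],
    activities=["weekly math tutoring sessions"],
    outputs=["number of tutoring hours delivered"],
    short_term_outcomes=["improved homework completion"],
    intermediate_outcomes=["higher course pass rates"],
    long_term_outcomes=["increased transfer readiness"],
    assumptions=["students attend sessions voluntarily"],
    context="community college basic skills program",
)
```

Keeping the model in a structured form like this makes it easy to check, for example, that every intended outcome can be traced back to at least one activity.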

Page 37: June 20 2010 bsi christie

The next slide is a logic model of an education program designed to prevent HIV infection in Native Americans.

While this program does not explicitly address mental health outcomes, the evaluation approach is consistent with mental health prevention program evaluation.

Page 38: June 20 2010 bsi christie
Page 39: June 20 2010 bsi christie

To create a logic model & graphic, use the following steps:

• Move from right to left, answering the following questions:
– What do I want to accomplish with this program?
– What changes do I expect to see from this program?
– In whom or what will these changes occur?
– What do we do to effect change?

• Then revise, refine, and more precisely describe and visually depict the relationships among components.

• Connect components with arrows to show flow.

• Describe the context in which your program resides & operates.

• Revise and refine again until it’s “just right” – remember, this is an iterative process.

Page 40: June 20 2010 bsi christie

[Blank logic model template: Inputs → Activities → Outputs → Outcomes (short-term, intermediate, long-term), with Assumptions and External Factors/Context noted below]

Page 41: June 20 2010 bsi christie

[Example logic model: “Tough Classes” PSA]

Program activity: PSA “Tough Classes”

Target audience: 8th–10th grade students nationwide

Assumptions: (1) students have access to computers and the internet (to view the PSA and related materials), and (2) attitudes are the primary determinants of course-taking behaviors

Short-term outcomes: knowledge of classes that prepare for college; students view taking tough classes as cool, rebellious; knowledge of resources related to college access

Intermediate outcomes: taking of college prep courses; college application and acceptance preparedness; utilization of resources (including campaign website)

Long-term outcomes: college eligibility; college achievement

Page 42: June 20 2010 bsi christie

Sample Logic Model Framework (source: http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html)

Page 43: June 20 2010 bsi christie

Example Logic Model 1

Page 44: June 20 2010 bsi christie

Example Logic Model 2

Page 45: June 20 2010 bsi christie

Using a Logic Model to Inform the Evaluation

Provides a framework for the evaluation

Helps prioritize activities

Keeps focus on stakeholders’ questions

- The W.K. Kellogg Foundation Logic Model Development Guide

Page 46: June 20 2010 bsi christie

What type of question will be answered?

Formative evaluation (Improve):
• Periodic reports, shared quickly
• Monitor progress
• Mid-course corrections
• Helps to bring suggestions for improvement to the attention of the staff

Summative evaluation (Prove):
• Demonstrate results to stakeholders
• Intermediate outcomes and impact
• Determine value and worth based on results
• Describes quality and effectiveness by documenting impact

Page 47: June 20 2010 bsi christie

What is Going to be Evaluated?

Page 48: June 20 2010 bsi christie

[Planning template with columns: Focus Area | Indicators | How to Evaluate. Focus areas: Influential Factors, Resources, Activities, Outputs, Outcomes & Impacts]

Page 49: June 20 2010 bsi christie

1. Engage stakeholders
2. Describe the program
3. Evaluation purpose & questions (focus the evaluation)
4. Design & data collection
5. Analyze & draw evaluative conclusions
6. Communicate progress & findings

Page 50: June 20 2010 bsi christie

There are many reasons for doing an evaluation. Sometimes the reason is explicit & transparent. Sometimes it isn’t.

It is very important that the evaluation's purpose is clear.

This step in the evaluation process addresses the question: Why is this evaluation being conducted?

For example: Will this evaluation be used to make improvements to the prevention program? Or will the evaluation be used to make a decision about the program’s expansion or elimination? Or will it be used for both?

Engaging stakeholders in focusing the evaluation’s purpose and the questions to be addressed will help ensure that your study is relevant, meaningful, and culturally responsive.

Page 51: June 20 2010 bsi christie

There are different types of evaluation questions:

Formative and Summative questions refer to the purpose of the study

Process and Outcome questions refer to the phase of the program being studied

Page 52: June 20 2010 bsi christie

Formative questions are intended to generate information for program improvement. They are designed to assist in program development.

Summative questions are intended to generate information for decisions about program adoption, continuation, or expansion. They are designed to assist with decisions about whether to continue or end a program, extend it or cut it back.

Page 53: June 20 2010 bsi christie

OUTCOME
• Measures effects, results, impact on participants; these can be intended or unintended
• Asks the questions: What was the impact? and Compared to what?
• How does this look in the short term and long term?

PROCESS
• Examines what is going on with this program
• Describes what the program is doing, by whom, and for whom
• Asks the questions: What produced the outcomes and why? How did/does it work? How was it implemented?

Process questions ask about the operation of the program. Outcome questions ask about how the program is affecting recipients.

Page 54: June 20 2010 bsi christie

As stated earlier, effective evaluation is respectful of, involves, values and is valuable to stakeholders. In this step, it is necessary to define a purpose for the evaluation and to ensure that the questions being addressed by the study are important to stakeholders.

What types of things do stakeholders typically want to know about?

– Is the program implemented as intended?

– How is this program affecting the recipients?

– Are the intended changes occurring?

– Is the “bang worth the buck”?

Page 55: June 20 2010 bsi christie

An evaluation requires the right number of questions. Too few can leave the evaluation weak and not worth the effort; too many can overwhelm evaluation resources and confuse the final results.

An evaluation requires one clear purpose statement and a set of 3–12 clearly worded evaluation questions.

FOR EXAMPLE: The purpose of this evaluation is to explore the reasons why participants successfully complete a Summer Bridge Program.

How many & for what reasons do participants attend a Summer Bridge Program?

What factors contribute to participants completing the Summer Bridge Program?

What factors are in place in participants’ lives at Program completion that lead to continued success?

Page 56: June 20 2010 bsi christie

When narrowing the list of potential evaluation questions to be addressed in a study, the evaluator should:

• Encourage open sharing of interests and potential questions.
• Sort through and prioritize questions with stakeholders.
• Try to clarify questions so that they are measurable and meaningful.

…and shouldn’t:

• Address questions that stakeholders don’t value.
• Immediately “write off” questions that seem unanswerable.

Page 57: June 20 2010 bsi christie

When narrowing the list of potential evaluation questions to be addressed in a study, the evaluator should:

• Listen to stakeholders’ needs and interests.
• Help stakeholders to articulate the questions to be answered.
• Foster stakeholders’ “ownership” of the evaluation.

…and shouldn’t:

• Monopolize the evaluation.
• Decide on the evaluation questions without stakeholder input.
• Treat this program like all other programs.

Page 58: June 20 2010 bsi christie

As evaluators and stakeholders gain experience, they are able to ask more specific questions that better meet their needs. Some examples follow:

– How would a sound answer to this question help the program?

– What is the stage of development of this program?

– How intensive is this program? What is the “dosage”?

– Is this a “need to know” or “nice to answer” question?

– When do we need an answer to this question for it to be useful?

– What level of resources is available to do this evaluation? Will answers to this question be credible?

Page 59: June 20 2010 bsi christie

1. Engage stakeholders
2. Describe the program
3. Evaluation purpose & questions (focus the evaluation)
4. Design & data collection
5. Analyze & draw evaluative conclusions
6. Communicate progress & findings

Page 60: June 20 2010 bsi christie

The fourth step in program evaluation is to determine how to produce answers to the evaluation questions.

Two major steps produce these answers:
1. Decide on an evaluation design
2. Decide on data collection methods

Evaluation questions should drive design and data collection methods - not the other way around.

Page 61: June 20 2010 bsi christie

Is random assignment used?

Randomized or true experiment

yes

Is there a control group or multiple measures?

no

Quasi-experiment Non-experiment

yes no

Obtained from the Research Methods Knowledge Base: http://www.socialresearchmethods.net/kb/destypes.php

Evaluation designs can be divided into 3 categories or types: Randomized or true experimental design, Quasi-Experimental design and non-experimental design. Decisions about whether or not to use a “control group” and how individual’s are assigned to intervention and control groups determines the category or type of evaluation design.
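The two questions in the decision tree above can be read as a small piece of logic. The following sketch simply restates that logic; the function name and its use are illustrative, not part of the Research Methods Knowledge Base.

```python
def classify_design(random_assignment: bool,
                    control_group_or_multiple_measures: bool) -> str:
    """Classify an evaluation design using the two decision-tree questions."""
    if random_assignment:
        return "randomized (true) experiment"
    if control_group_or_multiple_measures:
        return "quasi-experiment"
    return "non-experiment"

# Examples
print(classify_design(True, True))    # randomized (true) experiment
print(classify_design(False, True))   # quasi-experiment
print(classify_design(False, False))  # non-experiment
```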

Page 62: June 20 2010 bsi christie

Experimental

When answering the question: Did my program (x) cause this specific impact on participants (y)?

Only possible when random assignment is feasible

Random assignment is often not possible in local level prevention program evaluation & thus this is not a commonly used design in this context

Non-experimental

• When answering questions that are observational or descriptive in nature. For example: How is our program being implemented?

• This is one of the most common designs used in local-level evaluation studies. However, it isn’t a particularly good choice for addressing cause-effect questions that examine the relationship between a program and its outcomes.

Quasi-experimental

• When answering the question: How does my program look when compared to…?

• These designs are used when implementing an experimental design is not possible

Page 63: June 20 2010 bsi christie

Experimental designs randomly assign persons to either an intervention or a non-intervention group.

Random assignment helps to ensure that the two groups are similar in their composition, with the exception that one group receives the intervention and the other receives either “business as usual” or no program at all.

When the intervention is complete, differences between the two groups in the outcome of interest are measured. If we find a difference, it was likely caused by the intervention.

Randomly assigning participants to either a Summer Bridge program or a general campus orientation program, to determine whether participation in the Summer Bridge program caused an increase in retention and success, is an example of a study using an experimental design.

Page 64: June 20 2010 bsi christie

Quasi-experimental designs are similar to experimental designs, except that persons are not randomly assigned to a group.

This type of method is used when randomization is not feasible.

Studying whether, after program participation, student engagement is significantly higher for those who participated in the program compared to persons who did not participate (where persons were not randomly assigned to the program) is an example of a quasi-experimental design.

Page 65: June 20 2010 bsi christie

Non-experimental designs tend to be descriptive and attempt to understand differences, similarities, and processes within a group.

Reporting the number of students who transfer to the UC and CSU is an example of an application of a non-experimental design.

Page 66: June 20 2010 bsi christie

Experimental
• Posttest-only with control group
• Pretest-posttest with control group
• Random assignment to groups is a required condition when implementing an experimental design

Non-experimental
• Case study
• Posttest only
• Retrospective pretest
• Cross-sectional surveys

Quasi-experimental
• Pretest-posttest
• Time series
• Regression discontinuity

Page 67: June 20 2010 bsi christie

• The choice of design has implications for what will be used as evidence. Evidence comprises the various forms of data that we use to support our evaluation claims.

• The design will determine how evidence will be gathered and what kind of claims can be made.

• The design also determines how data sources will be selected, what data collection instruments will be used, who will collect the data, and what data management systems will be needed.

Page 68: June 20 2010 bsi christie

The other major consideration in answering evaluation questions is data collection

There are many ways to collect data, for example:
– Surveys
– Interviews
– Focus groups
– Observations
– Document review
– Tests (assessments)

Page 69: June 20 2010 bsi christie

There are other important considerations for data collection that will impact the quality of data, use of evaluation resources & the timeliness of evaluation information. Some considerations include:

– Who will collect the data?

– To what extent do data collectors require training?

– Will instruments be pilot tested and checked for reliability & validity?

– When does the data need to be collected?

– What resources (staff, money, technology) are available?

Page 70: June 20 2010 bsi christie

Use of existing data may reduce or eliminate the need to collect new data for evaluation.

Explore existing data sources, for example…– Administrative data (Cal Pass)– Campus data– Etc.

However, it is important not to tailor evaluation questions towards data just because it is readily available. Evaluation questions must drive data collection.

Page 71: June 20 2010 bsi christie

Document your data collection strategy. An example follows:

Evaluation Question | Data Collection Method | Source | Data Type
To what extent did the intended target audience receive the message? | Survey | Random sample of households in county X | Quantitative
For those who didn’t receive messages, why? | Survey | Random sample of households in county X | Qualitative
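If a team prefers to keep the data-collection plan in a form that can be filtered or summarized programmatically, the same table can be stored as simple records. This is a hypothetical convenience, not something the CDC framework prescribes; the field names simply mirror the example table above.

```python
# The example data-collection plan as plain records (illustrative only).
data_collection_plan = [
    {
        "question": "To what extent did the intended target audience receive the message?",
        "method": "Survey",
        "source": "Random sample of households in county X",
        "data_type": "Quantitative",
    },
    {
        "question": "For those who didn't receive messages, why?",
        "method": "Survey",
        "source": "Random sample of households in county X",
        "data_type": "Qualitative",
    },
]

# Quick summary of the methods in use
for row in data_collection_plan:
    print(f"{row['question']} -> {row['method']} ({row['data_type']})")
```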

Page 72: June 20 2010 bsi christie

Care must be taken to design an evaluation that generates credible information that stakeholders perceive as culturally relevant, trustworthy & valuable

Evaluators should strive to collect information that will convey a well-rounded picture of the program & be seen as credible by the evaluation’s users

When stakeholders find evaluation data and designs to be credible, they are more likely to accept the findings & to act on its recommendations

Page 73: June 20 2010 bsi christie

Credibility can be improved by using multiple procedures (i.e., data collection strategies) and by involving stakeholders in determining what data should be gathered.

The following factors affect people’s perceptions of the credibility of your evaluation evidence: indicators, sources, quality, quantity, and logistics.

Page 74: June 20 2010 bsi christie

Indicators are aspects of the program that can be examined to address the questions of the evaluation.

Examples of indicators that can be defined and tracked include the program’s capacity to deliver services, the participation rate, levels of client satisfaction, the efficiency of resource use, and the amount of intervention exposure.

Other measures of program effects, such as changes in participant behavior, community norms, policies or practices, and the settings or environment around the program, can also be tracked.

Page 75: June 20 2010 bsi christie

Sources of evidence are persons, documents, or observations.

More than one source might be used to gather evidence. Use of multiple sources provides different perspectives.

Page 76: June 20 2010 bsi christie

Quality refers to the correctness and integrity of the information.

Quality data are representative of what they intend to measure and are informative for their intended use.

Instrument design, data-collection procedures, training of data collectors, source selection, coding, data management, and routine error-checking all influence the quality of your data.

Page 77: June 20 2010 bsi christie

Quantity refers to the amount of evidence gathered. The amount of information needed should be estimated in advance.

All evidence collected should have a clear and anticipated use, with only minimal burden placed on respondents.

Another example of quantity would involve determining how many persons must provide information to adequately address the evaluation question. The burden on persons to provide information should always approach the minimum needed.

Page 78: June 20 2010 bsi christie

Logistics encompass the methods, timing, and physical infrastructure for gathering and handling evidence.

Each technique selected for gathering evidence must be suited to the source(s), analysis plan, and strategy for communicating findings.

Cultural issues should influence decisions about acceptable ways of asking questions and collecting information.

Procedures for gathering evidence should be sensitive to cultural conditions in each setting and must ensure that the privacy and confidentiality of the information and sources are protected.

Page 79: June 20 2010 bsi christie

1. Engage stakeholders
2. Describe the program
3. Evaluation purpose & questions (focus the evaluation)
4. Design & data collection
5. Analyze & draw evaluative conclusions
6. Communicate progress & findings

Page 80: June 20 2010 bsi christie

• While designing the evaluation, decide upon what data will be analyzed

• Early consideration of data analysis will help to assure the “right” data are collected

• Early discussions of data analysis can stimulate conversations with stakeholders about “indicators” that will be helpful to have available when answering questions (criteria) & what constitutes “good” performance on these “indicators” (standards)

Page 81: June 20 2010 bsi christie

Once data have been collected and analyzed, stakeholders must decide what to do with this information – that is, state findings. Common next steps include:

– Examine findings against agreed-upon standards

– Think about what this examination means:
• With respect to practical significance (interpret)
• Assign claims of merit, worth, or significance (judge)

– Recommend actions that should be considered based upon findings

Page 82: June 20 2010 bsi christie

Consider the following when developing conclusions from analysis of evaluative data.

Use culturally and methodologically appropriate methods of analysis and synthesis to summarize findings;

Interpret the significance of results for deciding what the findings mean;

Make judgments according to clearly stated values that classify a result (e.g., as positive or negative and high or low);

Consider alternative ways to compare results (e.g., compared with program objectives, a comparison group, national norms, past performance, or needs);

Generate alternative explanations for findings and indicate why these explanations should be discounted;

Recommend actions or decisions that are consistent with the conclusions;

Limit conclusions to situations, time periods, persons, contexts, and purposes for which the findings are applicable.

Page 83: June 20 2010 bsi christie

1. Engage stakeholders
2. Describe the program
3. Evaluation purpose & questions (focus the evaluation)
4. Design & data collection
5. Analyze & draw evaluative conclusions
6. Communicate progress & findings

Page 84: June 20 2010 bsi christie

Disseminating information about the progress and findings of the evaluation is the final step in the process; however, communication is an important function throughout the evaluation process from beginning to end.

Effective communication ensures that evaluators and program staff benefit from participants’ perspectives on program and evaluation goals and priorities and maximizes the cultural relevance, value, and use of the evaluation.

As with data analysis, communications about the evaluation should not be an afterthought. Communicating throughout the evaluation is important to facilitating use of findings.

Aspects of communication to consider include:
– Purpose of communication
– Intended audiences
– Format
– Frequency & timing
– Deadlines

Page 85: June 20 2010 bsi christie

Communication has multiple purposes that include:

• Promoting collaborative decision making about evaluation design and activities

• Informing those directly involved about specific upcoming evaluation activities

• Keeping those directly and indirectly involved informed about evaluation progress

• Presenting & sometimes collaboratively interpreting initial/interim findings

• Presenting & sometimes collaboratively interpreting final findings

Page 86: June 20 2010 bsi christie

Consider each type of evaluation stakeholder & their cultural values when designing and delivering project communication.

Tailor your communications to the cultural norms and needs of the particular audience for which they are intended.

Page 87: June 20 2010 bsi christie

• Consider the culture & characteristics of audiences when designing and delivering communication:

– Accessibility- Be sure that your intended audience has access to the communication – e.g., if posting communications on a webpage be sure your audience has regular access to a computer with internet access

– Reading ability- Gear written communications to the average reading level of the stakeholder audience you are targeting

– Familiarity with the program and/or the evaluation- Be sure to explain your program & its evaluation in communications with audiences that may not be familiar with your work

– Role in decision making- Consider whether this audience will be using the information in the communication for decision-making & tailor your communication to these needs

– Familiarity with research and evaluation methods- Consider whether your audience will want a description of your methods & if not, perhaps include as an appendix

– Experience using evaluation findings- Consider engaging in strategies that help to promote use & the extent to which your intended audience may be (un)familiar with thinking about and using evaluation findings in decision making

Page 88: June 20 2010 bsi christie

The audience and purpose of the communication activities will also determine their formats. Communication formats fall into two categories – formal and informal.

Formal:
• Verbal presentations
• Written reports
• Executive summaries
• Websites
• Poster sessions
• Newsletters

Informal:
• Memos
• Postcards
• Email
• Personal discussions
• Working sessions

Page 89: June 20 2010 bsi christie

The level of interactivity with the audience will be impacted by the format of communication.

From most interactive to least interactive with the audience:
• Working sessions
• Impromptu or planned meetings with individuals
• Verbal presentations
• Videotape or computer-generated presentations
• Posters
• Internet communications
• Memos & postcards
• Comprehensive written reports
• Executive summaries
• Newsletters, bulletins, brochures
• News media communications

Page 90: June 20 2010 bsi christie

Document your plan for communications. As with evaluation design and data collection, it is helpful to document the evaluation communication plan, detailing the purpose, formats, timelines, and other critical information. An example follows:

Audience: Parents & youth at the pilot sites

Purpose | Possible formats | Timing/Dates | Notes
✓ Include in decision making about evaluation design/activities | Personal discussions/meetings | Early April | Pilot parent survey
✓ Inform about specific upcoming evaluation activities | Article included in site newsletter | Early September |
Keep informed about progress of the evaluation | | |

Page 91: June 20 2010 bsi christie

• Evaluation is a systematic process of inquiry

• It is imperative to have a clear plan & direction for the evaluation

• Evaluation must provide accurate information

• Evaluation must provide usable information

• The inclusion of stakeholders throughout the evaluation process is critical to planning and conducting a successful evaluation that is culturally responsive and relevant

• Use is necessary for evaluation to be a tool for organizational, policy and social change

• Useful evaluation processes and results inform decisions, clarify options, identify strengths and weaknesses, and provide information on program improvements, policies, and key contextual factors affecting the program and its participants

Page 92: June 20 2010 bsi christie

• Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR, 48(No. RR-11).

• Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Pearson.

• Patton, M. (1997). Utilization-focused evaluation. Thousand Oaks, CA: Sage.

• Rossi, P., Lipsey, M., & Freeman, H. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.

• Russ-Eft, D., & Preskill, H. (2001). Evaluation in organizations: A systematic approach to enhancing learning, performance, and change. New York: Basic Books.

• Scriven, M. (1991). Evaluation thesaurus. Thousand Oaks, CA: Sage.

• SPAN USA, Inc. (2001). Suicide prevention: Prevention effectiveness and evaluation. Washington, DC: SPAN USA.

• Trochim, W. M. K. (2006). Research Methods Knowledge Base. http://www.socialresearchmethods.net/kb/index.php

• U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Office of the Director, Office of Strategy and Innovation. (2005). Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention.

The author would like to thank Leslie Fierro for sharing her work and ideas in developing this presentation.

