
Planning the Intervention Effects Evaluation

© 2009 Jones and Bartlett Publishers

Chapter 11

Effect Evaluation in the Planning and Evaluation Cycle


Basis for Decisions about Evaluation Focus and Purpose

Effect theory
Logic model
Outcome objectives
Who the evaluation is for
• e.g., funders, stakeholders, research


Characteristics of the Right Question

Relevant data can be collected
More than one answer is possible
Produces information that decision makers want and feel they need


Outcome Documentation, Assessment, and Evaluation

Documentation
• To what extent were the outcome objectives met?

Assessment
• To what extent is any noticeable change or difference in participants related to having received the program interventions?

Evaluation
• Were the changes or differences due to participants having received the program and nothing else?


Three Levels of Intervention Effects Evaluations

Outcome documentation
• Purpose: Show that outcome and impact objectives were met
• Relationship to program effect theory: Confirms reaching targets set in the objectives that were based on the theory
• Level of rigor: Minimal
• Data collection: Data type and collection timing based on objectives being measured

Outcome assessment
• Purpose: Determine whether participants in the program experienced any change or benefit
• Relationship to program effect theory: Supports the theory
• Level of rigor: Moderate
• Data collection: Data type based on effect theory; timing based on feasibility

Outcome evaluation
• Purpose: Determine whether the program caused a change or benefit for the recipients
• Relationship to program effect theory: Verifies the theory
• Level of rigor: Maximum
• Data collection: Data type based on effect theory; baseline and post-intervention data are required


Evaluation vs. Research

Goal or purpose
• Research: Generating new knowledge for prediction
• Evaluation: Social accounting and program or policy decision making

Questions addressed
• Research: Scientist's own questions
• Evaluation: Derived from program goals and impact objectives

Problem addressed
• Research: Areas where knowledge is lacking
• Evaluation: Program impacts and outcomes

Guiding theory
• Research: Theory used as basis for hypothesis testing
• Evaluation: Theory underlying the program interventions; theory of evaluation

Appropriate techniques
• Research: Sampling, statistics, hypothesis testing, etc.
• Evaluation: Whichever research techniques fit the problem

Setting
• Research: Anywhere appropriate to the research question
• Evaluation: Anywhere evaluators can access the program recipients and controls

Dissemination
• Research: Scientific journals
• Evaluation: Internally and externally viewed reports; scientific journals

Allegiance
• Research: Scientific community
• Evaluation: Funding source, policy preference, scientific community


Rigor and Identifying a Program’s Net Effects


Three Theories Comprising the Program Effect Theory

Causal theory
• Existing and causal factors, moderators and mediators, and health outcome

Intervention theory
• How the interventions affect the causal, moderating, and mediating factors

Impact theory
• How immediate outcomes become long-term impact

At minimum, evaluation should measure causal factors and outcomes


Nomenclature for Effect Evaluation Variables


Dependent (y) Variables

Need to choose the most important outcome objectives, not a “fishing expedition”
Typically from the 6 health and well-being domains:
• Knowledge, lifestyle behaviors, cognitive processes, mental health, social health, resources


Independent (x) Variables

Called “independent” because they are not influenced by the outcome
Start by measuring causal factors
May be measured before and/or after a program, in participants and/or controls


Moderating and Mediating Variables

Mediating variables intervene between x and y
Moderating variables change the strength or direction of the relationship between x and y
Including them in the evaluation helps in understanding what influences intervention effectiveness (see the sketch below)
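As a concrete illustration, here is a minimal sketch of how moderation and mediation might be examined in an analysis, using Python with statsmodels: an interaction term tests moderation, and two follow-up models probe mediation in the classic Baron-and-Kenny style. All variable names (program, support, knowledge, outcome) and the simulated data are hypothetical, not from the chapter.

```python
# Sketch: examining moderating and mediating variables in a regression.
# All variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "program": rng.integers(0, 2, n),   # x: 1 = received intervention, 0 = control
    "support": rng.normal(0, 1, n),     # hypothetical moderator (social support)
})
df["knowledge"] = 0.5 * df["program"] + rng.normal(0, 1, n)  # hypothetical mediator
df["outcome"] = (0.4 * df["program"] + 0.6 * df["knowledge"]
                 + 0.3 * df["program"] * df["support"] + rng.normal(0, 1, n))

# Moderation: a nonzero program:support interaction means the program's effect
# on the outcome changes with the level of social support.
moderation = smf.ols("outcome ~ program * support", data=df).fit()
print(moderation.summary().tables[1])

# Mediation (Baron & Kenny sketch): the program should predict the mediator,
# and adding the mediator should shrink the program's direct effect.
path_a = smf.ols("knowledge ~ program", data=df).fit()
direct = smf.ols("outcome ~ program + knowledge", data=df).fit()
print("a path:", path_a.params["program"], "| direct effect:", direct.params["program"])
```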


Measurement Considerations

Unit of observation must match level of program
• e.g., individuals, schools, communities

Levels of measurement for variables
• Nominal, ordinal, interval

Measurement timing
Sensitivity of measures


Pros and Cons of Levels of Measurement

Nominal (categorical)
• Examples: ZIP code, race, yes/no
• Advantage: Easy to understand
• Disadvantage: Limited information from the data

Ordinal (rank)
• Examples: Social class, Likert scale, “top 10” list (worst to best)
• Advantages: Considerable information; can collapse into nominal categories
• Disadvantages: Sometimes statistically treated as a nominal variable; ranking can be a difficult task for respondents

Interval (continuous)
• Examples: Temperature, IQ, distances, dollars, inches, age
• Advantages: Most information; can collapse into nominal or ordinal categories
• Disadvantage: Can be difficult to construct valid and reliable interval variables
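For illustration, a brief sketch of how the three levels of measurement might be represented in an analysis dataset using pandas: nominal variables as unordered categoricals, ordinal variables as ordered categoricals, and interval variables as numeric columns that can be collapsed downward. Column names and values are hypothetical (they borrow the breastfeeding example from the next slide).

```python
# Sketch: representing nominal, ordinal, and interval variables in pandas.
# Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "breastfed": ["yes", "no", "yes"],                      # nominal
    "duration_cat": ["<2 weeks", ">6 weeks", "2-6 weeks"],  # ordinal
    "days_breastfed": [10, 90, 30],                         # interval
})

# Nominal: categories with no inherent order.
df["breastfed"] = pd.Categorical(df["breastfed"], categories=["no", "yes"])

# Ordinal: ordered categories, so comparisons such as < are meaningful.
df["duration_cat"] = pd.Categorical(
    df["duration_cat"],
    categories=["<2 weeks", "2-6 weeks", ">6 weeks"],
    ordered=True,
)

# Interval: numeric data carry the most information and can always be
# collapsed downward into ordinal categories, but not the reverse.
df["duration_from_days"] = pd.cut(
    df["days_breastfed"],
    bins=[0, 14, 42, 10_000],
    labels=["<2 weeks", "2-6 weeks", ">6 weeks"],
)
print(df.dtypes)
```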


Examples of Nominal, Ordinal, and Interval Variables

Childhood immunization
• Nominal: Yes/no up-to-date
• Ordinal: None required, 1 immunization required, >1 required
• Interval: Rubella titer

Breastfeeding
• Nominal: Yes/no breastfed
• Ordinal: Category for how long breastfed (<2 weeks, 2-6 weeks, >6 weeks)
• Interval: Number of days breastfed

Housing situation
• Nominal: Homeless or not
• Ordinal: Housing autonomy (own, rent monthly, rent weekly, homeless)
• Interval: Number of days living at current residence


Example Timeline of Intervention and Evaluation Activities

Month 1
• Intervention activity: Pilot intervention with small group
• Evaluation activity: Conduct focus group to refine intervention acceptability and elements of services utilization plan

Month 2
• Intervention activity: Recruit into program, screen for eligibility
• Evaluation activity: Randomly assign to program or wait list; collect data for baseline and comparison (participants n = 150; wait-listed controls n = 150)

Month 3
• Intervention activity: Provide intervention to 1st group of participants
• Evaluation activity: Analyze baseline, pre-intervention data

Month 4
• Intervention activity: Recruit into program, screen for eligibility
• Evaluation activity: Collect post-intervention data (participants (time 1) who completed program n = 125; new nonparticipant controls from wait list n = 130)

Month 5
• Intervention activity: Repeat intervention
• Evaluation activity: Analyze data

Month 6
• Evaluation activity: Collect post-intervention data (previous program participants (time 1) n = 95; current program participants (time 2) n = 120; current nonparticipant controls n = 110); analyze data
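Under this design, the simplest month-4 effect estimate compares post-intervention outcomes of program completers with those of wait-list controls. A minimal sketch, assuming a continuous outcome score and using Welch's t-test from SciPy; the scores below are simulated placeholders, not program data:

```python
# Sketch: estimating the month-4 effect by comparing program completers
# with wait-list controls. Outcome scores are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
participants = rng.normal(loc=72, scale=10, size=125)  # n = 125 completers
controls = rng.normal(loc=68, scale=10, size=130)      # n = 130 wait-list controls

# Welch's t-test does not assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(participants, controls, equal_var=False)
print(f"mean difference = {participants.mean() - controls.mean():.1f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```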


Threats to Data Quality

Missing data

Reliability
• Instrument issues, day-to-day individual variability, inter-rater agreement (see the sketch below), data entry

Validity
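Inter-rater agreement is commonly quantified with Cohen's kappa. A minimal sketch using scikit-learn, with hypothetical ratings from two raters coding the same ten records:

```python
# Sketch: quantifying inter-rater agreement with Cohen's kappa.
# Ratings are hypothetical codes from two raters on the same ten records.
from sklearn.metrics import cohen_kappa_score

rater_a = ["yes", "no", "yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater_b = ["yes", "no", "no",  "yes", "no", "yes", "yes", "no", "yes", "yes"]

# Kappa corrects raw percent agreement for agreement expected by chance:
# 1.0 = perfect agreement, 0 = chance-level agreement.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")
```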


Contextual Considerations in Evaluation Planning

Evaluation budget
• Roughly 10–20% of implementation budget

Evaluation standards
Evaluation ethics
Stakeholders’ interests


Summary of Evaluation Elements

What to evaluate
• Science considerations: Impact and outcome variables most likely to demonstrate the strength of the evidence for the effect theory
• Program considerations: Highest-priority impact and outcome objectives; variables that meet funding agency requirements

Who to evaluate
• Science considerations: Sample representativeness and comparability to non-participants; ethics of assignment to program or not
• Program considerations: Accessibility of program participants; availability of easily accessed target audience members

When to evaluate
• Science considerations: Effect onset and duration
• Program considerations: Convenience and accessibility of program participants

Why evaluate
• Science considerations: Scientific contributions and knowledge generation
• Program considerations: Program promotion, program refinement, funding agency requirements

How to evaluate
• Science considerations: Maximize rigor through choice of measures, design, and analysis
• Program considerations: Minimize intrusion of evaluation into program through seamlessness of evaluation with program implementation


Effect Evaluation across the Pyramid

Direct services level
• Evaluation of individuals may be most straightforward
• Questionnaire construction and secondary data analysis are main considerations

Enabling services level
• Similar to direct services level
• How to identify participants and choosing the right unit of observation are main issues


Effect Evaluation across the Pyramid, Continued

Population-based services level
• Major issues are aggregation of data and selecting the unit of observation (see the sketch after this list)

Infrastructure level
• Evaluation itself is an infrastructure process
• If the program affects infrastructure, then may need to collect individual-level data
• May need to develop infrastructure measures
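To illustrate the aggregation issue at the population-based level, the sketch below rolls hypothetical individual-level records up to a community-level unit of observation with pandas; all column names and values are invented:

```python
# Sketch: aggregating individual-level records to a community-level
# unit of observation. Column names and values are hypothetical.
import pandas as pd

individuals = pd.DataFrame({
    "community": ["A", "A", "B", "B", "B"],
    "immunized": [1, 0, 1, 1, 0],           # individual-level yes/no outcome
    "days_breastfed": [30, 12, 95, 60, 7],
})

# After aggregation the unit of observation is the community:
# each row summarizes one community rather than one person.
communities = individuals.groupby("community").agg(
    n=("immunized", "size"),
    pct_immunized=("immunized", "mean"),
    median_days_breastfed=("days_breastfed", "median"),
)
print(communities)
```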


Coming Up…

April 19
• Chapter 12 (Chapter 13 covered in Research Methods)

April 26
• Chapter 14 (Chapter 15 covered in Research Methods)

May 3
• Final Group Presentation
• You will present the entire Proposal in 30 minutes. Be creative. Q&A by me as well.
• Course Evaluation
• Group Evaluation

May 11
• Final Exam


