Monitoring & Evaluation Detailed Note


Glossary

baseline data - Initial information on program participants or other program aspects collected prior to receipt of services or program intervention. Baseline data are often gathered through intake interviews and observations and are used later for comparing measures that determine changes in your participants, program, or environment.

bias - (refers to statistical bias). Inaccurate representation that produces systematic error in a research finding. Bias may result in overestimating or underestimating certain characteristics of the population. It may result from incomplete information or invalid collection methods and may be intentional or unintentional.

comparison group - Individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your program participants. These individuals may not receive any services, or they may receive a different set of services, activities, or products. In no instance do they receive the same service(s) as those you are evaluating. As part of the evaluation process, the experimental (or treatment) group and the comparison group are assessed to determine which type of services, activities, or products provided by your program produced the expected changes.

confidentiality - Since an evaluation may entail exchanging or gathering privileged or sensitive information about individuals, a written form that assures evaluation participants that information provided will not be openly disclosed nor associated with them by name is important. Such a form ensures that their privacy will be maintained.

consultant - An individual who provides expert or professional advice or services, often in a paid capacity.

control group - A group of individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your program participants, but who do not receive the program (services, products, or activities) you are evaluating. Participants are randomly assigned to either the treatment (or program) group or the control group. A control group is used to assess the effect of your program on participants as compared to similar individuals not receiving the services, products, or activities you are evaluating. The same information is collected for people in the control group as in the experimental group.

cost-benefit analysis - A type of analysis that involves comparing the relative costs of operating a program (program expenses, staff salaries, etc.) to the benefits (gains to individuals or society) it generates. For example, a program to reduce cigarette smoking would focus on the difference between the dollars expended for converting smokers into nonsmokers and the dollar savings from reduced medical care for smoking-related disease, days lost from work, and the like.

cost-effectiveness analysis - A type of analysis that involves comparing the relative costs of operating a program with the extent to which the program met its goals and objectives. For example, a program to reduce cigarette smoking would estimate the dollars that had to be expended in order to convert each smoker into a nonsmoker.
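
As a minimal sketch of the arithmetic behind these two analyses (all figures below are hypothetical, not taken from the source), a cost-benefit analysis nets estimated dollar benefits against program costs, while a cost-effectiveness analysis divides program costs by the number of outcomes achieved:

```python
# Hypothetical smoking-cessation program figures (illustrative only).
program_cost = 250_000.0        # total cost of operating the program, in dollars
smokers_converted = 400         # participants who quit smoking
savings_per_quitter = 900.0     # estimated savings per quitter (medical care, lost work days)

# Cost-benefit analysis: compare total dollar benefits with total costs.
total_benefit = smokers_converted * savings_per_quitter
net_benefit = total_benefit - program_cost
benefit_cost_ratio = total_benefit / program_cost

# Cost-effectiveness analysis: cost per unit of outcome (per smoker converted).
cost_per_quitter = program_cost / smokers_converted

print(f"Net benefit: ${net_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"Cost per smoker converted: ${cost_per_quitter:,.0f}")
```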

cultural relevance - Demonstration that evaluation methods, procedures, and/or instruments are appropriate for the culture(s) to which they are applied. (Other terms include cultural competency and cultural sensitivity.)

culture - The shared values, traditions, norms, customs, arts, history, institutions, and experience of a group of people. The group may be identified by race, age, ethnicity, language, national origin, religion, or other social category or grouping.

data - Specific information or facts that are collected. A data item is usually a discrete or single measure. Examples of data items might include age, date of entry into the program, or reading level. Sources of data may include case records, attendance records, referrals, assessments, interviews, and the like.

data analysis - The process of systematically applying statistical and logical techniques to describe, summarize, and compare data collected.

data collection instruments - Forms used to collect information for your evaluation. Forms may include interview instruments, intake forms, case logs, and attendance records. They may be developed specifically for your evaluation or modified from existing instruments. A professional evaluator can help select those that are most appropriate for your program.

data collection plan - A written document describing the specific procedures to be used to gather the evaluation information or data. The plan describes who collects the information, when and where it is collected, and how it is to be obtained.

database - An accumulation of information that has been systematically organized for easy access and analysis. Databases typically are computerized.


design - The overall plan and specification of the approach expected in a particular evaluation. The design describes how you plan to measure program components and how you plan to use the resulting measurements. A pre- and post-intervention design, with or without a comparison or control group, is the design needed to evaluate participant outcome objectives.

evaluation - A systematic method for collecting, analyzing, and using information to answer basic questions about your program. It helps to identify effective and ineffective services, practices, and approaches.

evaluator - An individual trained and experienced in designing and conducting an evaluation that uses tested and accepted research methodologies.

evaluation plan - A written document describing the overall approach or design you anticipate using to guide your evaluation. It includes what you plan to do, how you plan to do it, who will do it, when it will be done, and why the evaluation is being conducted. The evaluation plan serves as a guide for the evaluation.

evaluation team - The individuals, such as the outside evaluator, evaluation consultant, program manager, and program staff, who participate in planning and conducting the evaluation. Team members assist in developing the evaluation design, developing data collection instruments, collecting data, analyzing data, and writing the report.

exit data - Information gathered after an individual leaves your program. Exit data are often compared to baseline data. For example, a Head Start program may complete a developmental assessment of children at the end of the program year to measure a child's developmental progress by comparing developmental status at the beginning and end of the program year.

experimental group - A group of individuals receiving the treatment or intervention being evaluated or studied. Experimental groups (also known as treatment groups) are usually compared to a control or comparison group.

focus group - A group of 7-10 people convened for the purpose of obtaining perceptions or opinions, suggesting ideas, or recommending actions. A focus group is a method of collecting data for evaluation purposes.

formative evaluation - A type of process evaluation of new programs or services that focuses on collecting data on program operations so that needed changes or modifications can be made to the program in its early stages. Formative evaluations are used to provide feedback to staff about the program components that are working and those that need to be changed.

immediate outcomes - The changes in program participants' knowledge, attitudes, and behavior that occur early in the course of the program. They may occur at certain program points, or at program completion. For example, acknowledging substance abuse problems is an immediate outcome.

impact evaluation - A type of outcome evaluation that focuses on the broad, longer-term impacts or results of a program. For example, an impact evaluation could show that a decrease in a community's overall infant mortality rate was the direct result of a program designed to provide early prenatal care.

in-kind service - Time or services donated to your program.

informed consent - A written agreement by program participants to voluntarily participate in an evaluation or study after having been advised of the purpose of the study, the type of information being collected, and how the information will be used.

instrument - A tool used to collect and organize information. Includes written instruments or measures, such as questionnaires, scales, and tests.

intermediate outcomes - Results or outcomes of a program or treatment that may require some time before they are realized. For example, part-time employment would be an intermediate outcome of a program designed to assist at-risk youth in becoming self-sufficient.

internal resources - An agency's or organization's resources, including staff skills and experiences and any information you already have available through current program activities.

intervention - The specific services, activities, or products developed and implemented to change or improve program participants' knowledge, attitudes, behaviors, or awareness.

logic model - See the definition for program model.

management information system (MIS) - An information collection and analysis system, usually computerized, that facilitates access to program and participant information. It is usually designed and used for administrative purposes. The types of information typically included in an MIS are service delivery measures, such as sessions, contacts, or referrals; staff caseloads; client sociodemographic information; client status; and treatment outcomes. Many MIS can be adapted to meet evaluation requirements.


measurable terms - Specifying, through clear language, what it is you plan to do and how you plan to do it. Stating time periods for activities, "dosage" or frequency information (such as three 1-hour training sessions), and number of participants helps to make project activities measurable.

methodology - The way in which you find out information; a methodology describes how something will be (or was) done. The methodology includes the methods, procedures, and techniques used to collect and analyze information.

monitoring - The process of reviewing a program or activity to determine whether set standards or requirements are being met. Unlike evaluation, monitoring compares a program to an ideal or exact state.

objective - A specific statement that explains how a program goal will be accomplished. For example, an objective of the goal to improve adult literacy could be to provide tutoring to participants on a weekly basis for 6 months. An objective is stated so that changes, in this case an increase in a specific type of knowledge, can be measured and analyzed. Objectives are written using measurable terms and are time-limited.

outcome - Outcomes are a result of the program, services, or products you provide and refer to changes in knowledge, attitude, or behavior in participants. They are referred to as participant outcomes in this manual.

outcome evaluation - Evaluation designed to assess the extent to which a program or intervention affects participants according to specific variables or data elements. These results are expected to be caused by program activities and tested by comparison of results across sample groups in the target population. Also known as impact and summative evaluation.

outcome objectives - The changes in knowledge, attitudes, awareness, or behavior that you expect to occur as a result of implementing your program component, service, or activity. Also known as participant outcome objectives.

outside evaluator - An evaluator not affiliated with your agency prior to the program evaluation. Also known as a third-party evaluator.

participant - An individual, family, agency, neighborhood, community, or State receiving or participating in services provided by your program. Also known as a client or target population group.


pilot test - Preliminary test or study of your program or evaluation activities to try out procedures and make any needed changes or adjustments. For example, an agency may pilot test new data collection instruments that were developed for the evaluation.

posttest - A test or measurement taken after a service or intervention takes place. It is compared with the results of a pretest to show evidence of the effects or changes as a result of the service or intervention being evaluated.

pretest - A test or measurement taken before a service or intervention begins. It is compared with the results of a posttest to show evidence of the effects of the service or intervention being evaluated. A pretest can be used to obtain baseline data.

process evaluation - An evaluation that examines the extent to which a program is operating as intended by assessing ongoing program operations and whether the targeted population is being served. A process evaluation involves collecting data that describe program operations in detail, including the types and levels of services provided; the location of service delivery; staffing; sociodemographic characteristics of participants; the community in which services are provided; and the linkages with collaborating agencies. A process evaluation helps program staff identify needed interventions and/or change program components to improve service delivery. It is also called formative or implementation evaluation.

program implementation objectives - What you plan to do in your program, component, or service. For example, providing therapeutic child care for 15 children and giving them 2 hot meals per day are program implementation objectives.

program model (or logic model) - A diagram showing the logic or rationale underlying your particular program. In other words, it is a picture of a program that shows what it is supposed to accomplish. A logic model describes the links between program objectives, program activities, and expected program outcomes.

qualitative data - Information that is difficult to measure, count, or express in numerical terms. For example, a participant's impression about the fairness of a program rule/requirement is qualitative data.

quantitative data - Information that can be expressed in numerical terms, counted, or compared on a scale. For example, improvement in a child's reading level as measured by a reading test.


random assignment - The assignment of individuals in the pool of all potential participants to either the experimental (treatment) or control group in such a manner that their assignment to a group is determined entirely by chance.
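
As a minimal sketch of what chance-based assignment might look like in practice (the participant identifiers and group sizes are invented for illustration, not from the source), one common approach is to shuffle the pool and split it in half:

```python
import random

# Hypothetical pool of potential participants (illustrative only).
pool = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(pool)              # order is now determined entirely by chance
half = len(pool) // 2
treatment_group = pool[:half]     # receive the intervention being evaluated
control_group = pool[half:]       # do not receive the intervention

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```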

reliability - Extent to which a measurement (such as an instrument or a data collection procedure) produces consistent results over repeated observations or administrations of the instrument under the same conditions each time. It is also important that reliability be maintained across data collectors; this is called interrater reliability.

sample - A subset of participants selected from the total study population. Samples can be random (selected by chance, such as every 6th individual on a waiting list) or nonrandom (selected purposefully, such as all 2-year-olds in a Head Start program).

standardized instruments - Assessments, inventories, questionnaires, or interviews that have been tested with a large number of individuals and are designed to be administered to program participants in a consistent manner. Results of tests with program participants can be compared to reported results of the tests used with other populations.

statistical procedures - The set of standards and rules, based on statistical theory, by which one can describe and evaluate what has occurred.

statistical test - Type of statistical procedure, such as a t-test or Z-score, that is applied to data to determine whether your results are statistically significant (i.e., the outcome is not likely to have resulted by chance alone).
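
As a minimal sketch of such a test (the scores are invented for illustration and the scipy library is assumed to be available), an independent-samples t-test compares outcome scores for a treatment group and a control group and reports a p-value indicating how likely the observed difference would be by chance alone:

```python
from scipy import stats

# Hypothetical posttest scores for two groups (illustrative only).
treatment_scores = [78, 85, 90, 72, 88, 95, 81, 79]
control_scores = [70, 74, 65, 80, 68, 72, 75, 71]

# Independent-samples t-test: is the difference in group means statistically significant?
t_statistic, p_value = stats.ttest_ind(treatment_scores, control_scores)

print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference unlikely to be due to chance alone (at the 0.05 level).")
else:
    print("Difference could plausibly be due to chance.")
```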

summative evaluation - A type of outcome evaluation that assesses the results or outcomes of a program. This type of evaluation is concerned with a program's overall effectiveness.

treatment group - Also called an experimental group, a treatment group is composed of individuals receiving the services, products, or activities (interventions) that you are evaluating.

validity - The extent to which a measurement instrument or test accurately measures what it is supposed to measure. For example, a reading test is a valid measure of reading skills, but is not a valid measure of total language competency.

variables - Specific characteristics or attributes, such as behaviors, age, or test scores, that are expected to change or vary. For example, the level of adolescent drug use after being exposed to a drug prevention program is one variable that may be examined in an evaluation.


Introduction to Evaluation

Evaluation is a methodological area that is closely related to, but distinguishable from, more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders, and other skills that social research in general does not rely on as much. Here we introduce the idea of evaluation and some of the major terms and issues in the field.

Definitions of Evaluation

Probably the most frequently given definition is:

Evaluation is the systematic assessment of the worth or merit of some object

This definition is hardly perfect. There are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few. Better perhaps is a definition that emphasizes the information-processing and feedback functions of evaluation. For instance, one might say:

Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object

Both definitions agree that evaluation is a systematic endeavor and both use the deliberately ambiguous term 'object', which could refer to a program, policy, technology, person, need, activity, and so on. The latter definition emphasizes acquiring and assessing information rather than assessing worth or merit, because all evaluation work involves collecting and sifting through data, making judgements about the validity of the information and of the inferences we derive from it, whether or not an assessment of worth or merit results.

The Goals of Evaluation

The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

Evaluation Strategies

'Evaluation strategies' means broad, overarching perspectives on evaluation. They encompass the most general groups or "camps" of evaluators; although, at its best, evaluation work borrows eclectically from the perspectives of all these camps. Four major groups of evaluation strategies are discussed here.

Scientific-experimental models are probably the most historically dominant evaluation strategies. Taking their values and methods from the sciences -- especially the social sciences -- they prioritize the desirability of impartiality, accuracy, objectivity and the validity of the information generated. Included under scientific-experimental models would be: the tradition of experimental and quasi-experimental designs; objectives-based research that comes from education; econometrically-oriented perspectives, including cost-effectiveness and cost-benefit analysis; and the recent articulation of theory-driven evaluation.

The second class of strategies are management-oriented systems models. Two of the most common of these are PERT, the Program Evaluation and Review Technique, and CPM, the Critical Path Method. Both have been widely used in business and government in this country. It would also be legitimate to include the Logical Framework or "Logframe" model developed at the U.S. Agency for International Development, and general systems theory and operations research approaches, in this category. Two management-oriented systems models were originated by evaluators: the UTOS model, where U stands for Units, T for Treatments, O for Observing Observations and S for Settings; and the CIPP model, where the C stands for Context, the I for Input, the first P for Process and the second P for Product. These management-oriented systems models emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities.

The third class of strategies are the qualitative/anthropological models. They emphasize the importance of observation, the need to retain the phenomenological quality of the evaluation context, and the value of subjective human interpretation in the evaluation process. Included in this category are the approaches known in evaluation as naturalistic or 'Fourth Generation' evaluation; the various qualitative schools; critical theory and art criticism approaches; and the 'grounded theory' approach of Glaser and Strauss, among others.

Finally, a fourth class of strategies is termed participant-oriented models. As the term suggests, they emphasize the central importance of the evaluation participants, especially clients and users of the program or technology. Client-centered and stakeholder approaches are examples of participant-oriented models, as are consumer-oriented evaluation systems.


With all of these strategies to choose from, how to decide? Debates that rage within the evaluation profession -- and they do rage -- are generally battles between these different strategists, with each claiming the superiority of their position. In reality, most good evaluators are familiar with all four categories and borrow from each as the need arises. There is no inherent incompatibility between these broad strategies -- each of them brings something valuable to the evaluation table. In fact, in recent years attention has increasingly turned to how one might integrate results from evaluations that use different strategies, carried out from different perspectives, and using different methods. Clearly, there are no simple answers here. The problems are complex and the methodologies needed will and should be varied.

Types of Evaluation

There are many different types of evaluations depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated -- they help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object -- they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and estimating the relative costs associated with the object.

Formative evaluation includes several evaluation types:

• needs assessment determines who needs the program, how great the need is, and what might work to meet the need

• evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness

• structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes

• implementation evaluation monitors the fidelity of the program or technology delivery

• process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

Summative evaluation can also be subdivided:

• outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes

• impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole

• cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values

• secondary analysis reexamines existing data to address new questions or use methods not previously employed

• meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question

Evaluation Questions and Methods

Evaluators ask many different kinds of questions and use a variety of methods to address them. These are considered within the framework of formative and summative evaluation as presented above.

In formative research the major questions and methodologies are:

What is the definition and scope of the problem or issue, or what's the question?

Formulating and conceptualizing methods might be used, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it? 

The most common method used here is "needs assessment," which can include: analysis of existing data sources, and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.

How should the program or technology be delivered to address the problem?

Some of the methods already listed apply here, as do detailing methodologies like simulation techniques, or multivariate methods like multiattribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.

How well is the program or technology delivered? 

Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.

The questions and methods addressed under summative evaluation include:

What type of evaluation is feasible? 


Monitoring provides information that will be useful in: 

• Analyzing the situation in the community and its project; 

• Determining whether the inputs in the project are well utilized; 

• Identifying problems facing the community or project and finding solutions; 

• Ensuring all activities are carried out properly by the right people and on time;
• Applying lessons from one project experience to another; and
• Determining whether the way the project was planned is the most appropriate way of solving the problem at hand.

Monitoring, Planning and Implementation

Integrating the Monitoring at All Stages

Monitoring is an integral part of every project, from start to finish. A project is a series of activities (investments) that aim at solving particular problems within a given time frame and in a particular location. The investments include time, money, human and material resources. Before achieving the objectives, a project goes through several stages. Monitoring should take place at, and be integrated into, all stages of the project cycle.

The three basic stages include: 

• Project planning (situation analysis, problem identification, definition of the goal, formulating strategies, designing a work plan, and budgeting);
• Project implementation (mobilization, utilization and control of resources, and project operation); and
• Project evaluation.

Monitoring should be executed by all individuals and institutions which have an interest (stakeholders) in the project. To efficiently implement a project, the people planning and implementing it should plan for all the interrelated stages from the beginning.

In the "Handbook for Mobilizers," we said the key questions of planning and management were: (1) What do we want? (2)

What do we have? (3) How do we use what we have to getwhat we want? and (4) What will happen when we do?

.

They can be modified,using "where," insteadof  "what,"  while theprinciples are thesame.

.. 

The questions become: Where are we? Where do we want to go?

How do we get there? and 

What happens as we do?

.. 

Situation Analysis and Problem Definition:


This asks the question, "Where are we?" (What do we have?).

Situation analysis is a process through which the general characteristics and problems of the community are identified. It involves the identification and definition of the characteristics and problems specific to particular categories of people in the community. These could be people with disabilities, women, youth, peasants, traders and artisans. Situation analysis is done through collecting the information necessary to understand the community as a whole and individuals within the community. Information should be collected on what happened in the past, what is currently happening, and what is expected to happen in the future, based on the community's experiences.

Information necessary to understand the community includes, among others:
• Population characteristics (eg sex, age, tribe, religion and family sizes);
• Political and administrative structures (eg community committees and local councils);
• Economic activities (including agriculture, trade and fishing);
• Cultural traditions (eg inheritance and the clan system), transitions (eg marriages, funeral rites), and rites of passage (eg circumcision);
• On-going projects like those of the sub-county, district, central Government, non-governmental organizations (NGOs), and community-based organizations (CBOs);
• Socio-economic infrastructure or communal facilities (eg schools, health units, and access roads); and
• Community organizations (eg savings and credit groups, women's groups, self-help groups and burial groups), their functions and activities.

Information for situation analysis and problem definition should be collected with the involvement of the community members, using several techniques. This is to ensure valid, reliable and comprehensive information about the community and its problems.

Some of the following techniques could be used:
• Documents review;
• Surveys;
• Discussions with individuals, specific groups and the community as a whole;
• Interviews;
• Observations;
• Listening to people;
• Brainstorming;
• Informal conversations;
• Village social, resources, services and opportunities;
• Transect walks, maps; and
• Problem tree.

Situation analysis is very important before any attempts to solve the problem because:
• It provides an opportunity to understand the dynamics of the community;
• It helps to clarify social, economic, cultural and political conditions;
• It provides an initial opportunity for people's participation in all project activities;
• It enables the definition of community problems and solutions; and
• It provides information needed to determine objectives, plan and implement.

Situation analysis should be continuous, in order to provide additional information during project implementation, monitoring and re-planning. Situation analysis and problem identification should be monitored to ensure that correct and updated information is always available about the community and its problems. Since monitoring should be integrated into all aspects or phases of the process, let us go through each phase and look at the monitoring concerns associated with each.

Setting Goals and Objectives:

Goal setting asks the question, "Where do we want to go?" (What do we want?)

Before any attempts to implement a project, the planners, implementers and beneficiaries should set up goals and objectives. See Brainstorm for a participatory method to do this.

A goal is a general statement of what should be done to solve a problem. It defines broadly what is expected out of a project. A goal emerges from the problem that needs to be addressed and signals the final destination of a project. Objectives are finite sub-sets of a goal and should be specific, in order to be achievable.

The objectives should be "SMART." They should be:
• Specific: clear about what, where, when, and how the situation will be changed;
• Measurable: able to quantify the targets and benefits;
• Achievable: able to attain the objectives (knowing the resources and capacities at the disposal of the community);
• Realistic: able to obtain the level of change reflected in the objective; and
• Time bound: stating the time period in which each will be accomplished.

To achieve the objectives of a project, it is essential to assess the resources available within the community and those that can be accessed from external sources. See Revealing Hidden Resources. The planners, implementers and community members should also identify the constraints they may face in executing the project and how they can overcome them. Based on the extent of the constraints and positive forces, the implementers may decide to continue with the project or to drop it.

The goals and objectives provide the basis for monitoring and evaluating a project. They are the yardsticks upon which project success or failure is measured.

Generating Structures and Strategies:

This aspect asks the third key question, "How do we get there?" (How do we get what we want with what we have?)

The planners and implementers (communities and their enablers) should decide on how they are going to implement a project; this is the strategy. Agreeing on the strategy involves determining all the items (inputs) that are needed to carry out the project, and defining the different groups or individuals and the particular roles they are to play in the project. These groups and individuals that undertake particular roles in the project are called "actors."

Generating the structures and strategies therefore involves: 

• Discussing and agreeing on the activities to be undertaken during implementation;
• Defining the different actors within and outside the community, and their roles; and
• Defining and distributing costs and materials necessary to implement the project.

After establishing the appropriateness of the decisions, the executive should discuss and agree with all actors on how the project will be implemented. This is called designing a work plan. (How do we get what we want?)

A work plan is a description of the necessary activities, set out in stages, with a rough indication of the timing.

In order to draw a good work plan, the implementers should: 

• List all the tasks required to implement a project; 

• Put the tasks in the order in which they will be implemented; 

• Show allocation of the responsibilities to the actors; and  

• Give the timing of each activity.
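
As a minimal sketch of what such a work plan might look like when recorded as a simple data structure (the school-construction tasks, actors and timings below are invented for illustration, not taken from the source), each task carries its order, the responsible actor, and the timing:

```python
# Hypothetical work plan for a small school-construction project (illustrative only).
work_plan = [
    {"order": 1, "task": "Mobilize the community", "actor": "Project committee", "timing": "Week 1"},
    {"order": 2, "task": "Make bricks",            "actor": "Volunteer group",   "timing": "Weeks 2-4"},
    {"order": 3, "task": "Build classroom walls",  "actor": "Local artisan",     "timing": "Weeks 5-8"},
    {"order": 4, "task": "Roof the classroom",     "actor": "Local artisan",     "timing": "Weeks 9-10"},
]

# Print the plan in implementation order so monitors can check progress against it.
for item in sorted(work_plan, key=lambda t: t["order"]):
    print(f'{item["order"]}. {item["task"]:<25} {item["actor"]:<18} {item["timing"]}')
```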

The work plan is a guide to project implementation and a basis for project monitoring. It therefore helps to:

• Finish the project in time; 

• Do the right things in the right order; 

• Identify who will be responsible for what activity; and  

• Determine when to start project implementation.

The implementers and planners have to agree on monitoring indicators. Monitoring indicators are quantitative and qualitative signs (criteria) for measuring or assessing the achievement of project activities and objectives. The indicators will show the extent to which the objectives of every activity have been achieved. Monitoring indicators should be explicit, pertinent and objectively verifiable.

Monitoring indicators are of four types, namely:
• Input indicators: describe what goes into the project (eg number of bricks brought on site and amount of money spent);
• Output indicators: describe the project activity (eg number of classrooms built);
• Outcome indicators: describe the product of the activity (eg number of pupils attending the school); and
• Impact indicators: measure change in conditions of the community (eg reduced illiteracy in the community).
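
As a minimal sketch of how these four indicator types might be recorded for a classroom-construction project (the planned and actual values below are invented for illustration, not from the source), each indicator keeps a planned target alongside the value observed during monitoring:

```python
# Hypothetical indicators for a classroom-construction project (illustrative only).
indicators = {
    "input":   {"name": "Bricks brought on site",      "planned": 10_000, "actual": 8_500},
    "output":  {"name": "Classrooms built",            "planned": 2,      "actual": 1},
    "outcome": {"name": "Pupils attending the school", "planned": 80,     "actual": 65},
    "impact":  {"name": "Adult illiteracy rate (%)",   "planned": 20,     "actual": 24},
}

# Compare what is happening with what was planned, indicator by indicator.
for kind, ind in indicators.items():
    print(f'{kind:<8} {ind["name"]:<30} planned={ind["planned"]}, actual={ind["actual"]}')
```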

Writing down the structures and strategies helps in project monitoring because they specify what will be done during project implementation. Planning must indicate what should be monitored, who should monitor, and how monitoring should be undertaken.

Implementation:

Monitoring implementation asks the fourth key question, "What happens when we do?"

Implementation is the stage where all the planned activities are put into action. Before the implementation of a project, the implementers (spearheaded by the project committee or executive) should identify their strengths and weaknesses (internal forces), and opportunities and threats (external forces). The strengths and opportunities are positive forces that should be exploited to efficiently implement a project. The weaknesses and threats are hindrances that can hamper project implementation. The implementers should ensure that they devise means of overcoming them.

Monitoring is important at this implementation phase to ensure that the project is implemented as per the schedule. This is a continuous process that should be put in place before project implementation starts. As such, the monitoring activities should appear on the work plan and should involve all stakeholders. If activities are not going on well, arrangements should be made to identify the problems so that they can be corrected.

Monitoring is also important to ensure that activities are implemented as planned. This helps the implementers to measure how well they are achieving their targets. It is based on the understanding that the process through which a project is implemented has a lot of effect on its use, operation and maintenance. Implementing the project on target is therefore not satisfactory by itself; the implementers need to ask themselves, and answer, the question, "How well do we get there?" (What happens when we do?)

Summary of the Relationship:

The above illustrates the close relationship between monitoring, planning and implementation. It demonstrates that:
• Planning describes ways in which implementation and monitoring should be done;
• Implementation and monitoring are guided by the project work plan; and
• Monitoring provides information for project planning and implementation.

There is a close and mutually reinforcing (supportive) relationship between planning, implementation and monitoring. None of the three can be done in isolation from the other two, and when doing one of the three, the planners and implementers have to cater for the others.

Beyond Monitoring: Evaluation

Evaluating Achievements

The Meaning of Evaluation:

Evaluation is a process of judging the value of what a project or program has achieved, particularly in relation to the planned activities and the overall objectives. It involves value judgment, and hence it is different from monitoring (which is the observation and reporting of observations).

Purpose of Evaluation:

Evaluation is important to:
• Identify the constraints or bottlenecks that hinder the project in achieving its objectives, so that solutions to the constraints can be identified and implemented;
• Assess the benefits and costs that accrue to the intended direct and indirect beneficiaries of the project. If the project implemented is, for example, the protection of a spring, evaluation highlights the people who fetch and use water and the people whose land is wasted and whose crops are destroyed during the process of water collection;
• Draw lessons from the project implementation experience and use those lessons in re-planning projects in that community and elsewhere; and
• Provide a clear picture of the extent to which the intended objectives of the activities and project have been realized.

The Process of Evaluation:

Evaluation can and should be done: (a) before, (b) during, and (c) after implementation.

Before project implementation, evaluation is needed in order to:
• Assess the possible consequences of the planned project(s) to the people in the community over a period of time;
• Make a final decision on what project alternative should be implemented; and
• Assist in making decisions on how the project will be implemented.

During project implementation: Evaluation should be a continuous process and should take place in all project implementation activities. This enables the project planners and implementers to progressively review the project strategies according to the changing circumstances, in order to attain the desired activity and project objectives.

After project implementation: This is to retrace the project planning and implementation process, and the results, after project implementation. This further helps in:
• Identifying constraints or bottlenecks inherent in the implementation phase;
• Assessing the actual benefits and the number of people who benefited;
• Providing ideas on the strength of the project, for replication; and
• Providing a clear picture of the extent to which the intended objectives of the project have been realized.

Management Information and Information Management

How to handle the information that monitoring generates

Management information and information management are different; management information is a kind of information (the data), while information management is a kind of management (the system). Information management is the process of analyzing and using information which has been collected and stored in order to enable managers (at all levels) to make informed decisions. Management information is the information needed in order to make management decisions.

Monitoring provides information about what is going on in the project. This information is collected during the planning and implementation phases. The information helps to detect if anything is going wrong in the project. Management can therefore find solutions to ensure success.

The Importance of Management Information:

Management information is important to:
• Make decisions necessary to improve management of facilities and services; and
• Implement participatory planning, implementation, monitoring and evaluation.

How to Use Information Management:

To be able to use information to make management decisions, the information should be managed (collected, stored and analyzed). Whereas information management (the process of collecting and storing information) and management information (the information needed to make informed decisions) are different, they always reinforce each other and cannot be separated in day-to-day operations.

Information management therefore involves:
• Determining the information needed;
• Collecting and analyzing the information;
• Storing and retrieving it when needed;
• Using it; and
• Disseminating it.

Determining Information Needed for Management: During project planning, management and monitoring, much information is generated. Some is needed for making management decisions on the spot; other information is needed for later management decisions. A good management information system should therefore assist the project managers to know the information they need to collect for different management decisions at different times.

Collecting and Analyzing Information for Information Management: Information can be obtained from reports of technical people, village books, forms filled in by the different actors, community meetings, interviews, observation and community maps.

Storing Information: It is important to store information for further reference. Information can be stored in the village book, project reports, forms and in the mind. The major principle in information storage is the ease with which it can be retrieved.

Using Information: Information can be used for solving community problems, determining resources (amount and nature), soliciting support, and determining future projects.

Dissemination or Flow of Information: For information to be adequately used it needs to be shared with other stakeholders or users. The other stakeholders can also use this information for their management decisions, and they can help the one collecting the information to draw meaning and use out of it for management purposes. Information should be shared between the village, parish, sub-county, district, national office, NGOs and the donor. Management information is part and parcel of monitoring because such information is obtained during monitoring and helps in the planning and implementation of monitoring activities.

Whether it is from the staff or stakeholders, one of the most effective ways of getting useful monitoring information is through the Annual Review. Although it is described in its role of getting participatory management information, it is equally applicable in obtaining monitoring information.


Participation in Project Monitoring

The Roles of Stakeholders

All stakeholders have a stake in knowing how well things are going.

Monitoring is a vital management and implementation role that cannot be left to only one stakeholder. As many individuals and institutions as possible that have any interest in the project, at all levels, should participate in monitoring. As with community participation and participatory management, participation in monitoring does not happen spontaneously. The persons whom you want to participate must be encouraged and trained to participate.

Advantages of Participation:

The advantages of participation in monitoring include: (a) a common understanding, (b) enhanced accountability, (c) better decisions, (d) performance improvement, (e) improved design, and (f) more information.

Common Understanding of Problems and Identification of Solutions: Participative monitoring helps stakeholders to get a shared understanding of the problems facing the community or project (their causes, magnitude, effects and implications). This facilitates the identification of solutions. These solutions are more likely to be appropriate because they are derived from the current situation.


Benefits the Target Groups and Enhances Accountability: Participation in monitoring ensures that the people for whom the project was intended are the ones benefiting from it. It increases the awareness of people's rights, which elicits their participation in guarding against project resource misappropriation. Guarding against resource misappropriation makes project implementation less expensive.

Making Appropriate Decisions: Monitoring provides information necessary for making management decisions. When many people participate in monitoring, it means that they have participated in providing management information and contributed to decision making. The resulting decisions are more likely to be acceptable and relevant to the majority of the population. This makes human and resource mobilization for project implementation easier.

Performance Improvement: During monitoring, if a performance deviation is discovered, solutions can be devised. Finding appropriate decisions that can be implemented requires the participation of those people who will put the solution into practice. Therefore participation in monitoring can help improve project performance.

Design of Projects: The information generated during project monitoring helps in re-designing projects in that locality to make them more acceptable. The lessons learned can also be used in the design of similar projects elsewhere.

Collection of Information: If many people participate in monitoring, they are more likely to come up with more accurate information. This is because information that is omitted by one party can be collected by another. Each stakeholder puts varying emphasis on different aspects of the project, using different methods. In addition, one party knowing that the information they are collecting will be verified forestalls deliberate wrong reporting.

Challenges of Participation in Monitoring:

Whereas participation in monitoring has a number of virtues, it is likely to face a number of challenges. The challenges include: (a) high costs, (b) variations in information, and (c) inaccuracies.

High Initial Costs: Participation in monitoring requires many resources (eg time, transport and performance-related allowances). It is a demanding process that can over-stretch the volunteer spirit at community level and financial resources at district and national levels. Therefore it must be simple and focused on vital elements.

Quantity and Variety of Information: Monitoring requires collection, documentation and sharing of a wide range of information. This requires many skills that are lacking in the communities. It therefore necessitates much time and resources for capacity building. It also risks wrong reporting.

Inaccuracy of Information: Some stakeholders, from the community to the national level, may intentionally provide wrong information to depict better performance and outputs, or because of community or project differences. Counteracting wrong or incorrect reporting needs sensitization and consensus building that is difficult to attain.

The advantages of participation in monitoring evidently outweigh the challenges. It is therefore necessary to encourage and support participatory monitoring as we devise means to counteract the challenges.

Levels of Monitoring

Community, District, National, Donor

Monitoring methods differ at each level, and complement each other.

There is no universal vocabulary for the varying levels of government and administration from the community level to the national level. Terminology varies from country to country. I cannot, therefore, use a set of terms that can be applied in many countries, although the principles and methods of community empowerment are universally similar (with minor variations between countries). Since these training modules were mainly developed in Uganda, I am using the terminology of Uganda. When Museveni came to power, the administrative levels ranged from Resistance Council Level One (community or village) up to Resistance Council Level Five (district). More recently, Uganda reverted to a former terminology with colonial vestiges: 1 = village, 2 = parish, 3 = sub-county, 4 = county and 5 = district. The precise terms are not important here; what is important is that there are monitoring roles that range from the village to the national level. Use whatever terms are applicable to your situation.

Monitoring should be carried out by all stakeholders at all levels. Each level, however, has specific objectives for monitoring, methods, and therefore roles. For monitoring to be effective, there is need for a mechanism of giving feedback to all people involved at all levels (community, district, national and donor).

Monitoring at Community Level:

Community level is where implementation and utilization of the benefits of the project take place; in most cases it is the village and parish level. At this level, the major purpose of monitoring is to improve the implementation and management of projects. The interest of the community as a whole in monitoring school construction, for example, is to ensure that the construction of the school (an output) is being done as planned. The specific objectives for monitoring at this level therefore include (a) ensuring that the projects are implemented on time, (b) that they are of good quality, and (c) that the project inputs are well utilized.

Monitoring at this level involves:

• Identifying a community project. This should be identified in a participatory manner to reflect the community needs and stimulate people's interest in its implementation and monitoring. If the process of project identification is not well done and does not reflect community interests, it is likely that the communities will not participate in the monitoring of the implementation activities;

• Identifying the team(s) to spearhead the monitoring of the project in the community. The roles of each team, how they should carry out the monitoring process, and the use and sharing of the information generated with other groups within and outside the community, should be specified and explained;

• Designing a work plan that guides project monitoring. The work plan should specify the activities in the order that they will be executed and the individuals to execute them. This helps the people monitoring to know the activities that should be carried out by particular individuals in a given period of time. If the activities are not carried out, the people monitoring get guidance in coming up with solution(s);

• Determining the major activities from the work plan. Whereas all activities in the work plan are necessary and should be monitored, it is useful to identify the major activities on the basis of which objectives and indicators will be set. For example, if the preparatory activities in a school construction project include community mobilization, borrowing of hoes from the neighboring village, digging of the soil and fetching of water for brick making, the major activity summarizing all the sub-activities could be brick making;

• Identifying the objectives for each activity. For example, the objective of brick making as an activity during the school construction project could be: to make ten thousand bricks by the end of February;

• Determining the indicators for each activity objective. The indicators help the monitoring team to tell how far they have gone in achieving the objectives of each activity. In our example, one indicator could be the number of bricks made;

• Comparing what is happening with what was planned, to tell whether the project is on schedule and proceeding as planned. The monitors should check the indicators to measure how far they have reached in achieving the objectives. This should involve looking at the quality of work to ensure that it is good. The monitoring team may need to involve a technical person such as a local artisan or a technician from the district to ascertain the quality of the project (if it is a construction project). A sketch of this planned-versus-actual comparison is given after this list;

• Agreeing on how often the team should visit the project site as a means of verifying what is taking place. For a community project, to avoid big deviations from the work plan, monitoring visits should be carried out at least once a week. During the project visits, the team should look at what is happening (observe) and talk to everybody who is involved in the project;

• Writing down the findings whenever a monitoring visit is carried out. The team can use a form attached in the annex or agree on any other reporting format that captures the findings of the exercise in relation to the work plan. The findings from the monitoring visits should be discussed with other members of the implementation committee. The monitoring and implementation teams should use the information collected to detect and solve the problems facing the project;

• Storing the information well and using it for future actions and to inform other stakeholders. At each site there should be a file in which copies of monitoring reports and other documents related to the project are kept.
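The planned-versus-actual comparison described above can be kept very simple. As an illustration only, the sketch below uses the brick-making example; the figures, the weekly visit records and the assumed eight-week work plan are all hypothetical, not taken from any real project.

```python
# Hypothetical sketch: comparing planned and actual progress for one
# major activity ("brick making", objective: 10,000 bricks by the end of February).
# All figures are illustrative only.

planned_total = 10_000          # objective for the activity
weekly_visits = [               # indicator recorded at each weekly monitoring visit
    {"week": 1, "bricks_made": 1_800, "quality_ok": True},
    {"week": 2, "bricks_made": 1_500, "quality_ok": True},
    {"week": 3, "bricks_made": 900,  "quality_ok": False},  # artisan flagged poor mixing
]

def progress_report(planned, visits):
    """Summarize how far the activity has gone against its objective."""
    actual = sum(v["bricks_made"] for v in visits)
    weeks_elapsed = len(visits)
    expected_by_now = planned * weeks_elapsed / 8   # assume an 8-week work plan
    return {
        "actual": actual,
        "expected_by_now": round(expected_by_now),
        "on_schedule": actual >= expected_by_now,
        "quality_issues": [v["week"] for v in visits if not v["quality_ok"]],
    }

print(progress_report(planned_total, weekly_visits))
# e.g. {'actual': 4200, 'expected_by_now': 3750, 'on_schedule': True, 'quality_issues': [3]}
```

Whether the comparison is done on paper, in a notebook or in a spreadsheet, the point is the same: record the indicator at every visit, compare it with the work plan, and note quality problems alongside the numbers.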

Monitoring at District and Sub-County Level:

The district and sub-county officials should get information from the community monitoring (monitoring performance in relation to turning the inputs into outputs). They should also monitor the outcome of the project (e.g. the effect of school construction on enrolment levels). The district should also monitor the increase in strength, capacity and power of the target community to stimulate its own development.

The objectives therefore include: supporting the improvement in project performance, and measuring the applicability of the way the project was designed in relation to community strengthening. The methods for monitoring that can be adopted at this level include (a) routine monitoring and supervisory support, and (b) qualitative enquiry.

Routine Monitoring and Supervisory Support: This requires the District Project Coordinator, Community Development Assistant, other technical staff and politicians at the district and sub-county to visit the project sites to ascertain what is happening in relation to what was planned. A copy of the work plan and the community monitoring reports should be kept in the project site file. This helps whoever wants to compare progress with the work plan, and to read the comments of the monitoring team, to do so without necessarily tracing the members of the monitoring team, who may not be readily available.

During routine monitoring, discussions should be held with all the people involved in the implementation and monitoring of the project. Look at the manner in which each team performs its duties (as a means of verifying the increase in community capacity). Make and record comments about good and bad elements in the project. Recommend solutions, showing who should undertake them, together with the financial and time implications and the negative effects that may accrue to the project if they are not taken. A copy of the comments should be left in the project site file/book and another copy discussed and filed at the district.

The sub-counties and districts should organize discussions of project progress at least once a month. They should also file and submit a project progress report as part of the routine monthly reporting to the district and national office respectively.

The major issues to look at during the district and sub-county routine monitoring include:

• Levels of actual community, sub-county, district and donor contributions (including funds, materials, time and expertise);
• Timely implementation and quality of projects;
• Appropriate use and accountability of community and donor resources;
• Level of community involvement in the project;
• Commitment and performance of community committees; and
• Timely use of information generated through the community routine monitoring.

Qualitative Enquiry: The district, in liaison with the sub-county, should organize Focus Group Discussions, Key Informant Interviews, and Community Group Discussions with communities and other key informants at least twice a year. These enquiries help the district to:

• Verify some of the information collected by the community and district;
• Get information on issues that are not captured during routine monitoring;
• Discuss on the spot with the communities possible solutions to problems hindering project performance; and
• Discuss with the community, learn from them, and explain capacity building issues.

These qualitative enquiries should be simple and involve the community members, both to reduce costs and to enable the community members to learn how to conduct them as a means of community strengthening. The outputs should be analyzed in relation to the community and routine district findings and should also be used to discuss solutions. Findings should be well documented and shared at the national level in order to feed national level management information.

The major issues during the qualitative enquiries include:

• Establishing whether the projects were the community's priorities (and hence the appropriateness of the project identification);
• Community members' knowledge and appreciation of the project methodology, and their willingness to participate and contribute to the project activities;
• Effectiveness of the community members during project monitoring;
• Opinions of community members on quality and use of resources (accountability);
• Skills (e.g. decision-making capacity and negotiation skills) acquired by specific categories of people in the community during project implementation; and
• Community knowledge of their rights and obligations.

Before qualitative enquiries, each district and sub-county should identify and discuss any management information gaps to form periodic themes. Specific designs would also be agreed upon at this stage.

Monitoring at National and Donor Level:

Monitoring at the national and donor level is to find out whether project inputs are well used (desired outputs are being realized), whether the project design is appropriate, and for learning.

The objectives of monitoring at this level include:

• To ensure that the inputs are efficiently and effectively utilized;
• To ensure that the planned activities are being realized;
• To measure the applicability of the methodology to community strengthening; and
• To draw lessons from the project intervention for future projects in the country and beyond. The lessons will provide the basis for replication of the project methodology.

The methods for monitoring at this level include: (a) routine monitoring, (b) action research and qualitative enquiries, and (c) surveys.

Routine Monitoring: Routine monitoring should be done on a quarterly basis by project staff and the ministry's planning unit to check on the levels of activities and objectives. Since the national level gets information about the projects and activities through monthly district progress reports, national routine monitoring should be limited in scope. It should cover aspects that appear contradictory, problematic, very satisfactory or unique. These enable the national office to provide the necessary support and draw lessons.

Action Research and Qualitative Enquiries: The national office should carry out in-depth qualitative enquiries once a year. These should focus on drawing lessons from the project design and implementation experiences for replication.

Therefore, the major issues at this level include:

• The contribution of community projects to national and donor priorities;
• Satisfaction derived by the communities (levels of service and facility utilization);
• Capacity of the community to operate and maintain the services and facilities;
• Ability of the community members to pay for the services and facilities;
• Appropriateness of the project methodology in light of national policies;
• Leadership, authority and confidence within communities;
• Capacity building and functioning of Local Governments and district personnel;
• Representation (especially of women) in the community decision-making process;
• Replication of experiences in other projects and training institutions;
• Capacity building of existing individuals and institutions; and
• The functioning of the monitoring and management information systems.

Surveys: Surveys should also be conducted to gather quantifiable data and supplement the information generated through other methods. These can be contracted to research institutions such as universities.

Monitoring Issues and Procedures at Different Levels:

Monitoring issues and procedures are described here for each level. This is to emphasize that the stakeholders should spearhead, but not exclusively carry out, all monitoring. In practice, the issues and procedures of the different stakeholders overlap, and each stakeholder should support the others in their monitoring responsibilities. The issues mentioned here are not exhaustive but indicate what should be done; each level should therefore collect information on any other issues deemed relevant to its particular situation.

These are presented as three tables: (1) community level, (2) district level, and (3) national level, indicating the key issues at each level.

Community Level:

At the community level the three main actors who have a stake in the community strengthening intervention are:

• the CBO Executive or Implementing Committee (CIC) of the community project;
• the community mobilizers; and
• the Parish Development Committee (PDC).

The following table looks at the main issues of interest, monitoring indicators, means of observing, frequency, and suggested monitoring procedures, for each of these three stakeholders.

Executive Committee
• Issue: Timely implementation of projects
  Monitoring indicator: Number of project activities implemented on time
  Means of observing: Routine project visits
  Frequency: Weekly
  Monitoring procedure: Members use the routine monitoring form
• Issue: Appropriate use of project resources
  Monitoring indicator: No materials misused
  Means of observing: Routine project visits; project quality checks
  Frequency: Weekly
  Monitoring procedure: Members use the routine monitoring form and check quality using the technician's guidelines
• Issue: Proper collection and storage of project information
  Monitoring indicator: Percentage of projects with project site files; number of reports in site files
  Means of observing: Reviewing the project site files
  Frequency: Weekly
  Monitoring procedure: Members of the project committee review the project site file, reports and comments

Community Mobilizers
• Issue: Realistic project implementation work plan
  Monitoring indicator: Number of project work plans with well sequenced activities
  Means of observing: Compare activities in the work plan with how they are implemented
  Frequency: Monthly
  Monitoring procedure: Mobilizers (1) review the sequence of project work plans with a technical person, and (2) conduct monthly project site visits
• Issue: Community participation in project activities
  Monitoring indicator: Number of persons performing their roles; number of activities; amount of resources provided by the community
  Means of observing: Project site visits; discussions with people about their contributions
  Frequency: Monthly

Parish Development Committee
• Issue: Accountability of project resources
  Monitoring indicator: Percentage of resources accounted for
  Means of observing: Resource accountability form
  Frequency: Quarterly
  Monitoring procedure: PDC members use the project resource accountability form
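The same issue/indicator/frequency structure recurs in the district and national tables below, so each row can be thought of as one record in a simple monitoring matrix. The sketch below is illustrative only; the field names and the single example record are assumptions based on the community-level table above, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class MonitoringItem:
    """One row of a monitoring matrix: who watches what, how, and how often."""
    stakeholder: str          # e.g. "Executive Committee", "Community Mobilizers", "PDC"
    issue: str                # the aspect of the project being monitored
    indicator: str            # how progress on the issue is measured
    means_of_observing: str   # visits, forms, document reviews, discussions, ...
    frequency: str            # "weekly", "monthly", "quarterly", "twice a year", ...
    procedure: str            # who does what, with which form or guideline

# Example record, transcribed from the community-level table above.
item = MonitoringItem(
    stakeholder="Executive Committee",
    issue="Timely implementation of projects",
    indicator="Number of project activities implemented on time",
    means_of_observing="Routine project visits",
    frequency="weekly",
    procedure="Members use the routine monitoring form",
)

# A full matrix is just a list of such items, which can then be filtered,
# for example to list every check that is due weekly:
matrix = [item]
for row in (r for r in matrix if r.frequency == "weekly"):
    print(f"{row.stakeholder}: {row.issue} -> {row.indicator}")
```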

Sub-County and District Level:

At the district and sub-district (more than one community) level, the main actors who have a stake in the community strengthening intervention are:

• the Community Development Assistants (CDAs);
• the Planning Unit; and
• the District Project Coordinator (DPC) who, if a ministry official, is usually a Community Development Officer (CDO), or an NGO equivalent.

The following table looks at the main issues of interest, monitoring indicators, means of verification, frequency, and suggested monitoring procedures, for each of these three stakeholders.

Community Development Assistant
• Issue: Functioning of mobilizers and community committees
  Monitoring indicator: Number of committees performing their roles
  Means of verification: Review of each committee's performance
  Frequency: Twice a year
  Monitoring procedure: The CDA determines the performance of each committee during the qualitative enquiries

District Project Coordinator and Planning Unit
• Issue: Identification of projects that fall within the district plan and national priorities
  Monitoring indicator: Number of projects under the district plan
  Means of verification: Review of project identification reports; project visits
  Frequency: Twice a year
  Monitoring procedure: The planning unit reviews the plans from the parishes to establish if they fall under the district plan and national priority areas
• Issue: Community leaders' acquisition of community management skills
  Monitoring indicator: Number of villages using community participation in planning and implementing projects
  Means of verification: Review of project reports; focus group discussions and other qualitative enquiry techniques
  Frequency: Twice a year
  Monitoring procedure: The planning unit conducts qualitative enquiries to find out if communities are participating in project activities; district-specific procedures must be designed when the exercises take place

National and Donor Level:

At the national or country level, there are two main stakeholders: (1) the ministry or agency that is implementing the intervention or project, and (2) any external national or international donors that are contributing to the intervention or project.

National Office and Donors
• Issue: Community knowledge of the methodology
  Monitoring indicator: Proportion of people aware of the methodology
  Means of verification: Surveys, focus group discussions, key informant interviews
  Frequency: Annually
  Monitoring procedure: The agency or ministry designs and conducts the annual studies
• Issue: Effectiveness of the project design
  Monitoring indicator: Percentage of project outputs attained; percentage of design aspects appreciated by the community
  Means of verification: Review of project reports, surveys, focus group discussions, key informant interviews
  Frequency: Annually
  Monitoring procedure: The agency or ministry designs and conducts the annual studies
• Issue: Adaptation of implementation experiences by other projects and institutions in the country
  Monitoring indicator: Proportion of the project design aspects adapted
  Means of verification: National and international discussions
  Frequency: Annually
  Monitoring procedure: The agency or ministry conducts meetings with academic institutions and community projects to find out the methodological aspects that have been replicated

Monitoring and Reporting

After the observations are made: how to report the observations and analysis.

While this document focuses on reporting of observations made while monitoring, the next module, Report Writing, looks in more detail at the writing of reports itself.

Reporting is a major activity during project monitoring. It is the way in which information about the process and output of activities, and not just the activities, is shared between the stakeholders of the project. In the case of a school construction project, reporting does not end at mentioning the number of times the community met to make bricks and build the school walls; it also covers the number of bricks made and walls constructed, plus the process through which they were accomplished.

In community projects, reporting is mainly done in two ways: verbal and written.

Verbal Reporting:

This is a process where reporting is done orally. It is the commonest means of reporting: community members find it easier and more effective to communicate to others in words.

The advantages of verbal reporting are:

• A wider proportion of the community can participate. Many community members, especially in rural areas, are illiterate and cannot write. Those that can write find report writing time- and resource-consuming, which makes them reluctant to document all the information acquired during project monitoring.
• Clarity and timely distribution of information. Verbal reporting is usually done immediately after an event. This makes the resulting information relatively valid, reliable and more up to date than information that is documented. The people who give the reports get an opportunity to discuss with the community and get immediate feedback, which helps in decision making.
• Low cost. Verbal reporting cuts down significantly the time and other resources spent on reporting.

The challenges of verbal reporting include:

• Wrong reporting. Some community members may deliberately disseminate wrong information verbally to protect their interests. Verbal reporting makes this tempting because the person reporting knows that nobody can check the report against a record. In other cases, the people giving the information are not given time to think through their responses.
• Storage, replication and consistency. Since information given verbally is neither documented nor recorded, it is very difficult to keep and retrieve it for further use; it is only kept in the minds of the people who participated in the implementation of the project. This makes it difficult to share the information with people beyond the community, especially where those who know the information cannot or will not reveal it. The information collected is also unlikely to be consistent, especially where past information is needed to generate new data.

Written Reporting:

During monitoring it is important to report on the results of activities, not just the activities. Write down what you observe, and also review the reports of technical people.

The advantages of written reports are:

• They provide reliable information for management purposes (written reports can be cross-checked over time with other information to ascertain accuracy);
• They help to provide information from the technical people; and
• Written reports are easy to manage.

The challenges of written reports are:

• Day-to-day writing during project monitoring activities is often neglected; and
• Documentation of reports is costly in both time and money.

See Levels of Monitoring for an explanation of the levels used here. Uganda uses: 1 = village, 2 = parish, 3 = sub-county, 4 = county and 5 = district.

Reporting Roles of Key Stake Holders:

Community level:

Project Committees:
• Design and publicize (in liaison with mobilizers) the project implementation work plan to the Parish Development Committee, Local Councils and the community;
• Compile and publicize the monthly project progress reports to the Parish Development Committee, the Local Councils at village and parish level, and the Community Development Assistant; and
• Keep the project site file (including the work plans, monitoring reports and any other specific project information) for each project.

Community Mobilizers:
• Prepare reports about the village level project identification process and submit copies to the Parish Development Committee and the Community Development Assistant;
• Collect and submit reports about the community and specific individuals in the community; and
• Submit reports on all training conducted in the community.

Parish Development Committees:
• Give an update about projects in the parish to the community in local council meetings;
• Report to the community and CDA about resources and how they are used in each project; and
• Submit an annual report to the CDA on the main actors in the community projects.

Local Council One and Two:
• Document minutes of council and executive meetings, for their own management decisions and for use by the sub-county, district and national teams.

Sub-County and District Level:

Community Development Assistant:
• Submits a monthly summary of project progress reports to the district;
• Reports on the status and functioning of community mobilizers, project committees and parish development committees;
• Submits a summary of training conducted by mobilizers, and to the mobilizers; and
• Submits a report on the main contributors in the community projects to the district.

Community Development Officer (District Coordinator):
• Submits a monthly summary of district progress reports to the national office.

National Office:

National Coordinator:
• Submits half-yearly progress reports for the country to the national steering committee, ministry and donors;
• Prepares updates of project activities and outputs and submits copies to each district, which in turn publicizes the report to the sub-counties and parishes;
• Submits SWOT (Strengths, Weaknesses, Opportunities, Threats) reports twice a year on the strengths and weaknesses of the project design to the ministry and donors, including bad and good implementation experiences (these may be part of the six-month report); and
• Compiles and publicizes survey and qualitative enquiry findings whenever such studies are conducted.

Evaluation

A Beginner's Guide

Dinosaurs would have had a better chance to survive if they had evaluated climate changes a bit more carefully.

Introduction

This document aims to present a user-friendly approach to the process of evaluation.

It is particularly targeted at those involved in programs working towards the

introduction of human rights concepts and values in educational curricula and

teaching practices, who are initiating an evaluation for the first time. It contains practical suggestions on how to effectively organize the evaluation of such programs

in order to learn from the work implemented so far.

Often the demands put upon human rights activists and the urgency to act against human rights violations mean that no time is put aside to systematically evaluate

Human Rights Education (HRE) projects and their effects.

However, an evaluation is a positive activity for several reasons:

• It is a chance for practitioners to test for themselves that their efforts are

working and are worthwhile as well as identifying weaknesses to be remedied.

• It is an essential step towards improving effectiveness.

• It gives the program credibility in the eyes of those directly and indirectly involved (such as students and funders).

• It is essential for good planning and goal setting.

• It can raise morale, motivate people and increase awareness of the importance

of the work being carried out. Or it can help solve internal disputes in an

objective and professional way.

• It allows others engaged in similar program work, in your country or abroad,

to benefit from previous experiences.

What is evaluation?

Evaluation is a way of reflecting on the work that has been done and the results

achieved. A well thought out evaluation will help support and further develop any program; that is why evaluation should be an integrated component of any project or

program plan and work implementation.

The process of evaluating is based on evidence (data), which is systematically

collected from those involved in the program by various methods such as surveys and

interviews, and from the analysis of documents and background information. The

analysis and interpretation of this data enables practitioners to evaluate the program

concerned.

For example, it allows you to ask and answer questions like:

• Is the program achieving its goals?

• Does the program have an effect? Is the effect different from the set

goals? If so, why?

• Is the program using its resources (human and financial) effectively?

We have identified three main goals for carrying out an evaluation:

Community impact centred -- when the goal of the evaluation is to look at the impact the HRE program is having on the community or the overall society. For example,


looking at a human rights training program for the police and evaluating if such

training is having an impact in the community.

Organization centred -- when the goal of the evaluation is to look at the life of the organization. This kind of evaluation would ask questions such as: Is the organization functioning well? How is the organization perceived by the public or by the authorities?

Learner centred -- when the goal of the evaluation is to look at the personal development of the learner. This means that the evaluation will be assessing whether the learner is achieving knowledge about human rights, and whether s/he is also acquiring other benefits such as self-worth, confidence, empowerment and commitment to human rights.

Although it might be helpful to focus on only one of these goals initially, it is

important to be aware that some overlap will take place. For example when evaluating

the community impact of a project you will inevitably have to look at the efficiency of 

the organization concerned.

An evaluation can be carried out by:

• Someone who works for or belongs to the organization (internal evaluation)

• By someone who does not work in the organization (external evaluation  )

• By a mixture of the two - a team of people working for the organization and

outsiders (a combination of external and internal evaluation).

Depending on the circumstances in which the evaluation is to be carried out, the

reasons why you are carrying it out and the resources available, you can choose one of 

these three ways of carrying out an evaluation.

Regardless of the general goal you choose, and whether the evaluation is external or internal,

evaluations should always be:

• action oriented -- intended to lead to better practices and policies. Evaluation

reports should include recommendations for improvement.

• carried out as much as it is possible with a participatory approach -- those

affected by the evaluation should be allowed when relevant to comment on

the scope of the evaluation and the evaluation plan.

• carried out taking into account the internal as well as the external factors which may be influencing the work and its outcomes.

Planning an evaluation 

Measurable objectives and evaluation should be included in the plan from the start.

Evaluation should not be an afterthought.

Preparing to implement an evaluation should involve some considerations:

Why?

It is important to have a clear goal for the evaluation. Do we want to evaluate the impact the project is having on society? Or the way the project is functioning? Or the personal development learners are achieving through the project? In other words, should the evaluation be community impact centred, organization centred, or learner centred?

We need to ask what we want to achieve with the evaluation. Evaluations should be of interest and value to those involved in the implementation of the program, to those benefiting from it, and to those funding the work.

For example, do we want to look at how we are administering/managing the program

or project? This could be because we are having internal problems and we need an

objective look at what these are and how to solve them. Or do we want to analyse the

effects we are having on our beneficiaries? This could be because we are starting to

prepare a new plan for the following years and we need to understand where we are and where we should be moving. The results of such an evaluation could

help us make our arguments stronger in front of potential funders.

To decide the rationale for the evaluation it is important to refer to the project's plan

and budget. This, if well elaborated, is a key factor in helping us to remember the

rationale behind the goals we are trying to achieve and why we have organized our 

human rights education work the way we have.

What?

So as not to lose track of the rationale for our evaluation (the why?), we need to

clearly define which problem or issue is to be evaluated (the what?). It is very

important to set up well defined parameters for the evaluation by being clear on what

is to be evaluated. One of the reasons why we don't evaluate is that we make the task 

too big -- it can be helpful to focus on only one or two elements, particularly if you are carrying out your first evaluation. We then need to decide what is the best type of

evaluation for the purpose we have chosen.

For example, if we need to evaluate the effects we are having in the shaping of 

national policy for primary education we need to ensure that the terms of reference of 

the evaluation clearly direct the evaluator in this direction and that the methodology

we are using is adequate to collect the relevant data .

We need to analyse the different components of the program; determine the ones

which are key in the area that has been chosen for evaluation; and then include them

in the terms of reference as the parameters within which the evaluation will take

place.

When?

An evaluation should be timely - by this we mean that an evaluation should take

place at an appropriate moment and that it should have a clear duration. For example,

to carry out an evaluation of the administration of the program in the middle of a

major five-day human rights training course would only provoke resentment and disruption. But we may want to look into how the administration is running when we


establish what expenses may result from the evaluation.

Methodology

The most common methods for collecting data are:

• researching and analysing background documents and other materials which

are relevant to the project.

• focus groups

• having relevant people fill in a questionnaire

• interviewing relevant people

These four methods can all be used, or combined in different ways, depending on the scope of the evaluation and the resources available. Below there is more information about questionnaires and interviews.

A questionnaire or an interview

Questionnaires and interviews are the most common methods of collecting

information from people. Sometimes all participants are questioned and sometimes

just a sample. There are several reasons why such approaches are used so often when

collecting information for an evaluation:

• they can be very flexible and user-friendly methods when used with care and

imagination.

• they usually put fewer demands on the project.

• the same set of questions can be copied and used as often as needed, so

comparisons can be made over time or between groups of students (do current

learners have the same views as people on the program last year?).

Whether the list of questions will be used in an interview situation or will be distributed and filled in individually through a questionnaire largely depends

on how much time you have available, how in-depth you want the information to be

and whether or not you feel respondents would be comfortable and would respond

honestly in a personal interview.

When deciding to use a questionnaire it is important to consider how the

questionnaires will be gathered back in. Often the response is less than one might

hope for. The key is to make it as easy as possible for people to complete this task: for example, if people are going to be posting the form back by a certain date, make sure

they know the address to send it to, or consider giving them post-paid envelopes.

Another suggestion is to ensure that the respondent is aware of the importance and the

purpose of the questionnaire, and how his/her reply will help.

It is also important to make explicit if the questionnaire is anonymous or if the

respondent should write their name. You may not want them to write their name in,

but you may want to have some data about them. For example, their occupation,

gender, age bracket or place of work.
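As an illustration of this point only, an anonymous questionnaire response can be stored without any identifying name while keeping the demographic data needed for analysis. The field names and values below are hypothetical, not a prescribed format.

```python
# Hypothetical sketch of an anonymous respondent record: no name is stored,
# but the demographic fields that matter for the analysis are kept.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Response:
    occupation: str
    gender: str
    age_bracket: str                                       # e.g. "25-34"
    place_of_work: str
    answers: Dict[str, str] = field(default_factory=dict)  # question id -> answer

r = Response(
    occupation="teacher",
    gender="female",
    age_bracket="25-34",
    place_of_work="primary school",
    answers={"Q1": "Yes", "Q2": "definitely agree"},
)
print(r.age_bracket, r.answers["Q2"])
```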

The advantages of interviews are that they give the chance to discuss matters in greater depth than a distributed questionnaire; they are a more suitable forum for tackling sensitive issues; and they offer a more flexible framework. The main disadvantages of opting for interviews are the drain on resources -- of time, or of money if the interviewer has to travel to meet the respondents -- and the greater complexity of the responses to be analysed.

Most interviews are based on a questionnaire consisting of mainly open-ended questions (see the section below on preparing the questions). In a tightly structured

interview only the questions that appear on the questionnaire are asked by the

interviewer. A semi-structured interview allows the evaluator to ask spontaneous -

and perhaps more probing - follow-up questions in response to the information given

by the interviewee. As in a distributed questionnaire the questions need to be clear 

and unambiguous.

The key to a successful interview lies in the relationship created between the

interviewer and the respondent:

• Try not to make the respondent feel that they are being tested.
• Try not to react defensively to any comments the respondent makes.

• Show the respondent that you are really listening by using attentive body-

language, (i.e., facing the speaker; making good eye contact ) and by not

interrupting.

If you can make respondents feel that their opinions are valued and that they can talk 

freely and honestly in the interview, then not only are you gathering much valuable

information for evaluation purposes but also possibly helping to create an interactive

environment that could filter through to the program itself.

For both questionnaires and interviews it is important to allow time to analyse the

responses and to ensure their confidentiality.

Preparing the questions

Once the aim of the evaluation has been decided and the Terms of Reference have

been approved, including the method(s) to gather information, it will be necessary to

decide on the style, content and format of the questions for either the questionnaire or 

the interview.

Types of questions

There are several kinds of questions that can be used in a survey. As a general

rule one should use the type of question which best serves the aims of the evaluation.

Here are some examples:

Open questions get the subject talking. They leave an open field in which to answer.

Examples include

''Tell me about...''

''What happened when...?''

Probing questions are used to fill in the detail. They are questions which can tease out

the really interesting areas and detail of a particular topic.

Examples include:
''What exactly did you think of the lesson?''


''What happened at the workshop?''

Reflective questions are useful to obtain further information. They are a repeat of 

something the person answering has said or implied. They are mostly used in

interviews.

Examples include:
''You say he over-reacted -- how?''
''You didn't seem to enjoy that experience -- why was that?''

Reflective questions can also be used to help someone who is upset let off steam so

that rational decisions can then be taken. For example ''You're obviously upset about

this, please tell me more about it?''

Hypothetical questions frequently lead to hypothetical answers. They can be useful in

certain areas such as exploring values, new areas, or problem solving. If you are

searching for precise information this type of question will not help you.

Closed questions require a one or two word answer and are used to establish single, specific facts.

Examples include:

• I have benefited from participating in this program.
  Yes / No

• I have benefited from participating in this program. _____
  Here the respondent should select their degree of agreement or disagreement:
  1 = definitely agree; 2 = probably agree; 3 = neither agree nor disagree;
  4 = probably disagree; 5 = definitely disagree.

There are other kinds of ''category scales'' which can be used, for example:
  1 = frequently; 2 = sometimes; 3 = almost never;
  1 = strongly approve; 2 = approve; 3 = undecided; 4 = disapprove; 5 = strongly disapprove.
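When closed questions use a category scale like the ones above, the responses are simple to tally. A minimal sketch, assuming the 1-5 agreement scale shown above; the response values below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical responses to "I have benefited from participating in this program."
# on the 1-5 scale above (1 = definitely agree ... 5 = definitely disagree).
responses = [1, 2, 2, 1, 3, 2, 5, 1, 2, 4]

counts = Counter(responses)
total = len(responses)

for value in range(1, 6):
    n = counts.get(value, 0)
    print(f"{value}: {n:2d} ({100 * n / total:.0f}%)")

# A single summary figure, e.g. the share of respondents who agree (values 1-2):
agree_share = sum(counts.get(v, 0) for v in (1, 2)) / total
print(f"agree or probably agree: {agree_share:.0%}")
```

Open-ended answers, by contrast, have to be read and grouped by theme before any such counting makes sense, which is part of why they take longer to analyse.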

How to choose what type of questions best serve your purpose

A combination of different questions can be used effectively both in a distributed

questionnaire or in an interview situation. However, the design and presentation of 

these questions and how you combine them will depend on certain considerations:

• Use language appropriate to those who will be responding to the

questions. If there are respondents for whom the language of the

questions is not their first language, keep it simple. This also applies to

the choice of question format. A high number of open-ended questions

is not generally suitable for those responding in a second language.

• If the questionnaire is being distributed rather than being used in an

interview situation, think about whether the forms will be posted out,

given to people to take away and sent back to you, or will be filled in

there and then. This will determine what you can cover in the survey.


Generally, the 'there and then' type of form has to be especially

short and easy to fill in.

• Try to find a balance between convenience and thoroughness.

Although closed-ended questions have the advantage of convenience

both for those answering the questions and those collating theinformation afterwards, there are some matters that cannot be covered

by these types of questions. Open-ended questions provide more

freedom for the respondents as well as more insight for the evaluators

but are harder to analyse.

The appropriateness of a strategy will of course depend on what you are trying to

achieve, how much time you have available, and on the relationship of those

communicating. One common technique is called ''The funneling technique''.

It involves:

Starting with broad open questions: ''Tell me about...''
Funneling down with an open question: ''What exactly did you do...?''
Finishing off with a closed question: ''Was it you? Yes or no?''

Another technique involves starting with a closed question and expanding out.

1. Starting with an open question:

Advantages
• encourages the person to start talking
• gives them plenty of scope
• can be enlightening to see where they choose to start (or which areas they avoid)
• can help to build confidence and establish rapport
• helps to get their view of a subject without biasing it with your views

Disadvantages
• may be too open, so they don't know where to start
• can encourage a talkative person to ramble
• can open out areas which you didn't want to cover

2. Starting with a closed question:

Advantages
• can be useful to establish facts at the beginning of a conversation
• good when time is very short
• may be helpful if trying to get standard answers to a survey

Disadvantages
• can turn into an interrogation, because when they reply with a short answer you have to follow it with another question
• gives direction and keeps tight control

Once the list of questions has been written, try it out on a few people before launching

into a full scale evaluation. This will help to give you an idea of the time needed to

answer them, the clarity of the questions and whether all possible responses are

accounted for in the answers for each closed-ended question. At this stage you should

also be able to detect any hidden biases or leading questions.

The final evaluation report


There should be plenty of time to analyse all the information gathered during the

evaluation process and to write the report. The evaluation report should include the

Terms of Reference, a description of the methodology used, the conclusions reached,

including a justification of how these were reached, and a set of recommendations.

It is common practice to circulate a draft evaluation report amongst relevant people (for example, staff - paid and unpaid - and members of the board), particularly if the

evaluation has been carried out by an external evaluator. This process is to allow time

for people involved in the project to raise questions about the findings of the

evaluation, and possibly to adjust any potential areas of confusion. Of course this

opportunity to comment on the evaluation report does not mean that people can alter 

or delete negative findings from the report.

The final report, when completed, should be delivered to those responsible for the management of the project. They, in consultation with those responsible for the administration and implementation of the project, should look into which recommendations should be taken forward and how to implement them.

Don't wait until it is too late to start organising an evaluation of your HRE project!

Appendix I

Making sure you have answered all the most important questions:

A final checklist

Why?

• Why do you want to do an evaluation?

What and How?

• What have you decided to evaluate? Social impact, organizational

performance or educational achievement?

• Do you have the resources needed to carry out the evaluation? If not how are

you going to get them?

• How are you going to gather the information? When are you going to know

you have enough information?


Who? 

• What are the interest groups involved in the work you want to evaluate?

• How are you going to assure their involvement and active participation?

• Who will gather the information and who will analyse it?

• Who is responsible for what and who takes the final decisions?

When? 

• Is the schedule or timetable for the evaluation agreed?

After the evaluation: 

• What did you learn?

• What should be improved?

• What are the recommendations coming out of the evaluation?

• Have you all discussed them?
• Have you made a timetable for their implementation?

• Are there financial implications?

• Who needs to be informed about all these outcomes?

• Are there new issues that the evaluation has brought to light and that need to

be looked at?

• Have you taken a break or a holiday?

• Have you taken any measures to ensure that evaluation will be an ongoing

process?

Don't start an evaluation if you are not going to finish it or if you are going to ignore its results and recommendations!

Appendix II

This is a sample evaluation designed to give you an example of how terms of 

reference and questionnaire may read -- this is not an evaluation form per se.

EVALUATION OF THE IMPLEMENTATION OF AI'S INTERNATIONAL HRE

STRATEGY FOR HUMAN RIGHTS EDUCATION

INTRODUCTION

The aim of this document is to outline the process for evaluating Amnesty

International's Human Rights Education (HRE) work. The evaluation process will

follow the guidelines for evaluation as set out in Chapter 13 of the Amnesty

International Campaigning Manual published in 1997 (AI index ACT10/02/97).

''But I do not have time to be involved in this evaluation, I have other work to do'' 

If we do not evaluate we cannot improve our work and therefore be more successful

at future HRE activities. The time you spend on inputting to the evaluation now will

save you time in the development of future HRE programs. Evaluation also gives us

an opportunity to learn from each other. For all structures this information is

invaluable. It is time well spent! And we promise to send you the results by ...


BACKGROUND

Aims and Strategies for HRE

In its HRE work AI aims to promote knowledge and understanding of the full

spectrum of human rights, as set out in the UN Universal Declaration of Human

Rights, the International Covenants and other internationally agreed conventions,standards and treaties. Moreover in this work AI puts great emphasis on the

universality, indivisibility and interdependence of all human rights.

LONG TERM GOALS FOR AI's HRE PROGRAMS

To reach a wide and culturally diverse audience of all age groups, and so to realise our belief that all individuals have both the right and the responsibility to promote and protect human rights.

1. To develop and implement relevant and effective HRE programs in all countries

with AI structures. To ensure that the necessary resources are available for their full

implementation.

2. To work with relevant NGOs and other organizations to ensure that HRE becomes

an integral part of all educational work whether formal or informal. To encourage the

use of creative methods in teaching to enrich the learning processes.

The key strategies adopted to achieve the above goals were as follows:

Objective 1: To introduce human rights issues into formal educational

and training curricula and teaching practices in schools, universities,

professional training institutions and educational fora.

Objective 2: To develop and expand informal HRE programs.

Objective 3: To facilitate and increase the exchange of HRE expertise,

information and materials within AI and the broader human rights

movement.

Objective 4: To establish regional and sub-regional HRE networks.

Objective 5: To actively lobby the relevant IGOs, and International

NGOs to recognise and carry out their responsibilities in regard to

HRE

Objective 6: To secure adequate funding at the international and local

level for AI's HRE program.

Objective 7: To establish effective, comprehensive and systematic

review and evaluation mechanisms for AI's HRE work.

THE EVALUATION PROCESS

The Aim of the HRE Evaluation

This evaluation aims to provide information to the membership about the impact of 

HRE work as well as to provide information to help improve the quality of future

HRE work. The evaluation will focus on the implementation of the International

HRE Strategy and the impact and the results achieved by sections through their HRE

programs.

What will be evaluated?

The outcomes of the section's HRE work, i.e.: have the aims of the HRE Strategy been achieved?
The process and effectiveness of the section's HRE work, i.e.: what plans did you


implement in order to achieve the aims of the International HRE Strategy and how

well were they implemented?

Who should be involved?

1. All Sections and Structures carrying out HRE work:
HRE Coordinators

Board members responsible for HRE

Directors

Fundraisers and Campaign Coordinators

Local Group Coordinators

2. The IS Campaigning Program, in particular the HRE Team.

Regional Teams and Regional Campaign Coordinators.

Development Teams.

Publications Program.

The IEC member responsible for HRE

The Senior Management Team at the International Secretariat.

How will the evaluation be conducted?

1. A questionnaire will be circulated to all involved in the evaluation process

seeking specific input as well as any other information they wish to provide.

2. Wherever possible face to face and telephone interviews will be arranged to

provide further feed back to the evaluation team.

3. External bodies such as educational authorities and local human rights

organisations will also be interviewed to assess the external perception of 

Amnesty International's HRE work.

Who will conduct this Evaluation?

1. The International Executive Committee is the ''client'' for the evaluation.

2. The Standing Committee on Research and Action will carry out the

evaluation -- send the questionnaires, carry out the interviews, analyse the results and write the report, including recommendations.

3. The Director of the Campaigning Program at the International Secretariat and

the HRE Team will also be involved with the evaluation providing input and

guidance on planning and implementation issues.

How will the results of the evaluation be communicated?

Once responses to the evaluation questionnaires and interviews have been received,

they will be drafted into report form and circulated to all participants of the

evaluation. An executive summary will pull together the common themes contained

within the responses to form a set of recommendations to be considered when

planning future HRE work. The final evaluation report will be completed no later 

than 30 June 2001.

Questionnaire to Sections and Structures

Evaluation of HRE work 

1996 - 2001

Please refer to the International HRE Strategy document (POL 32/02/96) and the


If no, what aspects are unclear to you? What would you have done to make them clearer?

4. Did you add to or change the International HRE goals and objectives to suit your 

own national/regional environment?

Yes > No >

If yes please give an explanation of any changes.

5. The organisational priorities contained in POL 32/02/96 were as follows: Youth &

Students, Law Enforcement, Journalists and Companies.

Were you able to address these priorities? If so, how?

6. When and how did you start implementing a HRE project or program?

Please indicate the times of high and low HRE activity since the approval of the

International HRE Strategy in 1996 and the reason for this?

1996 ..................................................................

1997 ..................................................................

1998 ..................................................................
1999 ..................................................................

2000 ..................................................................

7. What sectors have you worked with or targeted? Why?

8. Have you worked with non-literate individuals and communities? How have you

done this?

9. Please describe the most successful HRE project work you have implemented,

including the effects that this has had in improving the human rights situation of your 

country/community (use extra paper if you need to).

10. Please describe the most unsuccessful HRE project work you have implemented,

including the effects that this had on the human rights situation of your country/community (use extra paper if you need to).

11. What are the indicators you have used to measure the impact your project/program has had? E.g. a lower incidence of bullying at school, a lower number of cases of torture denounced, a greater number of articles about human rights in the press, a greater number of trials for human rights violations, or the results of tests/exams?

12. What HRE materials, if any, has your section produced? Please specify the format, e.g. book, video, etc.

13. Did you pilot these materials?

14. Have you evaluated their use and usefulness?

15. Were the materials produced by the IS helpful?

Yes > No >

16. Did you use them?

Yes > No >

17. What could have made the materials more effective for you?

18. Did you find the HRE Newsletters useful? Yes > No >


If no in what ways do you think it could be improved?

If yes what do you like the most and why?

19. Did you find the information circulated by the IS about HRE clear and timely?

Yes > No >

If no please explain what information you would have preferred?

20. What new campaign approaches or techniques, if any, did you develop whilst implementing HRE work?

21. Do you work/co-operate with any other external organisations or commercial

operations in your HRE work?

Yes > No >

Please list the organisations you have worked with:

22. In each case, please say whether the partnership was useful, what it consisted of and in what way it helped, and how you think you could involve them in the future.

23. Has HRE work been useful for building awareness about Amnesty's work in your 

country? If so, in what way, and why do you think this was the case?

24. Did your membership increase as a result of HRE work?

Yes > No >

If yes, please quantify.

25. Did you receive any financial assistance from the IS, TFF, World Wide Fund, or 

other sources available within AI?

Yes > No >

If yes, what was the source and in what way was it useful?

26. Did you receive governmental funds? If yes, can you specify the exact source, the period and work covered by such a grant, and the amount received?

27. Have you been successful in fundraising from individuals or companies in your 

country? If yes, could you specify the source, the period and work covered by the

grant and also tell us the amount you received?

28. Did you request authorisation from the IEC before receiving government

funding?

Yes > No >

29. Do you think that the HRE goals/aims set by your section were achieved during

this period?

Yes > No >

If no, please give an explanation why.

30. What HRE work (if any) are you planning to implement in the next four-year period?

31. Have you already or will you conduct your own evaluation of the section's HRE

work? If so can the HRE Team at the IS receive a copy? - Thank you!

32. Do you have any other comments or information you would like to add?

Amnesty International, International Secretariat, 1 Easton

The Planning-Evaluation Cycle 


Often, evaluation is construed as part of a larger managerial or administrative process.

Sometimes this is referred to as the planning-evaluation cycle. The distinctions

between planning and evaluation are not always clear; this cycle is described in many

different ways with various phases claimed by both planners and evaluators. Usually,

the first stage of such a cycle -- the planning phase -- is designed to elaborate a set of 

potential actions, programs, or technologies, and select the best for implementation. Depending on the organization and the problem being addressed, a planning process

could involve any or all of these stages: the formulation of the problem, issue, or 

concern; the broad conceptualization of the major alternatives that might be

considered; the detailing  of these alternatives and their potential implications; the

evaluation of the alternatives and the selection of the best one; and the

implementation of the selected alternative. Although these stages are traditionally

considered planning, there is a lot of evaluation work involved. Evaluators are trained

in needs assessment, they use methodologies -- like the concept mapping one

presented later -- that help in conceptualization and detailing, and they have the skills

to help assess alternatives and make a choice of the best one.

The evaluation phase also involves a sequence of stages that typically includes: the

formulation of the major objectives, goals, and hypotheses of the program or 

technology; the conceptualization and operationalization of the major components of 

the evaluation -- the program, participants, setting, and measures; the design of the

evaluation, detailing  how these components will be coordinated; the analysis of the

information, both qualitative and quantitative; and the utilization of the evaluation

results.

 

Chapter 1: Why Evaluate Your Program?

You should evaluate your program because an evaluation helps you accomplish the following:

• Find out what is and is not working in your program
• Show your funders and the community what your program does and how it benefits your participants
• Raise additional money for your program by providing evidence of its effectiveness
• Improve your staff's work with participants by identifying weaknesses as well as strengths
• Add to the existing knowledge in the human services field about what does and does not work in your type of program with your kinds of participants

Despite these important benefits, program managers often are reluctant to evaluate their programs. Usually this reluctance is due to concerns stemming from a lack of understanding about the evaluation process.


Common concerns about evaluation

Concern #1: Evaluation diverts resources away from the program and therefore harms participants. This is a common concern in most programs. However, because evaluation helps to determine what does and does not work in a program, it is actually beneficial to program participants. Without an evaluation, you are providing services with little or no evidence that they actually work!

Concern #2: Evaluation increases the burden for program staff. Often program staff are responsible for collecting evaluation information because they are most familiar with, and have the most contact with program participants. Despite this potential for increased burden, staff can benefit greatly from evaluation because it provides information that can help them improve their work with participants, learn more about program and participant needs, and validate their successes. Also, the burden can be decreased somewhat by incorporating evaluation activities into ongoing program activities.

Concern #3: Evaluation is too complicated. Program managers often reject the idea of conducting an evaluation because they don't know how to do it or whom to ask for help. Although the technical aspects of evaluation can be complex, the evaluation process itself simply systematizes what most program managers already do on an informal basis - figure out whether the program's objectives are being met, which aspects of the program work, and which ones are not effective. Understanding this general process will help you to be a full partner in the evaluation, even if you seek outside help with the technical aspects. If you need outside help, Chapter 4 provides some ideas about how and where to get it.

Concern #4: Evaluation may produce negative results and lead to information that will make the program look bad. An evaluation may reveal problems in accomplishing the work of the program as well as successes. It is important to understand that both types of information are significant. The discovery of problems should not be viewed as evidence of program failure, but rather as an opportunity to learn and improve the program. Information about both problems and successes not only helps your program, but also helps other programs learn and improve.

Concern #5: Evaluation is just another form of monitoring. Program managers and staff often view program evaluation as a way for funders to monitor programs to find out whether staff are doing what they are supposed to be doing. Program evaluation, however, is not the same as monitoring. Sometimes the information collected to monitor a program overlaps with information needed for an evaluation, but the two processes ask very different questions.


Concern #6: Evaluation requires setting performance standards, and this is too difficult. Many program managers believe that an evaluation requires setting performance standards, such as specifying the percentage of participants who will demonstrate changes or exhibit particular behaviors. Program staff worry that if these performance standards are not met, their project will be judged a failure.

This concern is somewhat justified because often funders will require setting such standards. However, performance standards can only be set if there is extensive evaluation information on a particular program in a variety of settings. Without this information, performance standards are completely arbitrary and meaningless. The type of evaluation discussed in this manual is not designed to assess whether particular performance standards are attained because most programs do not have sufficient information to establish these standards in any meaningful way. Instead, it will assess whether there has been significant change in the knowledge, attitudes, and/or behaviors of a program's participant population in general and whether particular characteristics of the program or the participants are more or less likely to promote change.

Guidelines for conducting a successful evaluation

You can maximize the benefits that evaluation offers by following a few basic guidelines in preparing for and conducting your evaluation.

Invest heavily in planning. Invest both time and effort in deciding what you want to learn from your evaluation. This is the single most important step you will take in this process. Consider what you would like to discover about your program and its impact on participants, and use this information to guide your evaluation planning.

Integrate the evaluation into ongoing activities of the program. Program managers often view evaluation as something that an outsider "does to" a program after it is over, or as an activity "tacked on" merely to please funders. Unfortunately, many programs are evaluated in this way. This approach greatly limits the benefits that program managers and staff can gain from an evaluation. Planning the evaluation should begin at the same time as planning the program so that you can use evaluation feedback to inform program operations.

Participate in the evaluation and show program staff that you think it is important. An evaluation needs the participation of the program manager to succeed. Even if an outside evaluator is hired to conduct the evaluation, program managers must be full partners in the evaluation process. An outside evaluator cannot do it alone. You must teach the evaluator about your program, your participants, and your objectives. Also, staff will value the evaluation if you, the program manager, value it yourself. Talk about it with staff individually and in meetings. If you hire an outside evaluator to conduct the evaluation, be sure that this individual attends staff meetings and gives presentations on the status of the evaluation. Your involvement will encourage a sense of ownership and responsibility for the evaluation among all program staff.

Involve as many of the program staff as much as possible and as early as possible. Project staff have a considerable stake in the success of the evaluation, and involving them early on in the process will enhance the evaluation's effectiveness. Staff will have questions and issues that the evaluation can address, and are usually pleased when the evaluation validates their own hunches about what does and does not work in the program. Because of their experiences and expertise, program staff can ensure that the evaluation questions, design, and methodology are appropriate for the program's participants. Furthermore, early involvement of staff will promote their willingness to participate in data collection and other evaluation-related tasks.

Be realistic about the burden on you and your staff. Evaluations are work. Even if your evaluation calls for an outside evaluator to do most of the data collection, it still takes time to arrange for the evaluator to have access to records, administer questionnaires, or conduct interviews. It is common for both agencies and evaluators to underestimate how much additional effort this involves. When program managers and staff brainstorm about all of the questions they want answered, they often produce a very long list. This process can result in an evaluation that is too complicated. Focus on the key questions that assess your program's general effectiveness.

Be aware of the ethical and cultural issues in an evaluation. This guideline is very important. When you are evaluating a program that provides services or training, you must always consider your responsibilities to the participants and the community. You must ensure that the evaluation is relevant to and respectful of the cultural backgrounds and individuality of participants. Evaluation instruments and methods of data collection must be culturally sensitive and appropriate for your participants. Participants must be informed that they are taking part in an evaluation and that they have the right to refuse to participate in this activity without jeopardizing their participation in the program. Finally, you must ensure that confidentiality of participant information will be maintained.


About this manual

This manual is designed to help you follow these guidelines while planning and implementing a program evaluation. Each of the chapters addresses specific steps in the evaluation process and provides guidance on how to tailor an evaluation to your program's needs. (Reminder: The ACYF bureau companion handbooks provide a discussion of evaluation issues that are specific to the type of program you manage.)

The manual is not intended to turn you into a professional evaluator or to suggest that evaluation is a simple process that anyone can perform. Rather, it is meant to provide information to help you understand each step of the evaluation process so that you can participate fully in the evaluation - whether you hire an outside evaluator or decide to do one with assistance from in-house agency staff and resources.

Chapter 2: What Is Program Evaluation?

Program managers and staff frequently informally assess their program's effectiveness: Are participants benefiting from the program? Are there sufficient numbers of participants? Are the strategies for recruiting participants working? Are participants satisfied with the services or training? Do staff have the necessary skills to provide the services or training? These are all questions that program managers and staff ask and answer on a routine basis.

Evaluation addresses these same questions, but uses a systematic method for collecting, analyzing, and using information to answer basic questions about a program - and to ensure that those answers are supported by evidence. This does not mean that conducting an evaluation requires no technical knowledge or experience - but it also does not mean that evaluation is beyond the understanding of program managers and staff.

What are the basic questions an evaluation can answer?

There are many different types of program evaluations, many different terms to describe them, and many questions that they can answer. You may have heard the terms formative evaluation, summative evaluation, process evaluation, outcome evaluation, cost-effectiveness evaluation, and cost-benefit evaluation. Definitions of these terms and others and selected resources for more information on various types of program evaluations are provided in the appendix.

You may have also heard the terms "qualitative" and "quantitative" used to describe an evaluation. However, these terms, which are defined in the glossary, refer to the types of information or data that are collected during the evaluation and not to the type of evaluation itself. For example, an outcome evaluation may involve collecting both quantitative and qualitative information about participant outcomes.

This manual is designed to avoid the confusion that often results from the use of so many terms to describe an evaluation. Instead, all of the terms used here are directly related to answering evaluation questions derived from a program's objectives.

There are two types of program objectives - program implementation objectives and participant outcome objectives. Program implementation objectives refer to what you plan to do in your program, how you plan to do it, and who you want to reach. They include the services or training you plan to implement, the characteristics of the participant population, the number of people you plan to reach, the staffing arrangements and staff training, and the strategies for recruiting participants. Evaluating program implementation objectives is often referred to as a process evaluation. However, because there are many types of process evaluations, this manual will use the term implementation evaluation.

Participant outcome objectives describe what you expect to happen to your participants as a result of your program, with the term "participants" referring to agencies, communities, and organizations as well as individuals. Your expectations about how your program will change participants' knowledge, attitudes, behaviors, or awareness are your participant outcome objectives. Evaluating a program's success in attaining its expectations for participants is often called an outcome evaluation.

An evaluation can be used to determine whether you have been successful in attaining both types of objectives, by answering the following questions:

Has the program been successful in attaining the anticipated implementation objectives? (Are you implementing the services or training that you initially planned to implement? Are you reaching the intended target population? Are you reaching the intended number of participants? Are you developing the planned collaborative relationships?)

Has the program been successful in attaining the anticipated participant outcome objectives? (Are participants exhibiting the expected changes in knowledge, attitudes, behaviors, or awareness?)

A comprehensive evaluation must answer both key questions. You may be successful in attaining your implementation objectives, but if you do not have information about participant outcomes, you will not know whether your program is worthwhile. Similarly, you may be successful in changing participants' knowledge, attitudes, or behaviors; but if you do not have information about your program's implementation, you will be unable to identify the parts of your program that contribute to these changes.

These evaluation questions should be answered while a program is in operation, not after the program is over. This approach will allow you and your staff to identify problems and make necessary changes while the program is still operational. It will also ensure that program participants are available to provide information for the evaluation.

What is involved in conducting an evaluation?

The term "systematic" in the definition of evaluation indicates that it requires a structured and consistent method of collecting and analyzing information about your program. You can ensure that your evaluation is conducted in a systematic manner by following a few basic steps.

Step 1: Assemble an evaluation team. Planning and executing an evaluation should be a team effort. Even if you hire an outside evaluator or consultant to help, you and members of your staff must be full partners in the evaluation effort. Chapter 3 discusses various evaluation team options. If you plan to hire an outside evaluator or an evaluation consultant, Chapter 4 provides information on hiring procedures and managing an evaluation that involves an outside professional.

Step 2: Prepare for the evaluation. Before you begin, you will need to build a strong foundation. This planning phase includes deciding what to evaluate, building a program model, stating your objectives in measurable terms, and identifying the context for the evaluation. The more attention you give to planning the evaluation, the more effective it will be. Chapter 5 will help you prepare for your evaluation.

Step 3: Develop an evaluation plan. An evaluation plan is a blueprint or a map for an evaluation. It details the design and the methods that will be used to conduct the evaluation and analyze the findings. You should not implement an evaluation until you have completed an evaluation plan. Information on what to include in a plan is provided in Chapter 6.

Step 4: Collect evaluation information. Once you complete an evaluation plan, you are ready to begin collecting information. This task will require selecting or developing information collection procedures and instruments. This process is discussed in Chapter 7.


Step 5: Analyze your evaluation information. After evaluation information is collected, it must be organized in a way that allows you to analyze it. Information analysis should be conducted at various times during the course of the evaluation to allow you and your staff to obtain ongoing feedback about the program. This feedback will either validate what you are doing or identify areas where changes may be needed. Chapter 8 discusses the analysis process.

Step 6: Prepare the evaluation report. The evaluation report should be a comprehensive document that describes the program and provides the results of the information analysis. The report should also include an interpretation of the results for understanding program effectiveness. Chapter 9 is designed to assist you in preparing an evaluation report.

What will an evaluation cost?

Program managers are often concerned about the cost of an evaluation. This is a valid concern. Evaluations do require money. Many program managers and staff believe that it is unethical to use program or agency financial resources for an evaluation, because available funds should be spent on serving participants. However, it is more accurate to view money spent on evaluation as an investment in your program and in your participants, rather than as a diversion of funds available for participants. Evaluation is essential if you want to know whether your program is benefiting participants.

Unfortunately, it is not possible to specify in this manual exactly how much money you will need to conduct your evaluation. The amount of money needed depends on a variety of factors, including what aspects of your program you decide to evaluate, the size of the program (that is, the number of staff members, participants, components, and services), the number of outcomes that you want to assess, who conducts the evaluation, and your agency's available evaluation-related resources. Costs also vary in accord with economic differences in communities and geographic locations.

Sometimes funders will establish a specific amount of grant money to be set aside for an evaluation. The amount usually ranges from 15 to 20 percent of the total funds allocated for the program. If the amount of money to be set aside for an evaluation is not specified by a funding agency, you may want to talk to other program managers in your community who have conducted evaluations. They may be able to tell you how much their evaluations cost and whether they were satisfied with what they got for their money.
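As a purely illustrative worked example (the dollar figures are hypothetical and not drawn from this manual): under the 15 to 20 percent guideline, a program funded at $200,000 would set aside roughly 0.15 x $200,000 = $30,000 to 0.20 x $200,000 = $40,000 for its evaluation.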

Although a dollar amount cannot be specified, it is possible to describe the kinds of information you can obtain from evaluations at different cost levels. Think of the process of building a house. If you spend a small amount of money, you can build the foundation for the house. Additional money will be required to frame the house and still more money will be needed to put on the roof. To finish the inside of the house so that it is inhabitable will require even more money.

Evaluation is similar. Some general guidelines follow on what you may be able to get at different evaluation cost levels.

Lowest cost evaluations. If you spend only a minimal amount of money, you will be able to obtain numerical counts of participants, services, or products and information about the characteristics of participants. You also may be able to find out how satisfied participants were with the services or the training. But this is only the foundation for an evaluation. This information will not tell you whether you have been successful in attaining your participant outcome objectives. Also, at this cost level you will not have in-depth information about program implementation and operations to understand whether your program was implemented as intended and, if not, what changes were made and why they were made.

Low-moderate cost evaluations. If you increase your evaluation budget slightly, you will be able to assess whether there has been a change in your participants' knowledge, attitudes, or behaviors, and also collect in-depth information about your program's implementation. However, this is only the framework of an evaluation. At this cost level, you may not be able to attribute participant changes specifically to your program because you will not have similar information on a comparison or control group.

Moderate-high cost evaluations. Adding more money to your evaluation budget will allow you to use a comparison or control group, and therefore be able to attribute any changes in participants to the program itself. At this cost level, however, your information on participant outcomes may be limited to short-term changes - those that occurred during or immediately after participation in the program.

Highest cost evaluations. At the highest cost level, you will be able to obtain all of the information available from the other cost options as well as longer term outcome information on program participants. The high cost of this type of evaluation is due to the necessity of tracking or contacting program participants after they have left the program. Although follow up activities often are expensive, longer term outcome information is important because it assesses whether the changes in knowledge, attitudes, or behaviors that your participants experienced initially are maintained over time.


Basically, as you increase your budget for an evaluation, you gain a corresponding increase in knowledge about your success in attaining your program objectives. In many situations, the lowest cost evaluations may not be worth the expense, and, to be realistic, the highest cost evaluations may be beyond the scope of most agencies' financial resources. As a general rule, the more money you are willing to invest in an evaluation, the more useful the information that you will obtain about your program's effectiveness will be, and the more useful these results will be in helping you advocate for your program.

Chapter 3: Who Should Conduct Your Evaluation?

One decision that must be made before you begin your evaluation is who will conduct it. Evaluation is best thought of as a team effort. Although one individual heads the team and has primary responsibility for the project, this person will need assistance and cooperation from others. Again, think of building a house. You may hire a contractor to build your house, but you would not expect this professional to do the job alone. You know that to build your house the contractor will need guidance from you and assistance from a variety of technical experts including an architect, electrician, plumber, carpenter, roofer, and mechanical engineer.

Similarly, in conducting an evaluation, the team leader will need assistance from a variety of individuals in determining the focus and design of the evaluation, developing the evaluation plan and sampling plan (if necessary), constructing data collection instruments, collecting the evaluation data, analyzing and interpreting the data, and preparing the final report.

What are some possible types of evaluation teams?

There are many types of evaluation teams that you could assemble. Three possible options for evaluation teams follow:

• An outside evaluator (which may be an individual, research institute, or consulting firm) who serves as the team leader and is supported by in-house staff (Team 1).
• An in-house evaluator who serves as the team leader and is supported by program staff and an outside consultant (Team 2).
• An in-house evaluator who serves as the team leader and is supported by program staff (Team 3).

Whatever team option you select, you must make sure that you, the program manager, are part of the team. Even if your role is limited to one of overall evaluation management, you must participate in all phases of the evaluation effort.


Team 1 option: An outside evaluator with support from program

staff 

Possible advantages:

• Because outside evaluators do not have a stake in the evaluation's findings, the results may be perceived by current or potential funders as more objective.
• Outside evaluators may have greater expertise and knowledge than agency staff about the technical aspects involved in conducting an evaluation.
• Outside evaluators may offer a new perspective to program operations.
• The evaluation may be conducted more efficiently if the evaluator is experienced.

Possible disadvantages:

• Hiring an outside evaluator can be expensive.
• Outside evaluators may not have an adequate understanding of the issues relevant to your program or target population.

Selecting this team does not mean that you or your staff need not be involved in the evaluation. You and other staff members must educate the evaluator about the program, participants, and community. Other staff or advisory board members must also be involved in planning the evaluation to ensure that it addresses your program's objectives and is appropriate for your program's participants.

When deciding on your option, keep in mind that although hiring an outside evaluator to conduct an evaluation may appear to be expensive, ultimately it may be less expensive than channeling staff resources into an evaluation that is not correctly designed or implemented.

Team 2 option: In-house evaluation team leader with support from

program staff and an outside consultant

Possible advantages:

• An evaluation team headed by an in-house staff member may be less expensive than hiring an outside evaluator (this is not always true).
• The use of an agency staff member as a team leader may increase the likelihood that the evaluation will be consistent with program objectives.


How can you decide what team is best for you?

Before you decide on the best team to assemble, you will need to consider two important issues.

Your program's funding requirements. Often a funding agency requires that you hire an outside evaluator to conduct your evaluation. This type of evaluator is often referred to as a third-party evaluator and is someone who is not affiliated with your agency in any way - someone with evaluation experience who will be objective when evaluating your program.

Your program's resources and capabilities. You can assemble different types of teams depending on your agency's resources and how you will use the findings. To determine what internal resources are available, examine your staff's skills and experience in planning an evaluation, designing data collection procedures and instruments, and collecting and analyzing data and information.

Also, examine the information you already have available through program activities. If, for example, you collect and review information from the Runaway and Homeless Youth Management Information System or the Head Start Program Information Report (or any other organized participant database or information system), you may be able to use this information as evaluation data.

If you conduct entrance and exit interviews of participants or complete paperwork or logs on participants' progress in the program, this information may also be used as part of an evaluation.

The checklist below can help you decide what type of team you may need. Answer the questions based on what you know about your resources.

Whatever team you select, remember that you and your staff need to work with the evaluation team and be involved in all evaluation planning and activities. Your knowledge and experience working with program participants and the community are essential for an evaluation that will benefit the program, program participants, community, and funders.

Resources for Appropriate Team Selection (check one: Yes / No)

1. Does your agency or program have funds designated for evaluation purposes?

2. Have you successfully conducted previous evaluations of similar programs, components, or services?

3. Are existing program practices and information collection forms useful for evaluation purposes?

4. Can you collect evaluation information as part of your regular program operations (at intake, termination)?

5. Are there agency staff who have training and experience in evaluation-related tasks?

6. Are there advisory board members who have training and experience in evaluation-related tasks?

The checklist above can help you select your evaluation team in the

following ways:

If your answer to all the resource questions is "no," you may want to consider postponing your evaluation until you can obtain funds to hire an outside evaluator, at least on a consultancy basis. You may also want to consider budgeting funds for evaluation purposes in your future program planning efforts.

If your answer to question 1 is "yes," but you answer "no" to all other questions, you will need maximum assistance in conducting your evaluation and Team 1 (an outside evaluator with in-house support) is probably your best choice.

If you answer "no" to question 1, but "yes" to most of the other resource questions, then Team 3 (in-house staff only) may be an appropriate choice for you. Keep in mind, however, that if you plan to use evaluation findings to seek program funding, you may want to consider using the Team 2 option (in-house evaluation team with outside consultant) instead and trying to obtain evaluation funds from other areas of your agency's budget.

If your answer to question 1 is "yes" and the remainder of your answers are mixed (some "yes" and some "no"), then either the Team 1 or Team 2 option should be effective.
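For readers who find it easier to see the selection rules in one place, the short sketch below restates the guidance above as a small piece of Python. It is purely illustrative and not part of the manual's method: the function name, the cutoff used for "yes to most other questions," and the example answers are all assumptions made for the example.

def recommend_team(answers):
    """answers maps checklist questions 1-6 to True ('yes') or False ('no')."""
    has_funds = answers[1]                      # question 1: designated evaluation funds
    others = [answers[q] for q in range(2, 7)]  # questions 2-6: internal resources

    if not has_funds and not any(others):
        return "Postpone the evaluation until funds can be obtained"
    if has_funds and not any(others):
        return "Team 1: outside evaluator with in-house support"
    if not has_funds and sum(others) >= 3:      # 'yes' to most other questions (assumed cutoff)
        return "Team 3: in-house staff only"
    if has_funds:
        return "Team 1 or Team 2 should both be workable"
    return "Mixed answers without designated funds: the checklist gives no single rule"

# Hypothetical example: funds are available and some, but not all, internal resources exist
print(recommend_team({1: True, 2: False, 3: True, 4: True, 5: False, 6: False}))

Run with the hypothetical answers above, this prints the "Team 1 or Team 2" recommendation, matching the mixed-answers rule.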

The next chapter provides advice on how to locate, select, hire, and manage an outside evaluator or consultant. This information will be particularly helpful in assembling Teams 1 or 2. If you plan to conduct the evaluation using the Team 3 option, Chapter 4 may still be useful, because it provides suggestions on locating resources that may assist you in your evaluation efforts.


Chapter 4: How Do You Hire and Manage an Outside Evaluator?

Careful selection of an outside evaluator can mean the difference between a positive and a negative experience. You will experience the maximum benefits from an evaluation if you hire an evaluator who is willing to work with you and your staff to help you better understand your program, learn what works, and discover what program components may need refining. If you build a good relationship with your evaluator you can work together to ensure that the evaluation remains on track and provides the information you and your funding agency want.

Finding an outside evaluator

There are four basic steps for finding an evaluator. These steps are similar to any you would use to recruit and hire new program staff. Public agencies may need to use a somewhat different process and involve other divisions of the agency. If you are managing a program in a public agency, check with your procurement department for information on regulations for hiring outside evaluators or consultants.

Step 1: Develop a job description. The first step in the hiring process is to develop a job description that lists the materials, services, and products to be provided by the evaluator. In developing your job description, you will need to know the types of evaluation activities you want this person to perform and the timelines involved. Evaluator responsibilities can involve developing an evaluation plan, providing progress reports, developing data collection instruments and forms, collecting and analyzing data, and writing reports. If you think you need assistance in developing a job description, ask another agency that has experience in hiring outside evaluators for help. Advisory board members may also be able to assist with this task.

Step 2: Locate sources for evaluators. Potential sources useful for finding an evaluator include the following:

Other agencies that have used outside evaluators. Agencies in your community that are like yours are a good source of information about potential outside evaluators. These agencies may be able to recommend a good evaluator, suggest methods of advertising, and provide other useful information. This is one of the best ways to find an evaluator who understands your program and is sensitive to the community you serve.

Evaluation divisions of State or local agencies. Most State or local government agencies have planning and evaluation departments. You may be able to use individuals from these agencies to work with you on your evaluation. Some evaluation divisions are able to offer their services at no cost as an "in-kind" service. If they are unable to respond to a request for proposal or provide you with in-kind services, staff members from these divisions may be able to direct you toward other organizations that are interested in conducting outside evaluations.

Local colleges and universities. Departments of sociology, psychology, social work/social welfare, education, public health, and public administration, and university-based research centers are possible sources within colleges and universities. Well-known researchers affiliated with these institutions may be readily identifiable. If they cannot personally assist you, they may be able to refer you to other individuals interested in performing local program evaluations.

Technical assistance providers. Some Federal grant programs include a national or local technical assistance provider. If your agency is participating in this kind of grant program, assistance in identifying and selecting an evaluator is an appropriate technical assistance request.

The public library. Reference librarians may be able to direct you to new sources. They can help identify local research firms and may be able to provide you with conference proceedings that list program evaluators who were presenters.

Research institutes and consulting firms. Many experienced evaluators are part of research institutes and consulting firms. They are sometimes listed in the yellow pages under "Research" or "Marketing Research." They also can be located by contacting your State human services departments to get a listing of the firms that have bid on recent contracts for evaluations of State programs.

National advocacy groups and local foundations, such as The United Way, American Public Welfare Association, Child Welfare League of America, and the Urban League. The staff and board members of these organizations may be able to provide you with names of local evaluators. They may also be able to offer insight on evaluations that were done well or evaluators especially suited to your needs.

Professional associations, such as the American Evaluation Association, American Sociological Association, and the Society for Research in Child Development. Many evaluators belong to the American Evaluation Association. These organizations can provide you with a list of members in your area for a fee and may have tips on how you should advertise to attract an evaluator that best meets your needs. Additional information on these organizations is provided in the appendix.


Step 3: Advertise and solicit applications. After you have developed a job description, identified possible sources for evaluators, and found ways to advertise the position, you are ready to post an advertisement to get applications. Advertising in the local paper, posting the position at a local college or university, or working with your local government's human resource department (if you are a public agency) are possible ways of soliciting applications. Agency newsletters, local and national meetings, and professional journals are additional sources where you can post your advertisement.

It is wise to advertise as widely as possible, particularly if you are in a small community or are undertaking an evaluation for the first time. Several advertising sources will ensure that you receive multiple responses. You should build in as much time as possible between when you post the position and when you plan to review applications.

If you have sufficient time, you may want to consider a two-step process for applications. The position would still be advertised, but you would send evaluators who respond to your advertisement more detailed information about your evaluation requirements and request a description of their approach. For example, you could send potential evaluators a brief description of the program and the evaluation questions you want to answer, along with a description of the community you serve. This would give them an opportunity to propose a plan that more closely corresponds to your program needs.

Step 4: Review applications and interview potential candidates. In reviewing applications, consider the candidate's writing style, type of evaluation plan proposed, language (jargon free), experience working with your type of program and staff, familiarity with the subject area of your program, experience conducting similar evaluations, and proposed costs.

After you have narrowed your selection to two or three candidates, you are ready to schedule an in-person interview. This interview will give you the opportunity to determine whether you and the evaluator are compatible. As you do for other job applicants, you will need to check references from other programs that worked with your candidate.

What to do when you have trouble hiring an evaluator

Despite your best efforts, you may encounter difficulties in hiring an outside evaluator, including the following:

Few or no responses to your advertisement. Many programs, particularly ones in isolated areas, have struggled to obtain even a few responses to their advertisements. Check with your Federal Project Officer to find out whether he or she can offer you suggestions, consult with other programs in your community, and check with your local State or county social service agency to obtain advice. Your advisory board may also be useful in identifying potential evaluators. Another source may be an organization that offers technical assistance to programs similar to yours.

None of the applicants is compatible with program philosophy and staff. If applicants do not match program needs, you may find it helpful to network with other programs and agencies in your State to learn about evaluators that agencies like yours have used. A compatible philosophy and approach is most important - tradeoffs with proximity to the evaluator may need to be made to find the right evaluator.

The outside evaluator's proposed costs are higher than your budgeted amount. In this instance, you will need to generate additional funds for the evaluation or negotiate with your evaluator to donate some of their services (in-kind services).

Another option is to negotiate with a university professor to supervise advanced degree students to conduct some of the evaluation activities. Information about participants and programs is a valuable resource, providing confidentiality is respected. For example, you can allow a university professor to have access to program information and possibly to other evaluation records in exchange for evaluation services such as instrument development or data analysis.

Managing an evaluation headed by an outside evaluator

Often, when the decision is made to hire an outside evaluator, program managers and staff believe that the evaluation is "out of their hands." This is not true. An outside evaluator cannot do the job effectively without the cooperation and assistance of program managers and staff.

An evaluation is like any activity taking place within your agency - it needs to be managed. Program managers must manage the evaluation just as program operations are managed. What would happen if your staff stopped interviewing new participants? How long would it be before you knew this had happened? How long would it be before you took action? How involved would you be in finding a solution? An evaluation needs to be treated with the same level of priority.


Creating a contract

A major step in managing an evaluation is the development of a contract with your outside evaluator. It is important that your contract include the following:

Who "owns" the evaluation information. It is important to specify who has ownership and to whom the information can be given. Release of information to outside parties should always be cleared with appropriate agency staff.

Any plans for publishing the evaluation results should be discussed and cleared before articles are written and submitted for publication. It is important to review publication restrictions from the funding agency. In some instances, the funding agency may have requirements about the use of data and the release of reports.

Who will perform evaluation tasks. The contract should clarify who is to perform the evaluation tasks and the level of contact between the evaluator and the program. Some program managers have found that outside evaluators, after they are hired, delegate many of their responsibilities to less experienced staff and have little contact with the program managers or staff. To some extent, a contract can protect your program from this type of situation.

If this problem occurs even after specification of tasks, you may want to talk with the senior evaluator you originally hired to offer the option of renegotiating his or her role. The resolution should be mutually agreeable to program staff and the evaluator and not compromise the integrity of the evaluation or program. The contract should specify the responsibilities of program staff as well as the evaluator. These responsibilities may vary depending on the structure of your evaluation and the amount of money you have available. The exhibits at the end of this chapter provide some guidelines on roles and responsibilities.

Your expectations about the contact between the evaluator and program staff. It is very important for an outside evaluator to keep program staff informed about the status of the evaluation and to integrate the evaluation into ongoing program operations. Failure to do this shortchanges program staff and denies the program an opportunity to make important changes on an ongoing basis. The contract could specify attendance at staff meetings and ongoing reporting requirements. Setting up regular meetings, inviting evaluators to program events and staff meetings, and requiring periodic reports may help solidify the relationship between the program and the evaluation. Other approaches that may help include asking a more senior agency staff member to become involved with the evaluation process or withholding payment if the evaluator fails to perform assigned tasks.


What to do if problems arise

Even with the best contract, problems can arise during the course of the evaluation process. These problems include the following:

Evaluation approaches differ (the program and evaluator do not see eye-to-eye). Try to reach a common ground where both programmatic and evaluation constraints and needs are met. If many reasonable attempts to resolve differences have been tried and severe conflicts still remain that could jeopardize the program or the evaluation, program staff should consider terminating the evaluation contract. This decision should be weighed carefully and discussed with your funder, as a new evaluator will need to be recruited and brought up to speed midstream. In some situations, finding a new evaluator may be the best option. Before making this decision, however, you will need to discuss this with your program funders, particularly if they are providing financial support for the evaluation.

Evaluation of the program requires analysis skills outside your original plan. You may find that your evaluator is in agreement with your assessment and is willing to add another person to the evaluation team who has expertise and skills needed to undertake additional or different analyses. Many times additional expertise can be added to the evaluation team by using a few hours of a consultant's time. Programmers, statisticians, and the like can augment the evaluation team without fundamentally changing the evaluation team's structure.

The evaluator leaves, terminates the contract, or does not meet contractual requirements. If the evaluator leaves the area or terminates the contract, you will most likely be faced with recruiting a new one. In some instances, programs have successfully maintained their ties to evaluators who have left the area, but this is often difficult. When your evaluator does not meet contractual requirements and efforts to resolve the dispute have failed, public agencies should turn the case over to their procurement office and private agencies should seek legal counsel.

The evaluator is not culturally competent or does not have any experience working with your community and the participants. It is not always possible to locate an evaluator with both experience in the type of evaluation that you need and experience working with specific groups and subgroups in the community. If your evaluator does not have experience working with the particular group reached by the program, you must educate this person about the culture (or cultures) of the participants' community and how it might affect the evaluation design, instruments, and procedures. The evaluator may need to conduct focus groups or interviews with community members to make sure that evaluation questions and activities are both understood by and respectful of community members.

You are not happy with the evaluator's findings. Sometimes program managers and staff discover that the evaluator's findings are not consistent with their impressions of the program's effectiveness with participants. Program staff believes that participants are demonstrating the expected changes in behavior, knowledge, or attitudes, but the evaluation results do not indicate this. In this situation, you may want to work with your evaluator to make sure the instruments being used are measuring the changes you have been observing in the program participants. Also, remember that your evaluator will continue to need input from program staff in interpreting evaluation findings.

You may also want your evaluator to assess whether some of your participants are changing and whether there are any common characteristics shared by participants who are or are not demonstrating changes. However, be prepared to accept findings that may not support your perceptions. Not every program will work the way it was intended to, and you may need to make some program changes based on your findings.


Potential Responsibilities of the Program Manager

• Educate the outside evaluator about the program's operations and objectives, characteristics of the participant population, and the benefits that program staff expects from the evaluation. This may involve alerting evaluators to sensitive situations (for example, the need to report suspected child abuse) they may encounter during the course of their evaluation activities.

• Provide feedback to the evaluator on whether instruments are appropriate for the target population and provide input during the evaluation plan phase.

• Keep the outside evaluator informed about changes in the program's operations.

• Specify information the evaluator should include in the report.

• Assist in interpreting evaluation findings.

• Provide information to all staff about the evaluation process.

• Monitor the evaluation contract and completion of work products (such as reports).

• Ensure that program staff is fulfilling their responsibilities (such as data collection).

• Supervise in-house evaluation activities, such as completion of data collection instruments and data entry.

• Serve as a troubleshooter for the evaluation process, resolving problems or locating a higher level person in the agency who can help.

• Request a debriefing from the evaluator at various times during the evaluation and at its conclusion.

Chapter 5: How Do You Prepare for an Evaluation?

When you build a house, you start by laying the foundation. If your foundation is not well constructed, your house will eventually develop cracks and you will be constantly patching them up. Preparing for an evaluation is like laying a foundation for a house. The effectiveness of an evaluation ultimately depends on how well you have planned it.

Begin preparing for the evaluation when you are planning the program, component, or service that you want to evaluate. This approach will ensure that the evaluation reflects the program's goals and objectives. The process of preparing for an evaluation should involve the outside evaluator or consultant (if you decide to hire one), all program staff who are to be part of the evaluation team, and anyone else in the agency who will be involved. The following steps are designed to help you build a strong foundation for your evaluation.

Step 1: Decide what to evaluate. Programs vary in size and scope. Some programs have multiple components, whereas others have only one or two. You can evaluate your entire program, one or two program components, or even one or two services or activities within a component. To a large extent, your decision about what to evaluate will depend on your available financial and staff resources. If your resources are limited, you may want to narrow the scope of your evaluation. It is better to conduct an effective evaluation of a single program component than to attempt an evaluation of several components or an entire program without sufficient resources.

Sometimes the decision about what to evaluate is made for you. This often occurs when funders require evaluation as a condition of a grant award. Funders may require evaluations of different types of programs including, but not limited to, demonstration projects. Evaluation of demonstration projects is particularly important to funders because the purpose of these projects is to develop and test effective program approaches and models.

At other times, you or your agency administrators will make the decision about what to evaluate. As a general rule, if you are planning to implement new programs, components, or services, you should also plan to evaluate them. This step will help you determine at the outset whether your new efforts are implemented successfully, and are effective in attaining expected participant outcomes. It will also help identify areas for improvement.

If your program is already operational, you may decide you want to evaluate a particular service or component because you are unsure about its effectiveness with some of your participants. Or, you may want to evaluate your program because you believe it is effective and you want to obtain additional funding to continue or expand it.

Step 2: Build a model of your program. Whether you decide to evaluate an entire program, a single component, or a single service, you will need to build a model that clearly describes what you plan to do. A model will provide a structural framework for your evaluation. You will need to develop a clear picture of the particular program, component, or service to be evaluated so that everyone involved has a shared understanding of what they are evaluating. Building a model will help you with this task.


There are a variety of types of models. The model discussed in this chapter focuses on the program's implementation and participant outcome objectives. The model represents a series of logically related assumptions about the program's participant population and the changes you hope to bring about in that population as a result of your program. A sample completed program model and a worksheet that can be used to develop a model for your program appear at the end of this chapter. The program model includes the following features.

Assumptions about your target population. Your assumptions about your target population are the reasons why you decided to develop a program, program component, or service. These assumptions may be based on theory, your own experiences in working with the target population, or your review of existing research or program literature.

Using the worksheet, you would write your assumptions in column 1. Some examples of assumptions about a participant population that could underlie development of a program and potential responses to these assumptions include the following:

Assumption: Children of parents who abuse alcohol or other drugs are at high risk for parental abuse or neglect.

»Response: Develop a program to work with families to address substance abuse and child abuse problems simultaneously.

Assumption: Runaways and homeless youth are at high risk for abuse of alcohol and other drugs.

»Response: Develop a program that provides drug abuse intervention or prevention services to runaway and homeless youth.

Assumption: Families with multiple interpersonal, social, and economic problems need early intervention to prevent the development of child maltreatment, family violence, alcohol and other drug (AOD) problems, or all three.

»Response: Develop an early intervention program that provides comprehensive support services to at-risk families.

Assumption: Children from low-income families are at high risk for developmental, educational, and social problems.

»Response: Develop a program that enhances the developmental, educational, and social adjustment opportunities for children.

Assumption: Child protective services (CPS) workers do not have sufficient skills for working with families in which substance abuse and child maltreatment coexist.


»Response: Develop a training program that will expand the knowledge and skill base of CPS workers.

Program interventions (implementation objectives). The program's interventions or implementation objectives represent what you plan to do to respond to the problems identified in your assumptions. They include the specific services, activities, or products you plan to develop or implement. Using the worksheet, you can fill in your program implementation objectives in column 2. Some examples of implementation objectives that correspond to the above assumptions include the following:

• Provide intensive in-home services to parents and children.
• Provide drug abuse education services to runaway and homeless youth.
• Provide in-home counseling and case management services to low-income mothers with infants.
• Provide comprehensive child development services to children and families.
• Provide multidisciplinary training to CPS workers.

Immediate outcomes (immediate participant outcome objectives). Immediate participant outcome objectives can be entered in column 3. These are your expectations about the changes in participants' knowledge, attitudes, and behaviors that you expect to result from your intervention by the time participants complete the program. Examples of immediate outcomes linked to the above interventions include the following:

• Parents will acknowledge their substance abuse problems.
• Youth will demonstrate changes in their attitudes toward use of alcohol and other drugs.
• Mothers will increase their knowledge of infant development and of effective and appropriate parenting practices.
• Children will demonstrate improvements in their cognitive and interpersonal functioning.
• CPS workers will increase their knowledge about the relationship between substance abuse and child maltreatment and about the appropriate service approach for substance-abusing parents.

Intermediate outcomes. Intermediate outcomes, entered in column 4, represent the changes in participants that you think will follow after immediate outcomes are achieved. Examples of intermediate outcomes include the following:

After parents acknowledge their AOD abuse problems, they will seek treatment to address this problem.


After parents receive treatment for AOD abuse, there will be a reduction in the incidence of child maltreatment.

After runaway and homeless youth change their attitudes toward AOD use, they will reduce this use.

After mothers have a greater understanding of child development and appropriate parenting practices, they will improve their parenting practices with their infants.

After children demonstrate improvements in their cognitive and interpersonal functioning, they will increase their ability to function at an age-appropriate level in a particular setting.

After CPS workers increase their knowledge about working with families in which AOD abuse and child maltreatment coexist, they will improve their skills for working with these families.

Anticipated program impact. The anticipated program impact, specified in the last column of the model, represents your expectations about the long-term effects of your program on participants or the community. They are derived logically from your immediate and intermediate outcomes. Examples of anticipated program impact include the following:

After runaway and homeless youth reduce their AOD abuse, they will seek services designed to help them resolve other problems they may have.

After mothers of infants become more effective parents, the need for out-of-home placements for their children will be reduced.

After CPS workers improve their skills for working with families in which AOD abuse and child maltreatment coexist, collaboration and integration of services between the child welfare and the substance abuse treatment systems will increase.

Program models are not difficult to construct, and they lay the foundation for your evaluation by clearly identifying your program implementation and participant outcome objectives. These objectives can then be stated in measurable terms for evaluation purposes.

Step 3: State your program implementation and participant outcome objectives in measurable terms. The program model serves as a basis for identifying your program's implementation and participant outcome objectives. Initially, you should focus your evaluation on assessing whether implementation objectives and immediate participant outcome objectives were attained. This task will allow you to assess whether it is worthwhile to commit additional resources to evaluating attainment of intermediate and final or long-term outcome objectives.

Remember, every program, component, or service can be characterized by two types of objectives — implementation objectives and outcome objectives. Both types of objectives will need to be stated in measurable terms.

Often program managers believe that stating objectives in measurable terms means that they have to establish performance standards or some kind of arbitrary "measure" that the program must attain. This is not correct. Stating objectives in measurable terms simply means that you describe what you plan to do in your program and how you expect the participants to change in a way that will allow you to measure these objectives. From this perspective, measurement can involve anything from counting the number of services (or determining the duration of services) to using a standardized test that will result in a quantifiable score. Some examples of stating objectives in measurable terms are provided below.

Stating implementation objectives in measurable terms. Examples of implementation objectives include the following:

What you plan to do — The services/activities you plan to provide or the products you plan to develop, and the duration and intensity of the services or activities.

Who will do it — What the staffing arrangements will be; the characteristics and qualifications of the program staff who will deliver the services, conduct the training, or develop the products; and how these individuals will be recruited and hired.

Who you plan to reach and how many — A description of the participant population for the program; the number of participants to be reached during a specific time frame; and how you plan to recruit or reach the participants.

These objectives are not difficult to state in measurable terms. You simply need to be specific about your program's operations. The following example demonstrates how general implementation objectives can be transformed into measurable objectives.

General objective: Provide substance abuse prevention and intervention services to runaway youth.

» Measurable objectives:

What you plan to do — Provide eight drug abuse education class sessions per year, with each session lasting for 2 weeks and involving 2-hour classes convened for 5 days of each week.


Based on your answers to these questions, you may decide to revise your recruitment strategies, train crisis intervention counselors to be more effective in recruiting youth, visit the family to encourage the youth's participation, or offer transportation to youth to make it easier for them to attend the classes.

Stating participant outcome objectives in measurable terms. This process requires you to be specific about the changes in knowledge, attitudes, awareness, or behavior that you expect to occur as a result of participation in your program. One way to be specific about these changes is to ask yourself the following question:

How will we know that the expected changes occurred?

To answer this question, you will have to identify the evidence needed to demonstrate that your participants have changed. The following examples demonstrate how participant outcome objectives may be stated in measurable terms. A worksheet for defining measurable participant outcome objectives appears at the end of this chapter.

General objective: We expect to improve the parenting skills of program participants.

»Measurable objective: Parents participating in the program will demonstrate significant increases in their scores on an instrument that measures parenting skills from intake to completion of the parenting education classes.

General objective: We expect to reduce the use of alcohol and other drugs by youth participating in the substance abuse intervention program.

»Measurable objective: Youth will indicate significant decreases in their scores on an instrument that measures use of alcohol and other drugs from intake to after program participation.

General objective: We expect to improve CPS workers' ability to work effectively with families in which child maltreatment and parental substance abuse problems coexist.

»Measurable objective: CPS workers will demonstrate significant increases in their scores on instruments that measure knowledge of substance abuse and child maltreatment issues and skills for working with these families from before to after training.

General objective: We expect to reduce the risk of child maltreatment for children in the families served.

»Measurable objective: Families served by the program will be significantly less likely than a similar group of families to be reported for child maltreatment for 6 months after they complete the program.
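Several of the measurable objectives above refer to "significant increases" or "significant decreases" in scores from intake (pre) to completion (post). As a rough illustration of how such a pre/post comparison can be computed, a minimal sketch follows; the scores are made-up placeholders, and the use of a paired t-test from the scipy library is an assumption chosen for the example, not a requirement of this manual.

    # Minimal sketch (illustrative only): did parenting-skills scores increase
    # significantly from intake (pre) to completion (post)?
    # The scores below are invented example data, not real program results.
    from scipy import stats

    pre_scores = [55, 60, 48, 62, 70, 58, 65, 52]    # intake scores, one per parent
    post_scores = [63, 66, 55, 70, 72, 64, 71, 60]   # completion scores, same parents

    # A paired t-test compares each participant's own pre and post scores.
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

    mean_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
    print(f"Mean change: {mean_change:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")

    # A small p-value (commonly p < .05) would support the objective that scores
    # increased significantly from intake to completion.

Your evaluator may prefer a different test (for example, a nonparametric alternative) depending on the instrument and the number of participants; the point here is only that a "significant increase" implies comparing each participant's intake score with that same participant's completion score.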

Step 4: Identify the context for your evaluation. Part of planning for an evaluation requires understanding the context in which the evaluation will take place. Think again about building a house. Before you can design your house, you need to know something about your lot. If your lot is on a hill, you must consider the slope of the hill when you design your house. If there are numerous trees on the lot, you must design your house to accommodate the trees.

Similarly, program evaluations do not take place in a vacuum, and the context of an evaluation must be considered before the evaluation can be planned and designed. Although many contextual factors can affect your evaluation, the most common factors pertain to your agency, your staff, and your participant population.

The agency context. The characteristics of an agency implementing a program affect both the program and the evaluation. The aspects of your agency that need to be considered in preparing for your evaluation include the following:

The agency's evaluation-related resources. Does the agency have a management information system in place that can be used to collect data on participants and services? Does the agency have an advisory board that includes members who have experience evaluating programs? Does the agency have discretionary funds in the budget that can be used for an evaluation?

The agency's history of conducting program evaluations. Has the agency evaluated its programs before? If yes, was the experience a negative or positive one? If it was negative, what were the problems encountered and how can they be avoided in the current evaluation? Are the designs of previous agency evaluations appropriate for the evaluation you are currently planning?

If the agency has a history of program evaluation, you may be able to use the previous evaluation designs and methodology for your current evaluation. Review these with your outside evaluator or consultant to determine whether they are applicable to your current needs. If they are applicable, this will save you a great deal of time and money.

The program's relationship to other agency activities. Is the program you want to evaluate integrated into other agency activities, or does it function as a separate entity? What are the relationships between the program and other agency activities? If it is integrated, how will you evaluate it apart from other agency activities? This can be a complicated process. If your evaluation team does not include someone who is an experienced evaluator, you may need assistance from an outside consultant to help you with this task.

The staff context. The support and full participation of program staff in an evaluation is critical to its success. Sometimes evaluations are not successfully implemented because program staff who are responsible for data collection do not consistently administer or complete evaluation forms, follow the directions of the evaluation team, or make concerted efforts to track participants after they leave the program. The usual reason for staff-related evaluation problems is that staff were not adequately prepared for the evaluation or given the opportunity to participate in its planning and development. Contextual issues relevant to program staff include the following:

The staff's experiences in participating in program evaluations. Have your staff participated in evaluations prior to this one? If yes, was the experience a positive or negative one? If no, how much do they know about the evaluation process and how much training will they need to participate as full partners in the evaluation?

If staff have had negative experiences with evaluation, you will need to work with them to emphasize the positive aspects of evaluation and to demonstrate how this evaluation will be different from prior ones. All staff will need careful training if they are to be involved in any evaluation activities, and this training should be reinforced throughout the duration of the evaluation.

The staff's attitudes toward evaluation. Do your staff have positive or negative attitudes toward evaluation? If negative, what can be done to make them more positive? How can they be encouraged to support and participate fully in the evaluation?

Negative attitudes sometimes can be counteracted when program managers demonstrate enthusiasm for the evaluation and when evaluation activities are integrated with program activities. It may be helpful to demonstrate to staff how evaluation instruments also can be used as assessment tools for participants and therefore help staff develop treatment plans or needs assessments for individual participants.

The staff's knowledge about evaluation. Are your staff knowledgeable about the practices and procedures required for a program evaluation? Do any staff members have a background in conducting evaluations that could help you with the process?

Staff who are knowledgeable about evaluation practices and procedures can be a significant asset to an evaluation. They can assume some of the evaluation tasks and help train and supervise other staff on evaluation activities.


The participant population context. Before designing an evaluation, it is very important to understand the characteristics of your participant population. The primary issue relevant to the participant population context concerns the potential diversity of your program population. For example, is the program population similar or diverse with respect to age, gender, ethnicity, socioeconomic status, and literacy levels? If the population is diverse, how can the evaluation address this diversity?

Participant diversity can present a significant challenge to an evaluation effort. Instruments and methods that may be appropriate for some participants may not be for others. For example, written questionnaires may be easily completed by some participants, but others may not have adequate literacy levels. Similarly, face-to-face interviews may be appropriate for some of the cultural groups the program serves, but not for others.

If you serve a diverse population of participants, you may need to be flexible in your data collection methods. You may design an instrument, for example, that can be administered either as a written instrument or as an interview instrument. You also may need to have your instruments translated into different languages. However, it is important to remember that just translating an instrument does not necessarily mean that it will be culturally appropriate.

If you serve a particular cultural group, you may need to select the individuals who are to collect the evaluation information from the same cultural or ethnic group as your participants. If you are concerned about the literacy levels of your population, you will need to pilot test your instruments to make sure that participants understand what is being asked of them. More information related to pilot tests appears in Chapter 7.

Identifying contextual issues is essential to building a solid foundation for your evaluation. During this process, you will want to involve as many members of your expected evaluation team as possible. The decisions you make about how to address these contextual issues in your evaluation will be fundamental to ensuring that the evaluation operates successfully and that its design and methodology are appropriate for your participant population.

After you have completed these initial steps, it is time to "frame" your house. To frame a house, you need blueprints that detail the plans for the house. The blueprint for an evaluation is the evaluation plan. Chapter 6 discusses the elements that go into building this plan.


Chapter 6: What Should You Include in an Evaluation Plan?

If you decided to build a house, you probably would hire an architect to design the house and draw up the plans. Although it is possible to build a house without hiring an architect, this professional knows what is and is not structurally possible and understands the complex issues relevant to setting the foundation and placing the pipes, ducts, and electrical wires. An architect also knows what materials to use in various parts of the house and the types of materials that are best. However, an architect cannot design the house for you unless you tell him or her what you want.

An evaluation plan is a lot like an architect's plans for a house. It is a written document that specifies the evaluation design and details the practices and procedures to use to conduct the evaluation. Just as you would have an architect develop the plans for your house, it is a good idea to have an experienced evaluator develop the plans for your evaluation. Similarly, just as an architect cannot design your house without input from you, an experienced evaluator cannot develop an effective evaluation plan without assistance from you and your staff. The evaluator has the technical expertise, but you and your staff have the program expertise. Both are necessary for a useful evaluation plan.

If you plan to hire an outside evaluator to head your evaluation team, you may want to specify developing the evaluation plan as one of the evaluator's responsibilities, with assistance from you and program staff. If you plan to conduct an in-house evaluation and do not have someone on your evaluation team who is an experienced evaluator, this is a critical point at which to seek assistance from an evaluation consultant. The consultant can help you prepare the evaluation plan to ensure that your design and methodology are technically correct and appropriate for answering the evaluation questions.

This chapter provides information about the necessary ingredients to include in an evaluation plan. This information will help you:

• Work with an experienced evaluator (either an outside evaluator or someone within your agency) to develop the plan.

• Review the plan that an outside evaluator has developed to make sure all the ingredients are included.

• Understand the kinds of things that are required in an evaluation and why your outside evaluator or evaluation consultant has chosen a specific design or methodology.

An evaluation plan should be developed at least 2 to 3 months before the time you expect to begin the evaluation so that you have ample time to have the plan reviewed, make any necessary changes, and test out information collection procedures and instruments before collecting data.

Do not begin collecting evaluation information until the plan is completed and the instruments have been pilot-tested. A sample evaluation plan outline that may be used as a guide appears at the end of this chapter. The major sections of the outline are discussed below.

Section I. The evaluation framework 

This section can be used to present the program model (discussed in Chapter 5), program objectives, evaluation questions, and the timeframe for the evaluation (when collection of evaluation information will begin and end). It also should include a discussion of the context for the evaluation, particularly the aspects of the agency, program staff, and participants that may affect the evaluation (also discussed in Chapter 5). If an outside evaluator is preparing the plan, the evaluator will need your help to prepare this section.

Section II. Evaluating implementation objectives - procedures and methods

This section should provide detailed descriptions of the practices and procedures that will be used to answer evaluation questions pertaining to your program's implementation objectives. (Are implementation objectives being attained and, if not, why not? What barriers were encountered? What has facilitated attainment of objectives?)

Types of information needed. In an evaluation, information is often referred to as data. Many people think that the term "data" refers to numerical information. In fact, data can be facts, statistics, or any other items of information. Therefore, any information that is collected about your program or participants can be considered evaluation data.

The types of information needed will be guided by the objective you assess. For example, when the objective refers to what you plan to do, you must collect information on the types of services, activities, or educational/training products that are developed and implemented; who received them; and their duration and intensity.

When the objective pertains to who will do it, you must collect information on the characteristics of program staff (including their background and experience), how they were recruited and hired, their job descriptions, the training they received to perform their jobs, and the general staffing and supervisory arrangements for the program.

When the objective concerns who will participate, you must collect information about the characteristics of the participants, the numbers of participants, how they were recruited, barriers encountered in the recruitment process, and factors that facilitated recruitment.

Sources of necessary information. This refers to where, or from whom, you will obtain evaluation information. Again, the selection of sources will be guided by the objective you are assessing. For example:

• Information on services can come from program records or from interviews with program staff.

• Information on staff can come from program records, interviews with agency administrators, staff themselves, and program managers.

• Information on participants and recruitment strategies can come from program records and interviews with program staff and administrators.

• Information about barriers and facilitators to implementing the program can come from interviews with relevant program personnel.

This section of the plan also should include a discussion of how confidentiality of information will be maintained. You will need to develop participant consent forms that include a description of the evaluation objectives and how the information will be used. A sample participant consent form is provided at the end of this chapter.

How sources of information will be selected. If your program has a large number of staff members or participants, the time and cost of the evaluation can be reduced by including only a sample of these staff or participants as sources for evaluation information. If you decide to sample, you will need the assistance of an experienced evaluator to ensure that the sampling procedures result in a group of participants or staff that are appropriate for your evaluation objectives. Sampling is a complicated process, and if you do not sample correctly you run the risk of not being able to generalize your evaluation results to your participant population as a whole.

There are a variety of methods for sampling your sources.


• You can sample by identifying a specific timeframe for collecting evaluation-related information and including only those participants who were served during that timeframe.

• You can sample by randomly selecting the participants (or staff) to be used in the evaluation. For example, you might assign case numbers to participants and include only the even-numbered cases in your evaluation (a brief sketch of random selection appears after this list).

• You can sample based on specific criteria, such as length of time with the program (for staff) or characteristics of participants.
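As an illustration of the random-selection option above, here is a minimal sketch of drawing a simple random sample of participant case numbers. The case numbers, sample size, and seed are placeholders invented for the example; your evaluator should confirm the actual sampling procedure for your program.

    # Minimal sketch (illustrative only): drawing a simple random sample of
    # participant case numbers to include in the evaluation.
    # The case numbers and sample size below are placeholders, not real data.
    import random

    case_numbers = list(range(1, 201))   # e.g., 200 participants served this year
    sample_size = 50                     # how many to include in the evaluation

    random.seed(42)                      # fixed seed so the selection can be documented
    evaluation_sample = random.sample(case_numbers, sample_size)

    print(sorted(evaluation_sample))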

Methods for collecting information. For each implementation objective you are assessing, the evaluation plan must specify how information will be collected (the instruments and procedures) and who will collect it. To the extent possible, collection of evaluation information should be integrated into program operations. For example, in direct services programs, the program's intake, assessment, and termination forms could be designed so that they are useful for evaluation purposes as well as for program purposes.

In training programs, the registration forms for participants can be used to collect evaluation-related information as well as provide information relevant to conducting the training. If your program uses a management information system (MIS) to track services and participants, it is possible that it will incorporate much of the information that you need for your evaluation.

There are a number of methods for collecting information, including structured and open-ended interviews, paper and pencil inventories or questionnaires, observations, and systematic reviews of program or agency records or documents. The methods you select will depend upon the following:

• The evidence you need to establish that your objectives were attained

• Your sources

• Your available resources

Chapter 7 provides more information on these methods. The instruments or forms that you will use to collect evaluation information should be developed or selected as part of the evaluation plan. Do not begin an evaluation until all of the data collection instruments are selected or developed. Again, instrument development or selection can be a complex process and your evaluation team may need assistance from an experienced evaluator for this task.


Confidentiality. An important part of implementing an evaluation is ensuring that your participants are aware of what you are doing and that they are cooperating with the evaluation voluntarily. People should be allowed their privacy, and this means they have the right to refuse to give any personal or family information, the right to refuse to answer any questions, and even the right to refuse to be a part of the evaluation at all.

Explain the evaluation activities to participants and what will be required of them as part of the evaluation effort. Tell them that their names will not be used and that the information they provide will not be linked to them. Then, have them sign an informed consent form that documents that they understand the scope of the evaluation, know what is expected of them, agree (or disagree) to participate, and understand they have the right to refuse to give any information. They should also understand that they may drop out of the evaluation at any time without losing any program services. If children are involved, you must get the permission of their parents or guardians concerning their participation in the evaluation.

A sample informed consent form appears at the end of this chapter. Sometimes programs will have participants complete this form at the same time that they complete forms agreeing to participate in the program, or agreeing to let their children participate. This reduces the time needed for the evaluator to secure informed consent.

Timeframe for collecting information. Although you will have already specified a general timeframe for the evaluation, you will need to specify a timeframe for collecting data relevant to each implementation objective. Times for data collection will again be guided by the objective under assessment. You should be sure to consider collecting evaluation information at the same time for all participants; for example, after they have been in the program for 6 months.

Methods for analyzing information. This section of an evaluation plan describes the practices and procedures for use in analyzing the evaluation information. For assessing program implementation, the analyses will be primarily descriptive and may involve tabulating frequencies (of services and participant characteristics) and classifying narrative information into meaningful categories, such as types of barriers encountered, strategies for overcoming barriers, and types of facilitating factors. An experienced evaluator can help your evaluation team design an analysis plan that will maximize the benefits of the evaluation for the program and for program staff. More information on analyzing program implementation information is provided in Chapter 8.
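As a small illustration of the descriptive analyses mentioned above, the sketch below tabulates how often each service was delivered and groups narrative barrier comments into rough categories. The service names, comments, and keyword rules are all invented for the example; real categories would come from your own program records and your evaluator.

    # Minimal sketch (illustrative only): tabulating service frequencies and
    # classifying narrative comments. All values below are made-up examples.
    from collections import Counter

    services_delivered = ["home visit", "parenting class", "home visit",
                          "case management", "parenting class", "home visit"]
    print(Counter(services_delivered))   # counts of each service type

    barrier_comments = [
        "Family had no transportation to the parenting class",
        "Interpreter was not available for the intake interview",
        "Referral from the school arrived after the session started",
    ]

    def classify(comment):
        """Very rough keyword-based grouping of narrative barrier comments."""
        text = comment.lower()
        if "transportation" in text:
            return "transportation barrier"
        if "interpreter" in text or "language" in text:
            return "language/cultural barrier"
        return "other barrier"

    print(Counter(classify(c) for c in barrier_comments))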


Section III. Evaluating participant outcome objectives

The practices and procedures for evaluating attainment of participant outcome objectives are similar to those for evaluating implementation objectives. However, this part of your evaluation plan will need to address a few additional issues.

Selecting your evaluation design. A plan for evaluating participant outcome objectives must include a description of the evaluation design. Again, the assistance of an experienced evaluator (either an outside evaluator, consultant, or someone within your agency) is critical at this juncture.

The evaluation design must allow you to answer these basic questions about your participants:

• Did program participants demonstrate changes in knowledge, attitudes, behaviors, or awareness?

• Were the changes the result of the program's interventions?

Two commonly used evaluation designs are:

• Pre-intervention and post-intervention assessments

• Pre-intervention and post-intervention assessments using a comparison or control group

A pre- and post-intervention design involves collecting information only on program participants. This information is collected at least twice: once before participants begin the program and again either immediately or some time after they complete or leave the program. You can collect outcome information as often as you like after participants enter the program, but you must collect information on participants before they enter the program. This is called baseline information and is essential for demonstrating that a change occurred.

If you are implementing an education or training program, this type of design can be effective for evaluating immediate changes in participants' knowledge and attitudes. In these types of programs, you can assess participants' knowledge and attitudes prior to the training and immediately after training with some degree of certainty that any observed changes resulted from your interventions.

However, if you want to assess longer-term outcomes of training and education programs or any outcomes of service delivery programs, the pre-intervention and post-intervention design by itself is not recommended. Collecting information only on program participants does not allow you to answer the question: Were participant changes the result of program interventions? The changes may have occurred as a result of other interventions, or they may be changes that would have occurred without any intervention at all.

To be able to attribute participant changes to your program's intervention, you need to use a pre- and post-intervention design that incorporates a comparison or control group. In this design, two groups of individuals are included in your evaluation.

• The treatment group (individuals who participate in your program).

• The non-treatment group (individuals who are similar to those in the treatment group, but who do not receive the same services as the treatment group).

The non-treatment group is called a control group if all eligible program participants are randomly assigned to the treatment and non-treatment groups. Random assignment means that members of both groups can be assumed to be similar with respect to all key characteristics except program participation. Thus, potential sources of bias are "controlled." A comparison group is a non-treatment group where you do not randomly assign people. A comparison group could be families from another program, children from another school, or former program participants.
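A minimal sketch of random assignment appears below. The participant identifiers, group sizes, and fixed seed are placeholders for illustration; in practice the assignment procedure should be worked out with your evaluator and documented.

    # Minimal sketch (illustrative only): randomly assigning eligible participants
    # to a treatment group and a control group. Identifiers are placeholders.
    import random

    eligible = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

    random.seed(7)              # fixed seed so the assignment can be documented
    random.shuffle(eligible)    # randomize the order of eligible participants

    midpoint = len(eligible) // 2
    treatment_group = eligible[:midpoint]   # receive the program being evaluated
    control_group = eligible[midpoint:]     # do not receive it during the study

    print("Treatment:", treatment_group)
    print("Control:  ", control_group)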

Using a control group greatly strengthens your evaluation, but there are barriers to implementing this design option. Program staff may view random assignment as unethical because it deprives eligible participants of needed services. As a result, staff sometimes will prioritize eligible participants rather than use random assignment, or staff may simply refuse to assign individuals to the control group. Staff from other agencies may also feel random assignment is unethical and may refuse to refer individuals to your program.

To avoid these potential barriers, educate staff from your program and from other agencies in your community about the benefits of the random assignment process. No one would argue with the belief that it is important to provide services to individuals who need them. However, it is also important to find out if those services actually work. The random assignment process helps you determine whether or not your program's services are having the anticipated effect on participants. Staff from your program and from other agencies also must be informed that random assignment does not mean that control group members cannot receive any services or training. They may participate in the program after the evaluation data have been collected, or they may receive other types of services or training.


changes can be attributed to the program. A more detailed discussion on analyzing information on participant outcomes is provided in Chapter 8.

Section IV. Procedures for managing and monitoring the evaluation

This section of the evaluation plan can be used to describe the practices and procedures you expect to use to manage the evaluation. If staff are to be responsible for data collection, you will need to describe how they will be trained and monitored. You may want to develop a data collection manual that staff can use. This will ensure consistency in information collection and will be useful for staff who are hired after the evaluation begins. Chapter 7 discusses various types of evaluation monitoring activities.

This final section of the evaluation plan also should include a discussion of how changes in program operations will be handled in the evaluation. For example, if a particular service or program component is discontinued or added to the program, you will need to have procedures for documenting the time that this change occurred, the reasons for the change, and whether particular participants were involved in the program prior to or after the change. This will help determine whether the change had any impact on attainment of expected outcomes.

Once you and your experienced evaluator have completed the evaluation plan, it is a good idea to have it reviewed by selected individuals for their comments and suggestions. Potential reviewers include the following:

• Agency administrators who can determine whether the evaluation plan is consistent with the agency's resources and evaluation objectives.

• Program staff who can provide feedback on whether the evaluation will involve an excessive burden for them and whether it is appropriate for program participants.

• Advisory board members who can assess whether the evaluation will provide the type of information most important to know.

• Participants and community members who can determine if the evaluation instruments and procedures are culturally sensitive and appropriate.

After the evaluation plan is complete and the instruments pilot tested, you are ready to begin collecting evaluation information. Because this process is so critical to the success of an evaluation, the major issues pertaining to information collection are discussed in more detail in the following chapter.


Sample Outline for Evaluation Plan

I. Evaluation framework
   A. What you are going to evaluate
      1. Program model (assumptions about target population, interventions, immediate outcomes, intermediate outcomes, and final outcomes)
      2. Program implementation objectives (stated in general and then measurable terms)
         a. What you plan to do and how
         b. Who will do it
         c. Participant population and recruitment strategies
      3. Participant outcome objectives (stated in general and then measurable terms)
      4. Context for the evaluation
   B. Questions to be addressed in the evaluation
      1. Are implementation objectives being attained? If not, why (that is, what barriers or problems have been encountered)? What kinds of things facilitated implementation?
      2. Are participant outcome objectives being attained? If not, why (that is, what barriers or problems have been encountered)? What kinds of things facilitated attainment of participant outcomes?
         a. Do participant outcomes vary as a function of program features? (That is, which aspects of the program are most predictive of expected outcomes?)
         b. Do participant outcomes vary as a function of characteristics of the participants or staff?
   C. Timeframe for the evaluation
      1. When data collection will begin and end
      2. How and why timeframe was selected
II. Evaluating implementation objectives - procedures and methods (question 1: Are implementation objectives being attained, and if not, why not?)
   A. Objective 1 (state objective in measurable terms)
      1. Type of information needed to determine if objective 1 is being attained and to assess barriers and facilitators
      2. Sources of information (that is, where you plan to get the information including staff, participants, program documents). Be sure to include your


evaluation-related feedback to program managers and staff

Sample Informed Consent Form

We would like you to participate in the Evaluation of [program name]. Your participation is important to us and will help us assess the effectiveness of the program. As a participant in [program name], we will ask you to [complete a questionnaire, answer questions in an interview, or other task].

We will keep all of your answers confidential. Your name will never be included in any reports and none of your answers will be linked to you in any way. The information that you provide will be combined with information from everyone else participating in the study.

[If information/data collection includes questions relevant to behaviors such as child abuse, drug abuse, or suicidal behaviors, the program should make clear its potential legal obligation to report this information - and that confidentiality may be broken in these cases. Make sure that you know what your legal reporting requirements are before you begin your evaluation.]

You do not have to participate in the evaluation. Even if you agree to participate now, you may stop participating at any time or refuse to answer any question. Refusing to be part of the evaluation will not affect your participation or the services you receive in [program name].

If you have any questions about the study, you may call [name and telephone number of evaluator, program manager or community advocate].

By signing below, you confirm that this form has been explained to you and that you understand it.

Please Check One:

AGREE TO PARTICIPATE

DO NOT AGREE TO PARTICIPATE

Signed: __________________________________________
Participant or Parent/Guardian

Date: __________________________________________


Chapter 7: How Do You Get the Information You Need for Your Evaluation?

As Chapter 6 noted, a major section of your evaluation plan concerns evaluation information - what kinds of information you need, what the sources for this information will be, and what procedures you use to collect it. Because these issues are so critical to the success of your evaluation effort, they are discussed in more detail in this chapter.

In a program evaluation, the information you collect is similar to the materials you use when you build a house. If you were to build a house, you would be very concerned about the quality of the materials used. High-quality materials ensure a strong and durable house. In an evaluation, the quality of the information you collect also affects its strength and durability. The higher the quality of the information collected, the better the evaluation.

At the end of the chapter, there are two worksheets to help you plan out the data collection process. One is a sample worksheet completed for a drug abuse prevention program for runaway and homeless youth, and the other is a blank worksheet that you and your evaluation team can complete together. The following sections cover each column of the worksheet.

What specific information do you need to address objectives?

Using the worksheet, fill in your program implementation (or participant outcome) objectives in column 1. Make sure that these objectives are stated in measurable terms. Your statement of objectives in measurable terms will determine the kinds of information you need and will avoid the problem of collecting more information than is actually necessary.

Next, complete column 2 by specifying the information that addresses each objective. This information is sometimes referred to as the data elements. For example, if two of your measurable participant outcome objectives are to improve youth's grades and scores on academic tests and reduce their incidence of behavioral problems as reported by teachers and student self-reports, you will need to collect the following information:

• Student grades
• Academic test scores
• Number of behavior or discipline reports
• Teacher assessments of classroom behaviors
• Student self-assessments of classroom behaviors

These items are the data elements.
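One way to see how these data elements hang together is to sketch the record you might keep for each youth. The field names and values below are placeholders invented for illustration, not a required format or part of the manual's worksheets.

    # Minimal sketch (illustrative only): one possible per-youth record holding the
    # data elements listed above. Field names and values are placeholders.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class YouthRecord:
        participant_id: str
        grades: dict = field(default_factory=dict)        # e.g., {"math": "B"}
        test_scores: dict = field(default_factory=dict)    # academic test scores
        discipline_reports: int = 0                        # number of behavior reports
        teacher_behavior_rating: Optional[int] = None      # teacher assessment
        self_behavior_rating: Optional[int] = None         # student self-assessment

    record = YouthRecord("Y001", grades={"math": "B"}, test_scores={"reading": 78},
                         discipline_reports=1, teacher_behavior_rating=3,
                         self_behavior_rating=4)
    print(record)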


Another strategy is to identify existing information on your participants. Although your program may not collect certain information, other programs and agencies may. You might want to seek the cooperation of other agencies to obtain their data, or develop a collaboration that supports your evaluation.

What are the most effective data collection instruments?

Column 4 identifies the instruments that you will use to collect the data from specified sources. Some options for information collection instruments include the following:

• Written surveys or questionnaires

• Oral interviews (either in person or on the telephone) or focus group interviews (either structured or unstructured)

• Extraction forms to be used for written records (such as case records or existing databases)

• Observation forms or checklists to be used to assess participants' or staff members' behaviors

The types of instruments selected should be guided by your data elements. For example, information on barriers or facilitators to program implementation would be best obtained through oral interviews with program administrators and staff. Information on services provided may be more accurate if obtained by using a case record or program log extraction form.

Information on family functioning may be best obtained through observations or questionnaires designed to assess particular aspects of family relationships and behaviors. Focus group interviews are not always useful for collecting information on individual participant outcomes, but may be used effectively to assess participants' perceptions of a program.

Instruments for evaluating program implementation objectives. Your evaluation team will probably need to develop instruments to collect information on program implementation objectives. This is not a complicated process. You must pay attention to your information needs and potential sources and develop instruments designed specifically to obtain that information from that source. For example, if you want to collect information on planned services and activities from program planners, it is possible to construct an interview instrument that includes the following questions:

Why was the decision made to develop (the particular service or activity)? 

Who was involved in making this decision? 


What plans were made to ensure the cultural relevancy of (the particular service or activity)?

If case records or logs are viewed as appropriate sources for evaluation information, you will need to develop a case record or program log extraction form. For example, if you want to collect information on actual services or activities, you may design a records extraction form that includes the following items:

How many times was (the particular activity or service) provided to each participant?

Who provided or implemented (the particular activity or service)?

What was the intensity of (the particular activity or service)? (How long was it provided for each participant at each time?)

What was the duration of (the particular activity or service)? (What was the timeframe during which the participant received or participated in the activity or service?)

Instruments for evaluating participant outcome objectives. Participant outcome objectives can be assessed using a variety of instruments, depending on your information needs. If your evaluation team decides to use interview instruments, observations, or existing records to collect participant outcome information, you will probably need to develop these instruments. In these situations, you would follow the same guidelines as you would use to develop instruments to assess program implementation objectives.

If your evaluation team decides to use questionnaires or assessment inventories to collect information on participant outcomes, you have the option of selecting existing instruments or developing your own. Many existing instruments can be used to assess participant outcomes, particularly with respect to child abuse potential, substance use, family cohesion, family stress, behavioral patterns, and so on. It is not possible to identify specific instruments or inventories in this manual as particularly noteworthy or useful, because the usefulness of an instrument depends to a large extent on the nature of your program and your participant outcome objectives. If you do not have someone on your evaluation team who is knowledgeable regarding existing assessment instruments, this would be a critical time to enlist the assistance of an outside consultant to identify appropriate instruments. Some resources for existing instruments are provided in the appendix.

There are advantages and disadvantages to using existing instruments. The primary advantages of using existing instruments or inventories are noted below:


They often, but not always, are standardized. This means that the instrument has been administered to a very large population and the scores have been "normed" for that population. When an instrument has been "normed," it means that a specified range of scores is considered "normal," whereas scores in another range are considered "non-normal." Non-normal scores on instruments assessing child abuse potential, substance use, family cohesion, and the like may be indicators of potential problem behaviors.

They usually, but not always, have been established as valid and reliable. An instrument is valid if it measures what it is supposed to measure. It is reliable if individuals' responses to the instrument are consistent over time or within the instrument.

The primary disadvantages of using existing instruments are as follows:

They are not always appropriate for all cultural or ethnic populations. Scores that are "normed" on one cultural group may not reflect the norm of members of another cultural group. Translating the instrument into another language is not sufficient to make it culturally appropriate. The items and scoring system must reflect the norms, values, and traditions of the given cultural group.

They may not be useful for your program. Your participant outcome objectives and the interventions you developed to attain those objectives may not match what is being assessed by a standardized instrument. For example, if you want to evaluate the effects that a tutoring program has on runaway and homeless youth, an instrument measuring depression may not be useful.

If an outside consultant selects an instrument for your program evaluation, make sure that you and other members of the evaluation team review each item on the instrument to ensure that the information it asks for is consistent with your expectations about how program participants will change.

If your evaluation team is unable to find an appropriate existing instrument to assess participant outcome objectives, they will need to develop one. Again, if there is no one on your team who has expertise in developing assessment instruments, you will need the assistance of an outside consultant for this task.

Whether you decide to use an existing instrument or develop one, the instrument used should meet the following criteria:

It should measure a domain addressed by your program. If you are providing parenting training, you would want an instrument to measure changes in parenting knowledge, skills, and behaviors,


in large part on the type of program and the characteristics of the participants. Training and education programs, for example, may have participants complete instruments in a group setting. Service delivery programs may find it more appropriate to individually administer instruments.

Everyone involved in collecting evaluation information must be trained in data collection procedures. Training should include:

An item-by-item review of each of the instruments to be used in data collection, including a discussion of the meaning of each item, why it was included in the instrument, and how it is to be completed

A review of all instructions on administering or using the instruments, including instructions to the respondents

A discussion of potential problems that may arise in administering the instrument, including procedures for resolving the problems

A practice session during which data collection staff administer the instrument to one another, use it to extract information from existing case records or program logs, or complete it themselves, if it is a written questionnaire

A discussion of respondent confidentiality, including administering an informed consent form, answering respondents' questions about confidentiality, keeping completed instruments in a safe place, and procedures for submitting instruments to the appropriate person

A discussion of the need for frequent reviews and checks of the data and for meetings of data collectors to ensure data collection continues to be consistent.

It is useful to develop a manual that describes precisely what is expected in the information collection process. This will be a handy reference for data collection staff and will be useful for new staff who are hired after the initial evaluation training has occurred.

What can be done to ensure the effectiveness of instruments and procedures?

Even after you have selected or constructed the instruments and trained the data-collection staff, you are not yet ready to begin collecting data. Before you can actually begin collecting evaluation information, you must "pilot test" your instruments and procedures. The pilot test will determine whether the instruments and procedures are effective - that they obtain the information needed for the evaluation, without being excessively burdensome to the respondents, and that they are appropriate for the program participant population.

You may pilot test your instruments on a small sample of program records or individuals who are similar to your program participants. You can use a sample of your own program's participants who will not participate in the actual evaluation or a group of participants in another similar program offered by your agency or by another agency in your community.

The kinds of information that can be obtained from a pilot test include:

How long it takes to complete interviews, extract information from records, or fill out questionnaires

Whether self-administered questionnaires can be completed by participants without assistance from staff

Whether the necessary records are readily available, complete, and consistently maintained

Whether the necessary information can be collected in the established time frame

Whether instruments and procedures are culturally appropriate

Whether the notification procedures (letters, informed consent, and the like) are easily implemented and executed

To the extent possible, pilot testing should be done by data collection staff. Ask them to take notes and make comments on the process of administering or using each instrument. Then review these notes and comments to determine whether changes are needed in the instruments or procedures. As part of pilot testing, instruments should be reviewed to assess the number of incomplete answers, unlikely answers, comments on items that may be included in the margins, or other indicators that revisions are necessary.
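A simple tabulation can speed up this review of completed pilot instruments. The sketch below (in Python, using the pandas library) counts incomplete and out-of-range answers for each item in a small, invented pilot data set; the item names and the 1-to-5 response scale are assumptions for illustration only, so adapt the column names and valid range to your own instrument.

    import pandas as pd

    # Invented pilot-test responses: one row per respondent, one column per item.
    # The item names and the 1-to-5 response scale are assumptions.
    pilot = pd.DataFrame({
        "respondent_id": [1, 2, 3, 4, 5],
        "item_1": [3, 4, None, 2, 5],
        "item_2": [1, 1, 1, None, None],
        "item_3": [5, 9, 4, 3, 2],   # the 9 is out of range on a 1-to-5 scale
    })

    items = [c for c in pilot.columns if c.startswith("item_")]

    # Count incomplete (missing) answers for each item.
    missing_per_item = pilot[items].isna().sum()

    # Flag "unlikely" answers, defined here as values outside the 1-to-5 range.
    out_of_range_per_item = ((pilot[items] < 1) | (pilot[items] > 5)).sum()

    print("Missing answers per item:")
    print(missing_per_item)
    print("Out-of-range answers per item:")
    print(out_of_range_per_item)

Items with many missing or out-of-range answers are good candidates for rewording or for clearer instructions to respondents.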

In addition, you can ask questions of participants after the pilot test to obtain their comments on the instruments and procedures. Frequently, after pilot testing the evaluation team will need to improve the wording of some questions or instructions to the respondent and delete or add items.

How can you monitor data collection activities?

Once data collection begins, this task will require careful monitoring to ensure consistency in the process. Nothing is more damaging to an evaluation effort than information collection instruments that have been incorrectly or inconsistently administered, or that are incomplete.

There are various activities that can be undertaken as part of the monitoring process.

Establish a routine and timeframe for submitting completed instruments. This may be included in your data collection manual.

It is a good idea to have instruments submitted to the appropriate member of the evaluation team immediately after completion. That person can then review the instruments and make sure that they are being completed correctly. This will allow problems to be identified and resolved immediately. You may need to retrain some members of the staff responsible for data collection or have a group meeting to re-emphasize a particular procedure or activity.

Conduct random observations of the data collection process. A member of the evaluation team may be assigned the responsibility of observing the data collection process at various times during the evaluation. This person, for example, may sit in on an interview session to make sure that all of the procedures are being correctly conducted.

Conduct random checks of respondents. As an additional quality control measure, someone on the evaluation team may be assigned the responsibility of checking with a sample of respondents on a routine basis to determine whether the instruments were administered in the expected manner. This individual may ask respondents if they were given the informed consent form to sign and if it was explained to them, where they were interviewed, whether their questions about the interview were answered, and whether they felt the attitude or demeanor of the interviewer was appropriate.

Keep completed interview forms in a secure place. This will ensure that instruments are not lost and that confidentiality is maintained. Completed data collection instruments should not be left lying around, and access to this information should be limited. You may want to consider number-coding the forms rather than using names, while keeping a secured database that connects the names to numbers.
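One possible way to set up such number-coding is sketched below in Python; the roster names, ID format, and file name are hypothetical. The point of the design is that completed instruments carry only the study ID, while the file that links names to IDs is kept separately, with restricted access.

    import csv
    import secrets

    # Hypothetical participant roster; names are placeholders.
    participants = ["Participant A", "Participant B", "Participant C"]

    # Assign each participant a random study ID. Completed instruments are
    # labeled with the study ID only, never the name.
    lookup = {name: "P-" + secrets.token_hex(4) for name in participants}

    # The name-to-ID lookup is written to a separate file that must be stored
    # securely (for example, encrypted or on restricted-access storage).
    with open("id_lookup_restricted.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "study_id"])
        for name, study_id in lookup.items():
            writer.writerow([name, study_id])

    print(sorted(lookup.values()))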

Encourage staff to view the evaluation as an important part of the program. If program staff are given the responsibility for data collection, they will need support from you for this activity. Their first priority usually is providing services or training to participants, and collecting evaluation information may not be valued. You will need to emphasize to your staff that the evaluation is part of the program and that evaluation information can help them improve their services or training to participants.

Once evaluation information is collected, you can begin to analyze it. To maximize the benefits of the evaluation to you, program staff, and program participants, this process should take place on an ongoing basis or at specified intervals during the evaluation. Information on the procedures for analyzing and interpreting evaluation information is discussed in the following chapter.

Chapter 8: How Do You Make Sense of Evaluation Information?

For evaluation information to be useful, it must be analyzed and interpreted. Many program managers and staff are intimidated by this activity, believing that it is best left to an expert. This is only partially true. If your evaluation team does not include someone who is experienced in analyzing qualitative and quantitative evaluation data, you will need to seek the assistance of an outside consultant for this task. However, it is important for you and all other members of the evaluation team to participate in the analysis activities. This is the only way to ensure that the analyses will answer your evaluation questions, not ones that an outside consultant may want to answer.

Think again about building a house. You may look at a set of blueprints and see only a lot of lines, numbers, and arrows. But when a builder looks at the blueprints, this person sees exactly what needs to be done to build the house and understands all of the technical requirements. This is why most people hire an expert to build one. However, hiring an expert builder does not mean that you do not need to participate in the building process. You need to make sure that the house the builder is working on is the house you want, not one that the builder wants.

This chapter will not tell you how to analyze evaluation data. Instead, it provides some basic information about different procedures for analyzing evaluation data to help you understand and participate more fully in this process. There are many ways to analyze and interpret evaluation information. The methods discussed in this chapter are not the only methods one can use. Whatever methods the evaluation team decides to use, it is important to realize that analysis procedures must be guided by the evaluation questions. The following evaluation questions are discussed throughout this manual:

Are program implementation objectives being attained? If not, why not? What types of things were barriers to or facilitated attaining program implementation objectives?

Are participant outcome objectives being attained? If not, why not? What types of things were barriers to or facilitated attaining participant outcome objectives?

The following sections discuss procedures for analyzing evaluation information to answer both of these questions.

Analyzing information about program implementation objectives

In this manual, the basic program implementation objectives have been described as follows:

• What you plan to do
• Who will do it
• Whom you plan to reach (your expected participant population) and with what intensity and duration
• How many you expect to reach

You can analyze information about attainment of program implementation using a descriptive process. You describe what you did (or are doing), who did it, and the characteristics and number of participants. You then compare this information to your initial objectives and determine whether there is a difference between objectives and actual implementation. This process will answer the question: Were program implementation objectives attained?

If there are differences between your objectives and your actual implementation, you can analyze your evaluation information to identify the reasons for the differences. This step answers the question: If not, why not?

You also can use your evaluation information to identify barriers encountered to implementation and factors that facilitated implementation. This information can be used to "tell the story" of your program's implementation. An example of how this information might be organized for a drug abuse prevention program for runaway and homeless youth is provided in a table at the end of this chapter. The table represents an analysis of the program's measurable implementation objective concerning what the program plans to do.

You may remember that the measurable objectives introduced as examples in this manual for what you plan to do for the drug abuse prevention program were the following:

• The program will provide eight drug abuse education class sessions per year.
• Each session will last for 2 weeks.
• Each 2-week session will involve 2 hours of classes per day.
• Classes will be held for 5 days of each week of the session.

In the table, these measurable objectives appear in the first column. The actual program implementation information is provided in the second column. For this program, there were differences between objectives and actual implementation for three of the four measurable objectives. Column 3 notes the presence or absence of differences, and column 4 provides the reasons for those changes.

Columns 5 and 6 in the table identify the barriers encountered and the facilitating factors. These are important to identify whether or not implementation objectives were attained. They provide the context for understanding the program and will help you interpret the results of your analyses.
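If your team keeps this comparison in a small data set rather than a word-processing table, it is easy to update and filter as the evaluation proceeds. The sketch below (Python with pandas) is one possible way to organize the six columns, using abbreviated entries drawn from the drug abuse prevention example in this chapter; the column names and wording are illustrative, not required.

    import pandas as pd

    # One row per measurable implementation objective; the columns mirror the
    # six columns of the table described above. Entries are abbreviated from
    # the drug abuse prevention example and are illustrative only.
    comparison = pd.DataFrame([
        {"objective": "Eight class sessions per year",
         "actual": "Six sessions implemented",
         "difference": "Yes",
         "reason": "Startup delayed; hiring qualified staff took longer than expected",
         "barriers": "Difficulty recruiting and hiring staff",
         "facilitators": "Volunteers assisted once staff were on board"},
        {"objective": "Each session lasts 2 weeks",
         "actual": "First two sessions 2 weeks; remaining sessions 1 week",
         "difference": "Yes",
         "reason": "Difficulty maintaining youth interest in the second week",
         "barriers": "Attendance dropped in week 2",
         "facilitators": "Youth residing in the shelter were available in week 1"},
    ])

    # List the objectives where actual implementation differed from the plan.
    print(comparison[comparison["difference"] == "Yes"][["objective", "reason"]])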

By reviewing the information in this table, you would be able to say the following things about your program:

The program implemented only six drug abuse prevention sessions instead of the intended eight sessions.

» The fewer than expected sessions were caused by a delay in startup time.

» The delay was caused by the difficulty of recruiting and hiring qualified staff, which took longer than expected.

» With staff now on board, we expect to be able to implement the full eight sessions in the second year.

» Once staff were hired, the sessions were implemented smoothly because there were a number of volunteers who provided assistance in organizing special events and transporting participants to the events.

Although the first two sessions were conducted for 2 weeks each, as intended, the remaining sessions were conducted for only 1 week.

» The decreased duration of the sessions was caused by the difficulty of maintaining the youth's interest during the 2-week period.

» Attendance dropped considerably during the second week, usually because of lack of interest, but sometimes because youth were moved to other placements or returned home.

» Attendance during the first week was maintained because of the availability of youth residing in the shelter.

For the first two sessions the class time was 2 hours per day, as originally intended. After the duration of the sessions was decreased, the class time was increased to 3 hours per day.

» The increase was caused by the need to cover the curriculum material during the session.

» The extensive experience of the staff, and the assistance of volunteers, facilitated covering the material during the 1-week period.

» The youth's interest was high during the 1-week period.

The classes were provided for 5 days during the 1-week period, as intended.

» This schedule was facilitated by staff availability and the access to youth residing in the shelter.

» It was more difficult to get youth from crisis intervention services to attend for all 5 days.

Information on this implementation objective will be expanded as you conduct a similar analysis of information relevant to the other implementation objectives of staffing (who will do it) and the population (number and characteristics of participants).

As you can see, if this information is provided on an ongoing basis, it will provide opportunities for the program to improve its implementation and better meet the needs of program participants.

Analyzing information about participant outcome objectives

The analysis of participant outcome information must be designed to answer two questions:

Did the expected changes in participants' knowledge, attitudes, behavior, or awareness occur?

If changes occurred, were they the result of your program's interventions?

Another question that can be included in your analysis of participant outcome information is:

Did some participants change more than others and, if so, what explains this difference? (For example, characteristics of the participants, types of interventions, duration of interventions, intensity of interventions, or characteristics of staff.)

Your evaluation plan must include a detailed description of how you will analyze information to answer these questions. It is very important to know exactly what you want to do before you begin collecting data, particularly the types of statistical procedures that you will use to analyze participant outcome information.

Understanding statistical procedures. Statistical procedures are used to understand changes occurring among participants as a group. In many instances, your program participants may vary considerably with respect to change. Some participants may change a great deal, others may change only slightly, and still others may not change or may change in an unexpected direction. Statistical procedures will help you assess the overall effectiveness of your program and its effectiveness with various types of participants.

Statistical procedures also are important tools for an evaluation because they can determine whether the changes demonstrated by your participants are the result of a chance occurrence or are caused by the variables (program or procedure) being assessed. This is called statistical significance. Usually, a change may be considered statistically significant (not just a chance occurrence) if the probability of its happening by chance is less than 5 in 100 cases. However, in some situations, evaluators may set other standards for establishing significance, depending on the nature of the program, what is being measured, and the number of participants.
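As a concrete illustration, a paired t-test is one common way to check whether a before-and-after change among the same participants is larger than chance alone would explain. The sketch below (Python with scipy) uses invented intake and exit scores and applies the conventional 5-in-100 threshold described above; your evaluator may choose a different test or a different standard.

    from scipy import stats

    # Invented intake and exit scores for the same ten participants.
    intake_scores = [10, 12, 9, 14, 11, 13, 10, 8, 12, 11]
    exit_scores = [13, 14, 10, 16, 12, 15, 13, 9, 14, 12]

    # A paired t-test asks whether the average intake-to-exit change is larger
    # than would be expected by chance.
    t_stat, p_value = stats.ttest_rel(exit_scores, intake_scores)

    # Apply the conventional threshold: a probability below 5 in 100 (p < 0.05).
    if p_value < 0.05:
        print(f"The change is statistically significant (p = {p_value:.3f}).")
    else:
        print(f"The change is not statistically significant (p = {p_value:.3f}).")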

Another use for statistical procedures is determining the similarity between your treatment and nontreatment group members. This is particularly important if you are using a comparison group rather than a control group as your nontreatment group. If a comparison group is to be used to establish that participant changes were the result of your program's interventions and not some other factors, you must demonstrate that the members of the comparison group are similar to your participants in every key way except for program participation.

Statistical procedures can be used to determine the extent of similarity of group members with respect to age, gender, socioeconomic status, marital status, race or ethnicity, or other factors.
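The sketch below shows two such checks on invented baseline data: a two-sample t-test comparing the ages of the program and comparison groups, and a chi-square test comparing their gender composition. A large p-value (for example, above 0.05) suggests no detectable difference between the groups on that characteristic. Which characteristics you compare depends on your program and your data.

    import numpy as np
    from scipy import stats

    # Invented baseline ages for program participants and a comparison group.
    program_ages = np.array([19, 22, 25, 31, 27, 24, 29, 33, 21, 26])
    comparison_ages = np.array([20, 23, 26, 30, 28, 25, 27, 32, 22, 24])

    # Two-sample t-test: are the groups similar in age?
    t_stat, p_age = stats.ttest_ind(program_ages, comparison_ages)

    # Chi-square test: are the groups similar in gender composition?
    #                      female  male
    counts = np.array([[6, 4],    # program group
                       [5, 5]])   # comparison group
    chi2, p_gender, dof, expected = stats.chi2_contingency(counts)

    print(f"Age difference: p = {p_age:.3f}")
    print(f"Gender composition difference: p = {p_gender:.3f}")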

Statistical tests are a type of statistical procedure that examine the relationships among variables in an analysis. Some statistical tests include a dependent variable, one or more independent variables, and potential mediating or conditioning variables.

Dependent variables are your measures of the knowledge, attitude, or behavior that you expect will change as a result of your program. For example, if you expect parents to increase their scores on an instrument measuring understanding of child development or effective parenting, the scores on that instrument are the dependent variable for the statistical analyses.

Independent variables refer to your program interventions or elements. For example, the time of data collection (before and after program participation), the level of services or training, or the duration of services may be your independent variables.

Mediating or conditioning variables are those that may affect the relationship between the independent variable and the dependent variable. These are factors such as the participant's gender, socioeconomic status, age, race, or ethnicity.

Most statistical tests assess the relationships among independent variables, dependent variables, and mediating variables. The specific question answered by most statistical tests is: Does the dependent variable vary as a function of levels of the independent variable? For example, do scores on an instrument measuring understanding of child development vary as a function of when the instrument was administered (before and after the program)? In other words, did attendance at your program's child development class increase parents' knowledge?

Most statistical tests can also answer whether any other factors affected the relationship between the independent and dependent variables. For example, was the variation in scores from before to after the program affected by the ages of the persons taking the test, their socioeconomic status, their ethnicity, or other factors? The more independent and mediating variables you include in your statistical analyses, the more you will understand about your program's effectiveness.

As an example, you could assess whether parents' scores on an instrument measuring understanding of child development differed as a result of the time of instrument administration (at intake and at program exit), the age of the parent, and whether or not they completed the full program.
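One way an evaluator might set up that analysis is a regression model with interaction terms, so that the change from intake to exit is allowed to differ by age group and by program completion. The sketch below (Python with statsmodels) uses invented data in long format, with one row per parent per administration; the variable names and coding are assumptions for illustration only.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented long-format data: one row per parent per administration.
    # 'time' is 0 at intake and 1 at program exit; 'age_group' and 'completed'
    # are the potential mediating or conditioning variables discussed above.
    data = pd.DataFrame({
        "score":     [10, 14, 12, 13, 9, 15, 11, 11, 13, 18, 10, 12],
        "time":      [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
        "age_group": ["older", "older", "younger", "younger", "older", "older",
                      "younger", "younger", "older", "older", "younger", "younger"],
        "completed": [1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
    })

    # Does the score change from intake to exit, and does that change differ by
    # age group or by whether the parent completed the program? The interaction
    # terms (time * age_group, time * completed) carry that information.
    model = smf.ols("score ~ time * age_group + time * completed", data=data).fit()
    print(model.summary())

In this sketch, the coefficients on the interaction terms correspond to the question of whether some participants changed more than others.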

Suppose your statistical test indicates that, for your population as a whole, understanding of child development did not change significantly as a result of the time of instrument administration. That is, "program exit" scores were not significantly higher than "program intake" scores. This finding would presumably indicate that you were not successful in attaining this expected participant outcome.

However, lack of a significant change among your participants as a group does not necessarily rule out program effectiveness. If you include the potential mediating variable of age in your analysis, you may find that older mothers (ages 25 to 35) did demonstrate significant differences in before-and-after program scores but younger mothers (ages 17 to 24 years) did not. This would indicate that your program's interventions are effective for the older mothers in your target population, but not for the younger ones. You may then want to implement different types of interventions for the younger mothers, or you may want to limit your program recruitment to older mothers, who seem to benefit from what you are doing. And you would not have known this without the evaluation!

If you added the variable of whether or not participants completed the full program, you may find that those who completed the program were more likely to demonstrate increases in scores than mothers who did not complete the program and, further, that older mothers were more likely to complete the program than younger mothers. Based on this finding, you may want to find out why the younger mothers were not completing the program so that you can develop strategies for keeping younger mothers in the program.

 

Using the results of your analyses

The results of your analyses can answer your initial evaluation questions.

Are participant outcome objectives being attained?

If not, why not?

What factors contributed to attainment of objectives?

What factors were barriers to attainment of objectives?

These questions can be answered by interpreting the results of the statistical procedures performed on the participant outcome information. However, to fully address these questions, you will also need to look to the results of the analysis of program implementation information. This will provide a context for interpreting statistical results.

For example, if you find that one or more of your participant outcome objectives is not being attained, you may want to explain this finding. Sometimes you can look to your analysis of program implementation information to understand why this may have happened. You may find, for example, that your program was successful in attaining the outcome of an increase in parents' knowledge about child development, but was not successful in attaining the behavioral outcome of improved parenting skills.

In reviewing your program implementation information, you may find that some components of your program were successfully implemented as intended, but that the home-based counseling component of the program was not fully implemented as intended - and that the problems encountered in implementing the home-based counseling component included difficulty in recruiting qualified staff, extensive staff turnover in the counselor positions, and insufficient supervision for staff. Because the participant outcome most closely associated with this component was improving parenting skills, the absence of changes in this behavior may be attributable to the problems encountered in implementing this objective.

Preparing an evaluation report for program funders

The report to program funders will probably be the most comprehensive report you prepare. Often program funders will use your report to demonstrate the effectiveness of their grant initiatives and to support allocation of additional moneys for similar programs. A report that is useful for this purpose will need to include detailed information about the program, the evaluation design and methods, and the types of data analyses conducted.

A sample outline for an evaluation report for program funders is provided in this chapter. The outline is developed for a "final report" and assumes all the information collected on your program has been analyzed. However, this outline may also be used for interim reports, with different sections completed at various times during the evaluation and feedback provided to program personnel on the ongoing status of the evaluation.

Preparing an evaluation report for program staff and agency personnel

An evaluation report for program staff and agency personnel may be used to support management decisions about ongoing or future program efforts. This type of report may not need to include as much detail on the evaluation methodology but might focus instead on findings. The report could include the information noted in outline Sections II E (description of results of analysis of implementation information), III D (discussion of issues that affected the outcome evaluation and how they were addressed), III F (results of data analysis on participant outcome information), III G (discussion of results), and IV C (discussion of potential relationships between implementation and outcome evaluation results).

Preparing an evaluation report for potential funders and advocacy organizations

It is unlikely that potential funders (including State legislatures and national and local foundations) or advocacy organizations will want to read a lengthy report. In a report for this audience, you may want to focus on the information provided in Section IV of the outline. This report would consist of only a summary of both program implementation and participant outcome objectives and a discussion of the relationships between implementation policies, practices, procedures, and participant outcomes.

Disseminating the results of your evaluation

In addition to producing formal evaluation reports, you may want to take advantage of other opportunities to share what you have learned with others in your community or with the field in general.

You might want to consider drafting letters to community health and social services agencies or other organizations that may be interested in the activities and results of your work. Other ways to let people know what you have done include the following:

• Producing press releases and articles for local professional publications, such as newsletters and journals

• Making presentations at meetings on the results of your program at the local health department, university or public library, or other setting

• Listing your evaluation report or other evaluation-related publications in relevant databases, on electronic bulletin boards, and with clearinghouses

• Making telephone calls and scheduling meetings with similar programs to share your experience and results

Many of the resource materials listed in the appendix of this manual contain ideas and guidelines for producing different types of informational materials related to evaluations.

Sample Outline: Final Evaluation Report

Executive Summary

I. Introduction: General Description of the Project (1 page)
   A. Description of program components, including services or training delivered and target population for each service
   B. Description of collaborative efforts (if relevant), including the agencies participating in the collaboration and their various roles and responsibilities in the project
   C. Description of strategies for recruiting program participants (if relevant)
   D. Description of special issues relevant to serving the project's target population (or providing education and training to participants) and plans to address them
      1. Agency and staffing issues
      2. Participants' cultural background, socioeconomic status, literacy levels, and other characteristics
II. Evaluation of Program Implementation Objectives

   A. Description of the project's implementation objectives (measurable objectives)
      1. What you planned to do (planned services/interventions/training/education; duration and intensity of each service/intervention/training period)

      2. Whom you planned to have do it (planned staffing arrangements and qualifications/characteristics of staff)
      3. Target population (intended characteristics and number of members of the target population to be reached by each service/intervention/training/education effort and how you planned to recruit participants)
      4. Description of the project's objectives for collaborating with community agencies
         a. Planned collaborative arrangements
         b. Services/interventions/training provided by collaborating agencies
   B. Statement of evaluation questions (Were program implementation objectives attained? If not, why not? What were the barriers to and facilitators of attaining implementation objectives?)

      Examples:
      How successful was the project in implementing a parenting education class for mothers with substance abuse problems? What were the policies, practices, and procedures used to attain this objective? What were the barriers to, and facilitators of, attaining this objective?
      How successful was the project in recruiting the intended target population and serving the expected number of participants? What were the policies, practices, and procedures used to recruit and maintain participants in the project? What were the barriers to, and facilitators of, attaining this objective?
      How successful was the project in developing and implementing a multidisciplinary training curriculum? What were the practices and procedures used to develop and implement the curriculum? What were the barriers to, and facilitators of, attaining this objective?
      How successful was the project in establishing collaborative relationships with other agencies in the community? What were the policies, practices, and procedures used to attain this objective? What were the barriers to, and facilitators of, attaining this objective?
   C. Description of data collection methods and data collected for each evaluation question
      1. Description of data collected
      2. Description of methodology of data collection
      3. Description of data sources (such as project documents, project staff, project participants, and collaborating agency staff)
      4. Description of sampling procedures

   E. Procedures for data analyses
   F. Results of data analyses
      1. Significant and negative analyses results (including statement of established level of significance) for each outcome evaluation question
      2. Promising, but inconclusive analyses results
      3. Issues/problems relevant to the analyses
         Examples:
         Issues relevant to data collection procedures, particularly consistency in methods and consistency across data collectors
         Issues relevant to the number of participants served by the project and those included in the analysis
         Missing data or differences in size of sample for various analyses
   G. Discussion of results

      1. Interpretation of results for each evaluation question, including any explanatory information from the process evaluation
         a. The effectiveness of the project in attaining a specific outcome objective
         b. Variables associated with attainment of specific outcomes, such as characteristics of the population, characteristics of the service provider or trainer, duration or intensity of services or training, and characteristics of the service or training

      2. Issues relevant to interpretation of results
IV. Integration of Process and Outcome Evaluation Information
   A. Summary of process evaluation results
   B. Summary of outcome evaluation results
   C. Discussion of potential relationships between program implementation and participant outcome evaluation results

      Examples:
      Did particular policies, practices, or procedures used to attain program implementation objectives have different effects on participant outcomes?
      How did practices and procedures used to recruit and maintain participants in services affect participant outcomes?

      What collaboration practices and procedures were found to be related to attainment of expected community outcomes?
      Were particular training modules more effective than others in attaining expected outcomes for participants? If so, what were the features of these modules that may have contributed to their effectiveness (such as characteristics of the trainers, characteristics of the curriculum, the duration and intensity of the services)?

V. Recommendations to Program Administrators or Funders for Future Program and Evaluation Efforts

   Examples:
   Based on the evaluation findings, it is recommended that the particular service approach developed for this program be used to target mothers who are 25 years of age or older. Younger mothers do not appear to benefit from this type of approach.
   The evaluation findings suggest that traditional educational services are not as effective as self-esteem-building services in promoting attitude changes among adolescents regarding substance abuse. We recommend that future program development focus on providing these types of services to youth at risk for substance abuse.
   Based on the evaluation findings, it is recommended that funders provide sufficient funding for evaluation that will permit a long-term follow-up assessment of participants. The kinds of participant changes that the program may bring about may not be observable until 3 or 6 months after they leave the program.

