Evaluation & Monitoring Detailed Note


Glossary

baseline data - Initial information on program participants or other program aspects collected prior to receipt of services or program intervention. Baseline data are often gathered through intake interviews and observations and are used later for comparing measures that determine changes in your participants, program, or environment.

bias - (refers to statistical bias). Inaccurate representation that produces systematic error in a research finding. Bias may result in overestimating or underestimating certain characteristics of the population. It may result from incomplete information or invalid collection methods and may be intentional or unintentional.

comparison group - Individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your program participants. These individuals may not receive any services, or they may receive a different set of services, activities, or products. In no instance do they receive the same service(s) as those you are evaluating. As part of the evaluation process, the experimental (or treatment) group and the comparison group are assessed to determine which type of services, activities, or products provided by your program produced the expected changes.

confidentiality - Since an evaluation may entail exchanging or gathering privileged or sensitive information about individuals, a written form that assures evaluation participants that information provided will not be openly disclosed nor associated with them by name is important. Such a form ensures that their privacy will be maintained.

consultant - An individual who provides expert or professional advice or services, often in a paid capacity.

control group - A group of individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your program participants but who do not receive the program (services, products, or activities) you are evaluating. Participants are randomly assigned to either the treatment (or program) group or the control group. A control group is used to assess the effect of your program on participants as compared to similar individuals not receiving the services, products, or activities you are evaluating. The same information is collected for people in the control group as in the experimental group.

cost-benefit analysis - A type of analysis that involves comparing the relative costs of operating a program (program expenses, staff salaries, etc.) to the benefits (gains to individuals or society) it generates. For example, a program to reduce cigarette smoking would focus on the difference between the dollars expended on converting smokers into nonsmokers and the dollar savings from reduced medical care for smoking-related disease, days lost from work, and the like.

cost-effectiveness analysis - A type of analysis that involves comparing the relative costs of operating a program with the extent to which the program met its goals and objectives. For example, a program to reduce cigarette smoking would estimate the dollars that had to be expended in order to convert each smoker into a nonsmoker.
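
As a rough arithmetic illustration of the difference between these two analyses, here is a minimal sketch in Python; all of the figures are hypothetical.

```python
# Hypothetical smoking-cessation program figures, for illustration only.
program_cost = 250_000.0      # total dollars spent operating the program
smokers_converted = 500       # participants who quit smoking
savings_per_quitter = 900.0   # assumed savings (medical care, lost work days)

# Cost-benefit analysis: compare dollars spent with dollars saved.
total_benefit = smokers_converted * savings_per_quitter
net_benefit = total_benefit - program_cost
print(f"Net benefit: ${net_benefit:,.2f}")                     # $200,000.00

# Cost-effectiveness analysis: dollars spent per unit of the objective met.
cost_per_quitter = program_cost / smokers_converted
print(f"Cost per smoker converted: ${cost_per_quitter:,.2f}")  # $500.00
```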

cultural relevance - Demonstration that evaluation methods, procedures, and/or instruments are appropriate for the culture(s) to which they are applied. (Other terms include cultural competency and cultural sensitivity.)

culture - The shared values, traditions, norms, customs, arts, history, institutions, and experience of a group of people. The group may be identified by race, age, ethnicity, language, national origin, religion, or other social category or grouping.

data - Specific information or facts that are collected. A data item is usually a discrete or single measure. Examples of data items might include age, date of entry into program, or reading level. Sources of data may include case records, attendance records, referrals, assessments, interviews, and the like.

data analysis - The process of systematically applying statistical and logical techniques to describe, summarize, and compare data collected.

data collection instruments - Forms used to collect information for your evaluation. Forms may include interview instruments, intake forms, case logs, and attendance records. They may be developed specifically for your evaluation or modified from existing instruments. A professional evaluator can help select those that are most appropriate for your program.

data collection plan - A written document describing the specific procedures to be used to gather the evaluation information or data. The plan describes who collects the information, when and where it is collected, and how it is to be obtained.

database - An accumulation of information that has been systematically organized for easy access and analysis. Databases typically are computerized.


design - The overall plan and specification of the approach expected in a particular evaluation. The design describes how you plan to measure program components and how you plan to use the resulting measurements. A pre- and post-intervention design, with or without a comparison or control group, is the design needed to evaluate participant outcome objectives.

evaluation - A systematic method for collecting, analyzing, and using information to answer basic questions about your program. It helps to identify effective and ineffective services, practices, and approaches.

evaluator - An individual trained and experienced in designing and conducting an evaluation that uses tested and accepted research methodologies.

evaluation plan - A written document describing the overall approach or design you anticipate using to guide your evaluation. It includes what you plan to do, how you plan to do it, who will do it, when it will be done, and why the evaluation is being conducted. The evaluation plan serves as a guide for the evaluation.

evaluation team - The individuals, such as the outside evaluator, evaluation consultant, program manager, and program staff, who participate in planning and conducting the evaluation. Team members assist in developing the evaluation design, developing data collection instruments, collecting data, analyzing data, and writing the report.

exit data - Information gathered after an individual leaves your program. Exit data are often compared to baseline data. For example, a Head Start program may complete a developmental assessment of children at the end of the program year to measure a child's developmental progress by comparing developmental status at the beginning and end of the program year.

experimental group - A group of individuals receiving the treatment or intervention being evaluated or studied. Experimental groups (also known as treatment groups) are usually compared to a control or comparison group.

focus group - A group of 7-10 people convened for the purpose of obtaining perceptions or opinions, suggesting ideas, or recommending actions. A focus group is a method of collecting data for evaluation purposes.

formative evaluation - A type of process evaluation of new programs or services that focuses on collecting data on program operations so that needed changes or modifications can be made to the program in its early stages. Formative evaluations are used to provide feedback to staff about the program components that are working and those that need to be changed.

immediate outcomes - The changes in program participants' knowledge, attitudes, and behavior that occur early in the course of the program. They may occur at certain program points, or at program completion. For example, acknowledging substance abuse problems is an immediate outcome.

impact evaluation - A type of outcome evaluation that focuses on the broad, longer-term impacts or results of a program. For example, an impact evaluation could show that a decrease in a community's overall infant mortality rate was the direct result of a program designed to provide early prenatal care.

    in-kind service - Time or services donated to your program.

informed consent - A written agreement by program participants to voluntarily participate in an evaluation or study after having been advised of the purpose of the study, the type of information being collected, and how the information will be used.

instrument - A tool used to collect and organize information. Includes written instruments or measures, such as questionnaires, scales, and tests.

intermediate outcomes - Results or outcomes of a program or treatment that may require some time before they are realized. For example, part-time employment would be an intermediate outcome of a program designed to assist at-risk youth in becoming self-sufficient.

internal resources - An agency's or organization's resources, including staff skills and experiences and any information you already have available through current program activities.

intervention - The specific services, activities, or products developed and implemented to change or improve program participants' knowledge, attitudes, behaviors, or awareness.

    logic model - See the definition for program model.

management information system (MIS) - An information collection and analysis system, usually computerized, that facilitates access to program and participant information. It is usually designed and used for administrative purposes. The types of information typically included in an MIS are service delivery measures, such as sessions, contacts, or referrals; staff caseloads; client sociodemographic information; client status; and treatment outcomes. Many MISs can be adapted to meet evaluation requirements.


measurable terms - Specifying, through clear language, what it is you plan to do and how you plan to do it. Stating time periods for activities, "dosage" or frequency information (such as three 1-hour training sessions), and number of participants helps to make project activities measurable.

methodology - The way in which you find out information; a methodology describes how something will be (or was) done. The methodology includes the methods, procedures, and techniques used to collect and analyze information.

monitoring - The process of reviewing a program or activity to determine whether set standards or requirements are being met. Unlike evaluation, monitoring compares a program to an ideal or exact state.

objective - A specific statement that explains how a program goal will be accomplished. For example, an objective of the goal to improve adult literacy could be to provide tutoring to participants on a weekly basis for 6 months. An objective is stated so that changes, in this case an increase in a specific type of knowledge, can be measured and analyzed. Objectives are written using measurable terms and are time-limited.

outcome - Outcomes are a result of the program, services, or products you provide and refer to changes in knowledge, attitude, or behavior in participants. They are referred to as participant outcomes in this manual.

outcome evaluation - Evaluation designed to assess the extent to which a program or intervention affects participants according to specific variables or data elements. These results are expected to be caused by program activities and tested by comparison of results across sample groups in the target population. Also known as impact and summative evaluation.

outcome objectives - The changes in knowledge, attitudes, awareness, or behavior that you expect to occur as a result of implementing your program component, service, or activity. Also known as participant outcome objectives.

outside evaluator - An evaluator not affiliated with your agency prior to the program evaluation. Also known as a third-party evaluator.

participant - An individual, family, agency, neighborhood, community, or State receiving or participating in services provided by your program. Also known as a client or target population group.


pilot test - Preliminary test or study of your program or evaluation activities to try out procedures and make any needed changes or adjustments. For example, an agency may pilot test new data collection instruments that were developed for the evaluation.

posttest - A test or measurement taken after a service or intervention takes place. It is compared with the results of a pretest to show evidence of the effects or changes as a result of the service or intervention being evaluated.

pretest - A test or measurement taken before a service or intervention begins. It is compared with the results of a posttest to show evidence of the effects of the service or intervention being evaluated. A pretest can be used to obtain baseline data.
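
To make the pretest/posttest comparison concrete, here is a minimal sketch in Python using hypothetical reading scores; it assumes SciPy is available and uses a paired t-test, one common way to test whether the average change across the same participants is statistically significant.

```python
# Hypothetical pretest and posttest reading scores for the same
# eight participants, listed in the same order.
from scipy import stats

pretest_scores = [52, 61, 48, 70, 55, 63, 58, 49]
posttest_scores = [58, 66, 55, 74, 61, 62, 65, 57]

# A paired t-test compares each participant's posttest score with
# that same participant's pretest score.
t_stat, p_value = stats.ttest_rel(posttest_scores, pretest_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```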

process evaluation - An evaluation that examines the extent to which a program is operating as intended by assessing ongoing program operations and whether the targeted population is being served. A process evaluation involves collecting data that describe program operations in detail, including the types and levels of services provided; the location of service delivery; staffing; sociodemographic characteristics of participants; the community in which services are provided; and the linkages with collaborating agencies. A process evaluation helps program staff identify needed interventions and/or change program components to improve service delivery. It is also called formative or implementation evaluation.

program implementation objectives - What you plan to do in your program, component, or service. For example, providing therapeutic child care for 15 children and giving them 2 hot meals per day are program implementation objectives.

program model (or logic model) - A diagram showing the logic or rationale underlying your particular program. In other words, it is a picture of a program that shows what it is supposed to accomplish. A logic model describes the links between program objectives, program activities, and expected program outcomes.

qualitative data - Information that is difficult to measure, count, or express in numerical terms. For example, a participant's impression about the fairness of a program rule/requirement is qualitative data.

quantitative data - Information that can be expressed in numerical terms, counted, or compared on a scale. For example, improvement in a child's reading level as measured by a reading test.


random assignment - The assignment of individuals in the pool of all potential participants to either the experimental (treatment) or control group in such a manner that their assignment to a group is determined entirely by chance.
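
A minimal sketch of random assignment in Python follows; the pool of names is hypothetical, and shuffling before splitting leaves group membership entirely to chance.

```python
# Randomly assign a pool of potential participants to two groups.
import random

pool = ["Amina", "Ben", "Carla", "Deng", "Esther", "Farid", "Grace", "Hawa"]
random.shuffle(pool)            # chance alone now determines the order

midpoint = len(pool) // 2
treatment_group = pool[:midpoint]
control_group = pool[midpoint:]

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```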

reliability - Extent to which a measurement (such as an instrument or a data collection procedure) produces consistent results over repeated observations or administrations of the instrument under the same conditions each time. It is also important that reliability be maintained across data collectors; this is called interrater reliability.
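
As a minimal sketch of checking interrater reliability, the Python below computes simple percent agreement between two data collectors rating the same hypothetical cases; a chance-corrected statistic such as Cohen's kappa would be a common next step.

```python
# Two raters' judgments on the same eight cases (hypothetical data).
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes"]

# Percent agreement: the share of cases on which the raters agree.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"Interrater agreement: {percent_agreement:.0%}")  # 75%
```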

sample - A subset of participants selected from the total study population. Samples can be random (selected by chance, such as every 6th individual on a waiting list) or nonrandom (selected purposefully, such as all 2-year-olds in a Head Start program).
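
The sketch below illustrates the two selection styles mentioned above in Python, using a hypothetical waiting list; the "every 6th individual" rule is expressed as a list slice.

```python
# Draw a chance-based sample and an every-6th-person sample from a
# hypothetical waiting list of 60 people.
import random

waiting_list = [f"participant_{i:03d}" for i in range(1, 61)]

random_sample = random.sample(waiting_list, k=10)  # selected by chance
every_sixth = waiting_list[5::6]                   # 6th, 12th, 18th, ...

print("Random sample:", random_sample)
print("Every 6th person:", every_sixth)
```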

standardized instruments - Assessments, inventories, questionnaires, or interviews that have been tested with a large number of individuals and are designed to be administered to program participants in a consistent manner. Results of tests with program participants can be compared to reported results of the tests used with other populations.

statistical procedures - The set of standards and rules, based on statistical theory, by which one can describe and evaluate what has occurred.

statistical test - Type of statistical procedure, such as a t-test or Z-score, that is applied to data to determine whether your results are statistically significant (i.e., the outcome is not likely to have resulted by chance alone).
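
For instance, here is a minimal sketch of a t-test in Python comparing hypothetical outcome scores for a treatment group and a control group; it assumes SciPy is available.

```python
# Independent-samples t-test on hypothetical outcome scores.
from scipy import stats

treatment_scores = [74, 80, 68, 77, 83, 71, 79, 75]
control_scores = [70, 65, 72, 66, 69, 73, 64, 68]

t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
# A small p-value (conventionally below 0.05) suggests the difference
# between the groups is unlikely to be due to chance alone.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```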

summative evaluation - A type of outcome evaluation that assesses the results or outcomes of a program. This type of evaluation is concerned with a program's overall effectiveness.

treatment group - Also called an experimental group, a treatment group is composed of the individuals receiving the services, products, or activities (interventions) that you are evaluating.

validity - The extent to which a measurement instrument or test accurately measures what it is supposed to measure. For example, a reading test is a valid measure of reading skills, but it is not a valid measure of total language competency.

variables - Specific characteristics or attributes, such as behaviors, age, or test scores, that are expected to change or vary. For example, the level of adolescent drug use after being exposed to a drug prevention program is one variable that may be examined in an evaluation.


Introduction to Evaluation

Evaluation is a methodological area that is closely related to, but distinguishable from, more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders, and other skills that social research in general does not rely on as much. Here we introduce the idea of evaluation and some of the major terms and issues in the field.

    Definitions of Evaluation

    Probably the most frequently given definition is:

Evaluation is the systematic assessment of the worth or merit of some object

This definition is hardly perfect. There are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few. Better, perhaps, is a definition that emphasizes the information-processing and feedback functions of evaluation. For instance, one might say:

Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object

Both definitions agree that evaluation is a systematic endeavor, and both use the deliberately ambiguous term 'object', which could refer to a program, policy, technology, person, need, activity, and so on. The latter definition emphasizes acquiring and assessing information rather than assessing worth or merit, because all evaluation work involves collecting and sifting through data, making judgements about the validity of the information and of the inferences we derive from it, whether or not an assessment of worth or merit results.

    The Goals of Evaluation

The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences including sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

    Evaluation Strategies

'Evaluation strategies' means broad, overarching perspectives on evaluation. They encompass the most general groups or "camps" of evaluators, although, at its best, evaluation work borrows eclectically from the perspectives of all these camps. Four major groups of evaluation strategies are discussed here.

Scientific-experimental models are probably the most historically dominant evaluation strategies. Taking their values and methods from the sciences -- especially the social sciences -- they emphasize the desirability of impartiality, accuracy, objectivity, and the validity of the information generated. Included under scientific-experimental models would be: the tradition of experimental and quasi-experimental designs; objectives-based research that comes from education; econometrically-oriented perspectives including cost-effectiveness and cost-benefit analysis; and the recent articulation of theory-driven evaluation.

The second class of strategies are management-oriented systems models. Two of the most common of these are PERT, the Program Evaluation and Review Technique, and CPM, the Critical Path Method. Both have been widely used in business and government. It would also be legitimate to include the Logical Framework or "Logframe" model developed at the U.S. Agency for International Development, and general systems theory and operations research approaches, in this category. Two management-oriented systems models were originated by evaluators: the UTOS model, where U stands for Units, T for Treatments, O for Observations and S for Settings; and the CIPP model, where C stands for Context, I for Input, the first P for Process and the second P for Product. These management-oriented systems models emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities.

The third class of strategies are the qualitative/anthropological models. They emphasize the importance of observation, the need to retain the phenomenological quality of the evaluation context, and the value of subjective human interpretation in the evaluation process. Included in this category are the approaches known in evaluation as naturalistic or 'Fourth Generation' evaluation; the various qualitative schools; critical theory and art criticism approaches; and the 'grounded theory' approach of Glaser and Strauss, among others.

Finally, a fourth class of strategies is termed participant-oriented models. As the term suggests, they emphasize the central importance of the evaluation participants, especially clients and users of the program or technology. Client-centered and stakeholder approaches are examples of participant-oriented models, as are consumer-oriented evaluation systems.


With all of these strategies to choose from, how does one decide? Debates that rage within the evaluation profession -- and they do rage -- are generally battles between these different strategists, with each claiming the superiority of their position. In reality, most good evaluators are familiar with all four categories and borrow from each as the need arises. There is no inherent incompatibility between these broad strategies -- each of them brings something valuable to the evaluation table. In fact, in recent years attention has increasingly turned to how one might integrate results from evaluations that use different strategies, carried out from different perspectives, and using different methods. Clearly, there are no simple answers here. The problems are complex and the methodologies needed will and should be varied.

    Types of Evaluation

There are many different types of evaluations, depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated -- they help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object -- they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and estimating the relative costs associated with the object.

Formative evaluation includes several evaluation types:

needs assessment determines who needs the program, how great the need is, and what might work to meet the need

evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness

structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes

implementation evaluation monitors the fidelity of the program or technology delivery

process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

    Summative evaluation can also be subdivided:

outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes

impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole

cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values


secondary analysis reexamines existing data to address new questions or use methods not previously employed

meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question

    Evaluation Questions and Methods

Evaluators ask many different kinds of questions and use a variety of methods to address them. These are considered within the framework of formative and summative evaluation as presented above.

In formative research the major questions and methodologies are:

What is the definition and scope of the problem or issue, or what's the question?

Formulating and conceptualizing methods might be used, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.

    Where is the problem and how big or serious is it?

The most common method used here is "needs assessment," which can include: analysis of existing data sources, and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.

How should the program or technology be delivered to address the problem?

Some of the methods already listed apply here, as do detailing methodologies like simulation techniques; multivariate methods like multiattribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.

    How well is the program or technology delivered?

Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.

The questions and methods addressed under summative evaluation include:

    What type of evaluation is feasible?


Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.

    What was the effectiveness of the program or technology?

One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.

    What is the net impact of the program?

Econometric methods for assessing cost-effectiveness and cost/benefits would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.

Clearly, this introduction is not meant to be exhaustive. Each of these methods, and the many not mentioned, are supported by an extensive methodological research literature. This is a formidable set of tools. But the need to improve, update, and adapt these methods to changing circumstances means that methodological research and development needs to have a major place in evaluation work.

The Nature of Monitoring and Evaluation

Definition and Purpose

This explains what monitoring is and the purposes it serves.

What is Monitoring?

Monitoring is the regular observation and recording of activities taking place in a project or program. It is a process of routinely gathering information on all aspects of the project. To monitor is to check on how project activities are progressing. It is observation; systematic and purposeful observation. Monitoring also involves giving feedback about the progress of the project to the donors, implementers and beneficiaries of the project. Reporting enables the gathered information to be used in making decisions for improving project performance.

Purpose of Monitoring:

Monitoring is very important in project planning and implementation. It is like watching where you are going while riding a bicycle; you can adjust as you go along and ensure that you are on the right track.


Monitoring provides information that will be useful in:

Analyzing the situation in the community and its project;
Determining whether the inputs in the project are well utilized;
Identifying problems facing the community or project and finding solutions;
Ensuring that all activities are carried out properly, by the right people, and on time;
Applying lessons from one project experience to another; and
Determining whether the way the project was planned is the most appropriate way of solving the problem at hand.

Monitoring, Planning and Implementation

Integrating Monitoring at All Stages

Monitoring is an integral part of every project, from start to finish. A project is a series of activities (investments) that aim at solving particular problems within a given time frame and in a particular location. The investments include time, money, and human and material resources. Before achieving its objectives, a project goes through several stages. Monitoring should take place at, and be integrated into, all stages of the project cycle.

The three basic stages include:

Project planning (situation analysis, problem identification, definition of the goal, formulating strategies, designing a work plan, and budgeting);
Project implementation (mobilization, utilization and control of resources, and project operation); and
Project evaluation.

Monitoring should be executed by all individuals and institutions that have an interest (stakeholders) in the project. To efficiently implement a project, the people planning and implementing it should plan for all the interrelated stages from the beginning.

In the "Handbook for Mobilizers," we said the key questions of planning and management were: (1) What do we want? (2) What do we have? (3) How do we use what we have to get what we want? and (4) What will happen when we do? They can be modified, using "where" instead of "what," while the principles remain the same. The questions become:

Where are we?
Where do we want to go?
How do we get there? and
What happens as we do?

Situation Analysis and Problem Definition:

This asks the question, "Where are we?" (What do we have?)

Situation analysis is a process through which the general characteristics and problems of the community are identified. It involves the identification and definition of the characteristics and problems specific to particular categories of people in the community. These could be people with disabilities, women, youth, peasants, traders, and artisans. Situation analysis is done through collecting the information necessary to understand the community as a whole and individuals within the community. Information should be collected on what happened in the past, what is currently happening, and what is expected to happen in the future, based on the community's experiences.

Information necessary to understand the community includes, among others:

Population characteristics (eg sex, age, tribe, religion and family sizes);
Political and administrative structures (eg community committees and local councils);
Economic activities (including agriculture, trade and fishing);
Cultural traditions (eg inheritance and the clan system), transitions (eg marriages, funeral rites), and rites of passage (eg circumcision);
On-going projects, like those of sub-county, district, central Government, non-Governmental organizations (NGOs), and community-based organizations (CBOs);
Socio-economic infrastructure or communal facilities (eg schools, health units, and access roads); and
Community organizations (eg savings and credit groups, women's groups, self-help groups and burial groups), their functions and activities.

Information for situation analysis and problem definition should be collected with the involvement of the community members, using several techniques. This is to ensure valid, reliable and comprehensive information about the community and its problems. Some of the following techniques could be used:

Documents review;
Surveys;
Discussions with individuals, specific groups and the community as a whole;
Interviews;
Observations;
Listening to people;
Brainstorming;
Informal conversations;
Mapping of village social aspects, resources, services and opportunities;
Transect walks and maps; and
Problem tree analysis.

Situation analysis is very important before any attempt to solve the problem because:

It provides an opportunity to understand the dynamics of the community;
It helps to clarify social, economic, cultural and political conditions;
It provides an initial opportunity for people's participation in all project activities;
It enables the definition of community problems and solutions; and
It provides information needed to determine objectives, plan and implement.

Situation analysis should be continuous, in order to provide additional information during project implementation, monitoring and re-planning. Situation analysis and problem identification should be monitored to ensure that correct and updated information is always available about the community and its problems. Since monitoring should be integrated into all aspects or phases of the process, let us go through each phase and look at the monitoring concerns associated with each.

Setting Goals and Objectives:

Goal setting asks the question, "Where do we want to go?" (What do we want?)

Before any attempt to implement a project, the planners, implementers and beneficiaries should set up goals and objectives. (See "Brainstorm" for a participatory method of doing this.) A goal is a general statement of what should be done to solve a problem. It defines, broadly, what is expected out of a project. A goal emerges from the problem that needs to be addressed and signals the final destination of a project. Objectives are finite sub-sets of a goal and should be specific, in order to be achievable.

The objectives should be "SMART." They should be:

Specific: clear about what, where, when, and how the situation will be changed;
Measurable: able to quantify the targets and benefits;
Achievable: able to attain the objectives (knowing the resources and capacities at the disposal of the community);
Realistic: able to obtain the level of change reflected in the objective; and
Time-bound: stating the time period within which each will be accomplished.

To achieve the objectives of a project, it is essential to assess the resources available within the community and those that can be accessed from external sources. (See "Revealing Hidden Resources.") The planners, implementers and community members should also identify the constraints they may face in executing the project and how they can overcome them. Based on the extent of the constraints and positive forces, the implementers may decide to continue with the project or to drop it. The goals and objectives provide the basis for monitoring and evaluating a project. They are the yardsticks upon which project success or failure is measured.

Generating Structures and Strategies:

This aspect asks the third key question, "How do we get there?" (How do we get what we want with what we have?)

The planners and implementers (communities and their enablers) should decide on how they are going to implement a project; this is the strategy. Agreeing on the strategy involves determining all the items (inputs) that are needed to carry out the project, and defining the different groups or individuals and the particular roles they are to play in the project. These groups and individuals that undertake particular roles in the project are called "actors."

Generating the structures and strategies therefore involves:

Discussing and agreeing on the activities to be undertaken during implementation;
Defining the different actors, within and outside the community, and their roles; and
Defining and distributing the costs and materials necessary to implement the project.

After establishing the appropriateness of the decisions, the executive should discuss and agree with all actors on how the project will be implemented. This is called designing a work plan. (How do we get what we want?) A work plan is a description of the necessary activities, set out in stages, with a rough indication of the timing.

In order to draw up a good work plan (a minimal sketch follows below), the implementers should:

List all the tasks required to implement a project;
Put the tasks in the order in which they will be implemented;
Show the allocation of responsibilities to the actors; and
Give the timing of each activity.
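
Here is a minimal sketch of such a work plan as a simple data structure in Python; the tasks, actors and timings are hypothetical, borrowed from the school construction example used later in this note.

```python
# A work plan: tasks in implementation order, each with a responsible
# actor and a rough timing (all entries hypothetical).
work_plan = [
    {"order": 1, "task": "Mobilize the community", "actor": "Project committee", "timing": "Week 1"},
    {"order": 2, "task": "Collect materials",      "actor": "Volunteers",        "timing": "Weeks 2-3"},
    {"order": 3, "task": "Make bricks",            "actor": "Local artisans",    "timing": "Weeks 4-6"},
    {"order": 4, "task": "Build classroom walls",  "actor": "Local artisans",    "timing": "Weeks 7-10"},
]

for item in sorted(work_plan, key=lambda t: t["order"]):
    print(f"{item['order']}. {item['task']} - {item['actor']} ({item['timing']})")
```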

The work plan is a guide to project implementation and a basis for project monitoring. It therefore helps to:

Finish the project on time;
Do the right things in the right order;
Identify who will be responsible for what activity; and
Determine when to start project implementation.

The implementers and planners have to agree on monitoring indicators. Monitoring indicators are quantitative and qualitative signs (criteria) for measuring or assessing the achievement of project activities and objectives.

The indicators will show the extent to which the objectives of every activity have been achieved. Monitoring indicators should be explicit, pertinent and objectively verifiable.

Monitoring indicators are of four types, namely:

Input indicators: describe what goes into the project (eg number of bricks brought on site and amount of money spent);
Output indicators: describe the project activity (eg number of classrooms built);
Outcome indicators: describe the product of the activity (eg number of pupils attending the school); and
Impact indicators: measure change in conditions of the community (eg reduced illiteracy in the community).
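
A minimal sketch of how a monitoring team might record these four indicator types for the school construction example is given below in Python; the measures and targets are hypothetical.

```python
# The four indicator types, each with an example measure and a
# hypothetical target to monitor against.
indicators = {
    "input":   {"measure": "bricks brought on site",      "target": 10_000},
    "output":  {"measure": "classrooms built",            "target": 4},
    "outcome": {"measure": "pupils attending the school", "target": 160},
    "impact":  {"measure": "reduction in illiteracy (%)", "target": 15},
}

for kind, ind in indicators.items():
    print(f"{kind} indicator: {ind['measure']} (target: {ind['target']})")
```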

Writing down the structures and strategies helps in project monitoring because they specify what will be done during project implementation. Planning must indicate what should be monitored, who should monitor, and how monitoring should be undertaken.

Implementation:

Monitoring implementation asks the fourth key question, "What happens when we do?"

Implementation is the stage where all the planned activities are put into action. Before the implementation of a project, the implementers (spearheaded by the project committee or executive) should identify their strengths and weaknesses (internal forces), and the opportunities and threats (external forces). The strengths and opportunities are positive forces that should be exploited to efficiently implement a project. The weaknesses and threats are hindrances that can hamper project implementation. The implementers should ensure that they devise means of overcoming them. Monitoring is important at this implementation phase to ensure that the project is implemented as per the schedule. This is a continuous process that should be put in place before project implementation starts.

As such, the monitoring activities should appear on the work plan and should involve all stakeholders. If activities are not going well, arrangements should be made to identify the problems so that they can be corrected. Monitoring is also important to ensure that activities are implemented as planned. This helps the implementers to measure how well they are achieving their targets. This is based on the understanding that the process through which a project is implemented has a great effect on its use, operation and maintenance.

Therefore, merely implementing the project on target is not by itself satisfactory; hence the need for implementers to ask themselves and answer the question, "How well do we get there?" (What happens when we do?)

Summary of the Relationship:

The above illustrates the close relationship between monitoring, planning and implementation. It demonstrates that:

Planning describes the ways in which implementation and monitoring should be done;


Implementation and monitoring are guided by the project work plan; and
Monitoring provides information for project planning and implementation.

There is a close and mutually reinforcing (supportive) relationship between planning, implementation and monitoring. None of the three can be done in isolation from the other two; and when doing one of the three, the planners and implementers have to cater for the other two.

Beyond Monitoring: Evaluation

Evaluating Achievements

The Meaning of Evaluation:

Evaluation is a process of judging the value of what a project or program has achieved, particularly in relation to the activities planned and the overall objectives. It involves value judgment, and hence it is different from monitoring (which is the observation and reporting of observations).

Purpose of Evaluation:

Evaluation is important in order to:

Identify the constraints or bottlenecks that hinder the project in achieving its objectives, so that solutions to the constraints can be identified and implemented;
Assess the benefits and costs that accrue to the intended direct and indirect beneficiaries of the project. If the project implemented is, for example, the protection of a spring, evaluation highlights the people who fetch and use the water, as well as the people whose land is wasted and whose crops are destroyed in the process of water collection;
Draw lessons from the project implementation experience and use those lessons in the re-planning of projects in that community and elsewhere; and
Provide a clear picture of the extent to which the intended objectives of the activities and project have been realized.

The Process of Evaluation:

Evaluation can and should be done: (a) before, (b) during, and (c) after implementation.

Before project implementation, evaluation is needed in order to:

Assess the possible consequences of the planned project(s) for the people in the community over a period of time;
Make a final decision on which project alternative should be implemented; and
Assist in making decisions on how the project will be implemented.

During project implementation: Evaluation should be a continuous process and should take place in all project implementation activities. This enables the project planners and implementers to progressively review the project strategies according to the changing circumstances, in order to attain the desired activity and project objectives.

    ..

    After project implementation: This is to retrace the project planning andimplementationprocess, and results after project implementation. This further helps in:

    Identifying constraints or bottlenecks inherent in the implementation phase; Assessing the actual benefits and the number of people who benefited; Providing ideas on the strength of the project, for replication; and

    Providing a clear picture of the extent to which the intended objectivesof the project have been realized.

Management Information and Information Management

How to handle the information that monitoring generates.

Management information and information management are different; management information is a kind of information (the data), while information management is a kind of management (the system). Information management is the process of analyzing and using information which has been collected and stored, in order to enable managers (at all levels) to make informed decisions. Management information is the information needed in order to make management decisions.

Monitoring provides information about what is going on in the project. This information is collected during the planning and implementation phases. The information helps to detect if anything is going wrong in the project, so that management can find solutions to ensure success.

The Importance of Management Information:

Management information is important in order to:

Make the decisions necessary to improve the management of facilities and services; and
Implement participatory planning, implementation, monitoring and evaluation.

How to Use Information Management:


To be able to use information to make management decisions, the information should be managed (collected, stored and analyzed). Whereas information management (the process of collecting and storing information) and management information (the information needed to make informed decisions) are different, they always reinforce each other and cannot be separated in day-to-day operations. Information management therefore involves:

Determining the information needed;
Collecting and analyzing the information;
Storing and retrieving it when needed;
Using it; and
Disseminating it.

Determining the Information Needed for Management: During project planning, management and monitoring, much information is generated. Some is needed for making management decisions on the spot; other information is needed for later management decisions. A good management information system should therefore assist the project managers in knowing what information they need to collect for different management decisions at different times.

Collecting and Analyzing Information: Information can be obtained from the reports of technical people, village books, forms filled in by the different actors, community meetings, interviews, observation and community maps.

Storing Information: It is important to store information for further reference. Information can be stored in the village book, project reports, forms, and in people's minds. The major principle in information storage is the ease with which it can be retrieved.

Using Information: Information can be used for solving community problems, determining resources (amount and nature), soliciting support, and determining future projects.

Dissemination or Flow of Information: For information to be adequately used, it needs to be shared with other stakeholders or users. The other stakeholders can also use this information for their own management decisions, and they can help the one collecting the information to draw meaning and use from it for management purposes.

Information should be shared between the village, parish, sub-county, district, national office, NGOs and the donor. Management information is part and parcel of monitoring, because such information is obtained during monitoring and helps in the planning and implementation of monitoring activities.

Whether it is from the staff or stakeholders, one of the most effective ways of getting useful monitoring information is through the Annual Review. Although the Annual Review is usually described in its role of obtaining participatory management information, it is equally applicable to obtaining monitoring information.

Participation in Project Monitoring

The Roles of Stakeholders

All stakeholders have a stake in knowing how well things are going. Monitoring is a vital management and implementation role that cannot be left to only one stakeholder. As many individuals and institutions as possible that have any interest in the project, at all levels, should participate in monitoring. As with community participation and participatory management, participation in monitoring does not happen spontaneously. The persons whom you want to participate must be encouraged and trained to participate.

Advantages of Participation:

The advantages of participation in monitoring include: (a) a common understanding, (b) enhanced accountability, (c) better decisions, (d) performance improvement, (e) improved design, and (f) more information.

Common Understanding of Problems and Identification of Solutions: Participative monitoring helps stakeholders to reach a shared understanding of the problems facing the community or project (their causes, magnitude, effects and implications). This facilitates the identification of solutions. These solutions are more likely to be appropriate because they are derived from the current situation.


Benefits the Target Groups and Enhances Accountability: Participation in monitoring ensures that the people for whom the project was intended are the ones benefiting from it. It increases people's awareness of their rights, which elicits their participation in guarding against the misappropriation of project resources. Guarding against resource misappropriation makes project implementation less expensive.

Making Appropriate Decisions: Monitoring provides the information necessary for making management decisions. When many people participate in monitoring, they have participated in providing management information and have contributed to decision making. The resulting decisions are more likely to be acceptable and relevant to the majority of the population. This makes human and resource mobilization for project implementation easier.

Performance Improvement: During monitoring, if a performance deviation is discovered, solutions can be devised. Finding appropriate decisions that can be implemented requires the participation of those people who will put the solution into practice. Therefore participation in monitoring can help improve project performance.

Design of Projects: The information generated during project monitoring helps in re-designing projects in that locality to make them more acceptable. The lessons learned can also be used in the design of similar projects elsewhere.

Collection of Information: If many people participate in monitoring, they are more likely to come up with more accurate information, because information that is omitted by one party can be collected by another. Each stakeholder puts varying emphasis on the different aspects of the project, using different methods. Moreover, one party knowing that the information they are collecting will be verified forestalls deliberate wrong reporting.

Challenges of Participation in Monitoring:

Whereas participation in monitoring has a number of virtues, it is likely to face a number of challenges. The challenges include: (a) high costs, (b) variations in information, and (c) inaccuracies.

High Initial Costs: Participation in monitoring requires many resources (eg time, transport and performance-related allowances). It is a demanding process that can over-stretch the volunteer spirit at the community level and financial resources at the district and national levels. Therefore it must be simple and focused on vital elements.

Quantity and Variety of Information: Monitoring requires the collection, documentation and sharing of a wide range of information. This requires many skills that may be lacking in the communities. It therefore necessitates much time and many resources for capacity building. It also risks wrong reporting.

Inaccuracy of Information: Some stakeholders, from the community to the national level, may intentionally provide wrong information, to depict better performance and outputs, or because of community or project differences. Counteracting wrong or incorrect reporting needs sensitization and consensus building that is difficult to attain.

The advantages of participation in monitoring are evidently more than the challenges. It is therefore necessary to encourage and support participatory monitoring as we devise means to counteract the challenges.

Levels of Monitoring

Community, District, National, Donor

Monitoring methods differ at each level, and complement each other. There is no universal vocabulary for the varying levels of government and administration from the community level to the national level; terminology varies from country to country. I cannot, therefore, use a set of terms that can be applied in many countries, although the principles and methods of community empowerment are universally similar (with minor variations between countries). Since these training modules were mainly developed in Uganda, I am using the terminology of Uganda.

When Museveni came to power, the levels ranged from Resistance Council Level One (community or village) up to Resistance Council Level Five (district). More recently, Uganda reverted to a former terminology with colonial vestiges: 1 = village, 2 = parish, 3 = sub-county, 4 = county and 5 = district. The precise terms are not important here; what is important is that there are monitoring roles that range from the village to the national level. Use whatever terms are applicable to your situation.

    . ..

Monitoring should be carried out by all stake holders at all levels. Each level, however, has specific objectives, methods and therefore roles for monitoring. For monitoring to be effective, there must be a mechanism for giving feedback to all people involved at all levels (community, district, national and donor).

Monitoring at Community Level:

The community level is where implementation and utilization of the benefits of the project take place. In most cases it is the village and parish level. At this level, the major purpose of monitoring is to improve the implementation and management of projects. The interest of the community as a whole in monitoring school construction, for example, is to ensure that the construction of the school (an output) is being done as planned. The specific objectives for monitoring at this level therefore include: (a) ensuring that the projects are implemented on time, (b) that they are of good quality, and (c) that the project inputs are well utilized.

Monitoring at this level involves:

Identifying a community project. The project should be identified in a participatory manner, to reflect community needs and to stimulate people's interest in its implementation and monitoring. If the process of project identification is not well done and does not reflect community interests, it is likely that the communities will not participate in the monitoring of the implementation activities;

Identifying the team(s) to spearhead the monitoring of the project in the community. The roles of each team, how they should carry out the monitoring process, and the use and sharing of the information generated with other groups within and beyond the community, should be specified and explained;

Designing a work plan that guides project monitoring. The work plan should specify the activities in the order in which they will be executed and the individuals who will execute them. This helps the people monitoring to know which activities should be carried out by which individuals in a given period of time. If the activities are not carried out, the people monitoring have a basis for coming up with solutions (see the sketch at the end of this section);

Determining the major activities from the work plan. Whereas all activities in the work plan are necessary and should be monitored, it is useful to identify the major activities on the basis of which objectives and indicators will be set. For example, if the preparatory activities in a school construction project include community mobilization, borrowing hoes from the neighboring village, digging soil and fetching water for brick making, the major activity summarizing all these sub-activities could be brick making;

Determining the indicators for each activity objective. The indicators help the monitoring team to tell how far they have gone in achieving the objectives of each activity. In our example, one indicator could be the number of bricks made; and

Comparing what is happening with what was planned. This should be done throughout the process, to tell whether the project is on schedule and proceeding as planned. The monitors should check the indicators to measure how far they have come in achieving the objectives. This should involve looking at the quality of work to ensure that it is good. The monitoring team may need to involve a technical person, such as a local artisan or a technician from the district, to ascertain the quality of the project (if it involves construction).

The monitoring team should then agree on how often they should visit the project site as a means of verifying what is taking place. For a community project, monitoring visits should be carried out at least once a week to avoid big deviations from the work plan. During the project visits, the team should look at what is happening (observe) and talk to everybody who is involved in the project;

For each activity, the monitoring team should identify the objectives. For example, the objective of brick making as an activity during the school construction project could be: to make ten thousand bricks by the end of February.

Whenever a monitoring visit is carried out, those monitoring should write down their findings. They can use the form attached in the annex or agree on any other reporting format that captures the findings of the exercise in relation to the work plan. The findings from the monitoring visits should be discussed with other members of the implementation committee. The monitoring and implementation teams should use the information collected to detect and solve the problems facing the project.

The monitoring and implementation teams should store the information well and use it for future actions and to inform other stake holders. At each site there should be a file in which copies of monitoring reports and other documents related to the project are kept.
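As an illustration only (this sketch is not part of the original training module), the step of comparing what is happening with what was planned can be expressed in a few lines of Python. All activity names, teams, targets and figures below are hypothetical, loosely based on the brick-making example above.

# Minimal sketch: comparing monitored progress against a community work plan.
# All activities, teams, targets and observed figures are invented.

work_plan = [
    # (major activity, responsible team, target, unit, deadline)
    ("brick making", "construction committee", 10_000, "bricks", "end of February"),
    ("wall construction", "construction committee", 4, "walls", "end of April"),
]

# Figures a monitoring team might record during a weekly site visit.
observed = {"brick making": 6_500, "wall construction": 0}

for activity, team, target, unit, deadline in work_plan:
    done = observed.get(activity, 0)
    percent = 100 * done / target  # indicator: share of the target achieved
    print(f"{activity} ({team}): {done}/{target} {unit} "
          f"= {percent:.0f}% of target, due {deadline}")

A weekly run of such a comparison gives the monitoring team the same information the note asks for on paper: which activities are behind schedule, and by how much.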

Monitoring at District and Sub-County Level:

The district and sub-county officials should get information from the community monitoring (monitoring performance in relation to turning the inputs into outputs). They should also monitor the outcome of the project (eg the effect of school construction on enrolment levels). The district should also monitor the increase in the strength, capacity and power of the target community to stimulate its own development. The objectives therefore include: supporting the improvement in project performance, and measuring the applicability of the project design in relation to community strengthening.

The methods for monitoring that can be adopted at this level include (a) routine monitoring and supervisory support, and (b) qualitative enquiry.

Routine Monitoring and Supervisory Support: This requires the District Project Coordinator, Community Development Assistant, other technical staff, and politicians at the district and sub-county to visit the project sites to ascertain what is happening in relation to what was planned.

A copy of the work plan and the community monitoring reports should be kept in the project site file. This allows whoever wants to compare progress with the work plan, or to read the comments of the monitoring team, to do so without necessarily tracing the members of the monitoring team, who may not be readily available.

During routine monitoring, discussions should be held with all the people involved in the implementation and monitoring of the project. Look at the manner in which each team performs its duties (as a means of verifying the increase in community capacity).

Make and record comments about good and bad elements of the project. Recommend solutions, showing who should undertake them, their financial and time implications, and the negative effects that may accrue to the project if they are not taken. A copy of the comments should be left in the project site file/book and another copy discussed and filed at the district.

The sub-counties and districts should organize discussions of project progress at least once a month. They should also file and submit a project progress report as part of the routine monthly reporting to the district and national office respectively.

The major issues to look at during the district and sub-county routine monitoring include:

Levels of actual community, sub-county, district and donor contributions (including funds, materials, time and expertise);
Timely implementation and quality of projects;
Appropriate use and accountability of community and donor resources;
Level of community involvement in the project;
Commitment and performance of community committees; and
Timely use of information generated through the community routine monitoring.
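Purely as an illustration (the note itself prescribes paper forms, not an electronic format), the routine-monitoring issues listed above could be kept as one structured record per project per month; every field name below is hypothetical.

# Illustrative sketch of a monthly routine-monitoring record for one project.
# The fields mirror the issue list above; none of the names are prescribed.
from dataclasses import dataclass, field

@dataclass
class RoutineMonitoringRecord:
    project: str
    month: str
    contributions: dict = field(default_factory=dict)  # actor -> what was contributed
    implemented_on_time: bool = False
    quality_acceptable: bool = False
    resources_accounted_for: bool = False
    community_involvement: str = ""       # brief qualitative note
    committee_performance: str = ""       # brief qualitative note
    community_reports_used: bool = False  # timely use of community monitoring information

# Example entry for one project in one month (figures invented).
record = RoutineMonitoringRecord(
    project="school construction",
    month="March",
    contributions={"community": "bricks and labour", "district": "cement"},
    implemented_on_time=True,
    quality_acceptable=True,
)
print(record)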

Qualitative Enquiry: The district, in liaison with the sub-county, should organize Focus Group Discussions, Key Informant Interviews, and Community Group Discussions with communities and other key informants at least twice a year. These enquiries would help the district to:

Verify some of the information collected by the community and district;
Get information on issues that are not captured during the routine monitoring;
Discuss on the spot with the communities possible solutions to problems hindering project performance; and
Discuss with the community, learn from them, and explain capacity building issues.

These qualitative enquiries should be simple and should involve the community members, both to reduce costs and to enable the community members to learn how to conduct them, as a means of community strengthening. The outputs should be analyzed in relation to the community and routine district findings and should also be used to discuss solutions. Findings should be well documented and shared at the national level in order to feed national level management information.

The major issues during the qualitative enquiries include:

Establishing whether the projects were the community priorities (and the appropriateness of the project identification);
Community members' knowledge and appreciation of the project methodology, and their willingness to participate and contribute to the project activities;
Effectiveness of the community members during project monitoring;
Opinions of community members on quality and use of resources (accountability);
Skills (eg decision making capacity and negotiation skills) acquired by specific categories of people in the community during project implementation; and
Community knowledge of their rights and obligations.

Before the qualitative enquiries, each district and sub-county should identify and discuss any management information gaps to form periodic themes. Specific designs would also be agreed upon at this stage.

Monitoring at National and Donor Level:

Monitoring at the national and donor level is done to find out whether project inputs are well used (desired outputs are being realized), whether the project design is appropriate, and for learning.


The objectives of monitoring at this level include:

To ensure that the inputs are efficiently and effectively utilized, and that the planned activities are being realized;
To measure the applicability of the methodology to community strengthening; and
To draw lessons from the project intervention for future projects in the country and beyond. The lessons will provide the basis for replication of the project methodology.

The methods for monitoring at this level include: (a) routine monitoring, (b) action research and qualitative enquiries, and (c) surveys.

Routine Monitoring: Routine monitoring should be done on a quarterly basis by project staff and the ministry's planning unit to check on the levels of activities and objectives. Since the national level gets information about the projects and activities through monthly district progress reports, national routine monitoring should be limited in scope. It should cover aspects that appear contradictory, problematic, very satisfactory or unique. These enable the national office to provide the necessary support and draw lessons.

Action Research and Qualitative Enquiries: The national office should carry out in-depth qualitative enquiries once a year. These should focus on drawing lessons from the project design and implementation experiences for replication.

The major issues at this level therefore include:

The contribution of community projects to national and donor priorities;
Satisfaction derived by the communities (levels of service and facility utilization);
Capacity of the community to operate and maintain the services and facilities;
Ability of the community members to pay for the services and facilities;
Appropriateness of the project methodology in light of national policies;
Leadership, authority and confidence within communities;
Capacity building and functioning of Local Governments and District personnel;
Representation (especially of women) in the community decision making process;
Replication of experiences in other projects and training institutions;
Capacity building of existing individuals and institutions; and
The functioning of the monitoring and management information systems.

Surveys: Surveys should also be conducted to gather quantifiable data and supplement the information generated through other methods. These can be contracted out to research institutions, such as universities.
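As a purely illustrative aside (not part of the original note), many survey indicators at this level are proportions, such as the share of respondents aware of the project methodology. The figures below are hypothetical, and a normal approximation for the confidence interval is assumed.

# Minimal sketch: estimating a survey proportion with a 95% confidence interval.
# Sample figures are invented; the normal approximation assumes a large sample.
import math

respondents = 400  # people interviewed in the survey
aware = 268        # respondents aware of the project methodology

p = aware / respondents                    # point estimate of the proportion
se = math.sqrt(p * (1 - p) / respondents)  # standard error of the proportion
margin = 1.96 * se                         # half-width of the 95% interval

print(f"Aware of methodology: {100 * p:.1f}% "
      f"(95% CI: {100 * (p - margin):.1f}% to {100 * (p + margin):.1f}%)")

Reporting the interval alongside the point estimate guards against reading too much into small differences between survey rounds.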

Monitoring Issues and Procedures at Different Levels:


Monitoring issues and procedures are described here for each level. This is to emphasize that the stake holders at each level should spearhead, but not exclusively carry out, all monitoring. In practice, the issues and procedures of the different stake holders overlap. Each stake holder should support the others in their monitoring responsibilities. The issues mentioned here are not exhaustive but indicate what should be done. Each level should therefore collect information on any other issues deemed relevant to its particular situation.

These are presented as three tables: (1) community level, (2) district level, and (3) national level, indicating the key issues at each level.

Community Level:

At the community level the three main actors who have a stake in the community strengthening intervention are the:

CBO Executive or Implementing Committee (CIC) of the community project;
Community mobilizers; and
Parish Development Committee (PDC).

The following table looks at the main issues of interest, monitoring indicators, means of observing, frequency, and suggested monitoring procedures, for each of these three stake holders.

Stake Holder | Issue | Monitoring Indicator | Means of Observing | Frequency | Monitoring Procedure
Executive Committee | Timely implementation of projects | Number of project activities implemented on time | Routine project visits | Weekly | Members use the routine monitoring form
Executive Committee | Appropriate use of project resources | No materials misused | Routine project visits; project quality checks | Weekly | Members use the routine monitoring form; check quality using the technician's guidelines
Executive Committee | Proper collection and storage of project information | Percentage of projects with project site files; number of reports in site files | Reviewing the project site files | Weekly | Members of the project committee review the project site file, reports and comments
Community Mobilizers | Realistic project implementation work plan | Number of project work plans with well sequenced activities | Compare activities in the work plan with how they are implemented | Monthly | Mobilizers (1) review the sequence of project work plans with a technical person, and (2) conduct monthly project site visits
Community Mobilizers | Community participation in project activities | Number of persons performing their roles; number of activities; amount of resources provided by the community | Project site visits | Monthly | Discussions with people about their contributions
Parish Development Committee | Accountability of project resources | Percentage of resources accounted for | Resource accountability form | Quarterly | PDC members use the project resource accountability form

Sub-County and District Level:

At the district and sub-district (more than one community) level, the main actors who have a stake in the community strengthening intervention are the:

Community Development Assistants (CDAs);
Planning Unit; and
District Project Coordinator (DPC), who, if a ministry official, is usually a Community Development Officer (CDO), or an NGO equivalent.

The following table looks at the main issues of interest, monitoring indicators, means of verification, frequency, and suggested monitoring procedures, for each of these three stake holders.

Stake Holder | Issue | Monitoring Indicator | Means of Verification | Frequency | Monitoring Procedure
Community Development Assistant | Functioning of mobilizers and community committees | Number of committees performing their roles | Review of each committee's performance | Twice a year | The CDA, during the qualitative enquiries, determines the performance of each committee
District Project Coordinator and Planning Unit | Identification of projects that fall within the district plan and national priorities | Number of projects under the district plan | Review of project identification reports; project visits | Twice a year | The planning unit reviews the plans from the parishes to establish whether they fall under the district plan and national priority areas
District Project Coordinator and Planning Unit | Community leaders' acquisition of community management skills | Number of villages using community participation in planning and implementing projects | Review of project reports; focus group discussions and other qualitative enquiry techniques | Twice a year | The planning unit conducts qualitative enquiries to find out whether communities are participating in project activities; district specific procedures must be designed when the exercises take place

National and Donor Level:

At the national or country level, there are two main stake holders: (1) the ministry or agency that is implementing the intervention or project, and (2) any external national or international donors that are contributing to the intervention or project.

Stake Holder | Issue | Monitoring Indicator | Means of Verification | Frequency | Monitoring Procedure
National Office and Donors | Community knowledge of methodology | Proportion of people aware of the methodology | Surveys, focus group discussions, key informant interviews | Annually | Agency or Ministry designs and conducts the annual studies
National Office and Donors | Effectiveness of the project design | Percentage of project outputs attained; percentage of design aspects appreciated by the community | Review of project reports, surveys, focus group discussions, key informant interviews | Annually | Agency or Ministry designs and conducts the annual studies
National Office and Donors | Adaptation of implementation experiences by other projects and institutions in the country | Proportion of the project design aspects adapted | National and international discussions | Annually | Agency or Ministry conducts meetings with academic institutions and community projects to find out which methodological aspects have been replicated

Monitoring and Reporting
After the Observations are Made

How to report the observations and analysis.

While this document focuses on the reporting of observations made while monitoring, the next module, Report Writing, looks in more detail at the writing of reports itself.

Reporting is a major activity during project monitoring. It is the way in which information about the process and output of activities, and not just the activities themselves, is shared between the stake holders of the project. In the case of a school construction project, reporting does not end at mentioning the number of times the community met to make bricks and build the school walls; it also mentions the number of bricks and school walls that were constructed, plus the process through which they were accomplished.

In community projects, reporting is mainly done in two ways: verbal and written.

Verbal Reporting:

This is a process where reporting is done orally. It is the commonest means of reporting. Community members find it easier and more effective to communicate to others in words.

The advantages of verbal reporting are:

The ability of a wider proportion of the community to participate. Many community members, especially in rural areas, are illiterate and cannot write. Those that can write find the writing of reports time and resource consuming, which makes them reluctant to document all the information acquired during project monitoring.

Clarity and timely distribution of information. Verbal reporting is usually done immediately after an event. This makes the information arising out of the process more valid, reliable and up to date than information that is documented later. The people who give the reports get an opportunity to discuss with the community and get immediate feedback. This helps in decision making.

Low cost. Verbal reporting cuts down significantly the time and other resources spent on reporting.

The challenges of verbal reporting include:

Wrong reporting. Some community members may deliberately disseminate wrong information verbally to protect their interests. Verbal reporting invites this because a person reporting knows that nobody will be able to disprove the report. In other cases, the people giving the information are not given time to think through their responses.

Storage, replication and consistency. Since information given in verbal reporting is neither documented nor recorded, it is very difficult to keep and retrieve it for further use. The information is kept only in the minds of the people who participated in the implementation of the project. This makes it difficult to share the information with people beyond the community, especially where those who know the information cannot or are not willing to reveal it. The information collected is also not likely to be consistent, especially where past information is needed to generate new data.

Written Reporting:

During monitoring it is important to report on the results of activities, not just the activities. Write down what you observe, along with reviews of the reports of technical people.

The advantages of written reports are:

They provide reliable information for management purposes (written reports can be cross-checked over time with other information to ascertain accuracy);
They help to provide information from the technical people; and
Written reports are easy to manage.

The challenges of written reports are:

Day to day writing during project monitoring activities is often neglected; and
Documentation of reports is very costly in both time and money.

See Levels of Monitoring for an explanation of the levels used here. Uganda uses: 1 = village, 2 = parish, 3 = sub-county, 4 = county and 5 = district.

Reporting Roles of Key Stake Holders:

Community Level:

Project Committees:

Design and publicize (in liaison with mobilizers) the project implementation work plan to the Parish Development Committee, Local Councils and the community;
Compile and publicize the monthly project progress reports to the Parish Development Committee, the Local Councils at village and parish level, and the Community Development Assistant; and
Keep the project site file (including the work plans, monitoring reports and any other specific project information) for each project.

Community Mobilizers:

Prepare reports about the village level project identification process and submit copies to the Parish Development Committee and the Community Development Assistant;
Collect and submit reports about the community and specific individuals in the community; and
Submit reports on all training conducted in the community.

Parish Development Committees:

Give an update about projects in the parish to the community in local council meetings;
Report to the community and the CDA about resources and how they are used in each project; and
Submit an annual report to the CDA on the main actors in the community projects.

Local Councils One and Two:

Document minutes of council and executive meetings for their management decisions and for use by the sub-county, district and national teams.

Sub-County and District Level:

Community Development Assistant:

Submits a monthly summary of project progress reports to the district;
Reports on the status and functioning of community mobilizers, project committees and parish development committees;
Submits a summary of training conducted by, and given to, the mobilizers; and
Submits a report on the main contributors to the community projects to the district.

Community Development Officer (District Coordinator):

Submits a monthly summary of district progress reports to the national office.

National Office:

National Coordinator:

Submits half-year progress reports on the country to the national steering committee, ministry and donors;
Prepares updates of project activities and outputs and submits copies to each district, which in turn publicizes the report to the sub-counties and parishes;
Submits SWOT (Strengths, Weaknesses, Opportunities, Threats) reports twice a year on the strengths and weaknesses of the project design to the ministry and donors. These include bad and good implementation experiences and may be part of the six-month report; and
Compiles and publicizes survey and qualitative enquiry findings whenever such studies are conducted.

Evaluation
A Beginner's Guide


Dinosaurs would have had a better chance to survive if they had evaluated climate changes a bit more carefully.

Introduction

This document aims to present a user-friendly approach to the process of evaluation. It is particularly targeted at those involved in programs working towards the introduction of human rights concepts and values in educational curricula and teaching practices, who are initiating an evaluation for the first time. It contains practical suggestions on how to effectively organize the evaluation of such programs in order to learn from the work implemented so far.

Often the demands put upon human rights activists and the urgency to act against human rights violations mean that no time is put aside to systematically evaluate Human Rights Education (HRE) projects and their effects. However, an evaluation is a positive activity for several reasons:

It is a chance for practitioners to test for themselves that their efforts are working and worthwhile, as well as to identify weaknesses to be remedied.
It is an essential step towards improving effectiveness.
It gives the program credibility in the eyes of those directly and indirectly involved (such as students and funders).
It is essential for good planning and goal setting.
It can raise morale, motivate people and increase awareness of the importance of the work being carried out. It can also help solve internal disputes in an objective and professional way.
It allows others engaged in similar program work, in your country or abroad, to benefit from previous experiences.

    What is evaluation?

Evaluation is a way of reflecting on the work that has been done and the results achieved. A well thought out evaluation will help support and further develop any program; that is why evaluation should be an integrated component of any project or program plan and its implementation.

The process of evaluating is based on evidence (data), which is systematically collected from those involved in the program by various methods, such as surveys and interviews, and from the analysis of documents and background information. The analysis and interpretation of this data enable practitioners to evaluate the program concerned.

    For example, it allows you to ask and answer questions like:

Is the program achieving its goals?
Does the program have an effect? Is the effect different from the set goals? If so, why?
Is the program using its resources (human and financial) effectively?

We have identified three main goals for carrying out an evaluation:


Community impact centred: when the goal of the evaluation is to look at the impact the HRE program is having on the community or the overall society. For example, looking at a human rights training program for the police and evaluating whether such training is having an impact in the community.

Organization centred: when the goal of the evaluation is to look at the life of the organization. This kind of evaluation would ask questions such as: Is the organization functioning well? How is the organization perceived by the public or by the authorities?

Learner centred: when the goal of the evaluation is to look at the personal development of the learner. This means that the evaluation will assess whether the learner is gaining knowledge about human rights, and whether s/he is also acquiring other benefits such as self-worth, confidence, empowerment and commitment to human rights.

Although it might be helpful to focus on only one of these goals initially, it is important to be aware that some overlap will take place. For example, when evaluating the community impact of a project you will inevitably have to look at the efficiency of the organization concerned.

    An evaluation can be carried out by:

Someone who works for or belongs to the organization (internal evaluation);
Someone who does not work in the organization (external evaluation); or
A mixture of the two - a team of people working for the organization and outsiders (a combination of external and internal evaluation).

Depending on the circumstances in which the evaluation is to be carried out, the reasons why you are carrying it out and the resources available, you can choose any one of these three ways of carrying out an evaluation.

Regardless of the general goal you choose, and whether the evaluation is external or internal, evaluations should always:

be action oriented -- intended to lead to better practices and policies. Evaluation reports should include recommendations for improvement;
be carried out, as much as possible, with a participatory approach -- those affected by the evaluation should be allowed, when relevant, to comment on the scope of the evaluation and the evaluation plan; and
take into account the internal as well as the external factors which may be influencing the work and its outcomes.

    Planning an evaluation

Measurable objectives and evaluation should be included in the plan from the start. Evaluation should no

