
M & E Test 1

Date posted: 27-Nov-2015
Uploaded by: tkurasa

1. Evaluation is a learning and management tool, but it differs materially from monitoring. Monitoring is undertaken during project implementation, while evaluation is generally carried out once a project is complete. Monitoring reports provide the database for evaluation, but evaluation cannot contribute directly to monitoring. Evaluation studies are more comprehensive, covering all aspects of a project, whereas monitoring provides information mainly to assess and help maintain or accelerate the progress of implementation. Some key differences between the M and E functions are summarized below:

 

Monitoring (M) | Evaluation (E)
Keeps track of daily activities; a continuous function | Takes a long-range view through in-depth study; a one-time function
Accepts the objectives, targets and norms stipulated in the project document | Questions the pertinence and validity of project objectives and targets
Checks progress towards output targets | Measures performance in terms of objectives
Stresses conversion of inputs to outputs | Emphasizes achievement of overall objectives
Reports on current progress at short intervals for immediate corrective action | Provides an in-depth assessment of performance as feedback for the future

2.

3. Steps to developing an M&E work plan for setting up a VCT centre in a rural area

1. Identify program goals and objectives: set up a VCT centre in a rural area to create awareness and reduce HIV prevalence.

2. Determine M&E questions, indicators, and their feasibility.

Questions: Was the activity carried out as planned? Did it reach its target population? Did any changes in exposure to HIV infection result? How will the risk behaviors of the target population be affected?

Indicators:
- VCT sites set up in the past year
- Clinicians trained in syndromic management of STIs in the last 6 months
- Children provided with psychosocial counseling in the past 3 months
- HIV-infected pregnant women started on ARVs

Feasibility: Will the indicators be able to measure changes over time (sensitivity)? What resources (human and financial) do the indicators require (affordability, feasibility)?

3. Determine the M&E methodology for monitoring the process and evaluating the effects.

Qualitative monitoring answers questions about how well the program elements are being carried out, e.g. changes in people's attitudes toward abstinence, stigma, fidelity, care and support, or condoms.

4. Resolve implementation issues: Who will conduct the monitoring and evaluation (internal or external evaluators)? How will existing M&E data and data from past evaluation studies be used? Once the data collection methods are established, it is important to state clearly who will be responsible for each activity.

5. Identify internal and external M&E resources and capacity: funds and experienced personnel who can assist in planning and conducting monitoring and evaluation activities.

6. Develop an M&E work plan matrix and timeline, presenting the inputs, outputs, outcomes, and impacts, together with their corresponding activities.

7. Develop a plan to disseminate and use evaluation findings: how monitoring and evaluation results will be used, translated into program policy language, disseminated to relevant stakeholders and decision-makers, and applied to ongoing program refinement.

4a. Impact indicators: long-term results

Measure the quality and quantity of long-term results generated by programme outputs (e.g. reduced incidence of HIV and related diseases, reduced mortality).

Outcome indicators: medium-term results

Measure the intermediate results generated by programme outputs. They often correspond to changes in people's behaviour as a result of the programme, e.g. the percentage of young women and men aged 15–24 who have had sexual intercourse before the age of 15.

Output indicators: short-term results

Measure the quantity, quality, and timeliness of the products (goods or services) that result from an activity, project, or programme, e.g. the number of male and female condoms distributed to end users in the last 12 months.

Input indicators:

Input and process indicators are used for monitoring whether appropriate HIV policies have been issued and whether adequate resource inputs have been allocated and implemented, e.g. whether enough funds and trained personnel have been assigned to an HIV project.
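In practice, outcome and output indicators like those above are computed as counts or percentages over survey or service records. The following is a minimal sketch of how the outcome indicator "percentage of young people aged 15–24 who had sexual intercourse before age 15" might be computed; the record fields are illustrative assumptions, not from any real dataset.

```python
# Hypothetical sketch: computing an outcome indicator from survey records.
# Field names ("age", "age_at_first_sex") are illustrative assumptions.

def outcome_indicator(records):
    """Percentage of respondents aged 15-24 whose first intercourse was before 15."""
    eligible = [r for r in records if 15 <= r["age"] <= 24]
    if not eligible:
        return 0.0
    numerator = sum(
        1 for r in eligible
        if r["age_at_first_sex"] is not None and r["age_at_first_sex"] < 15
    )
    return 100.0 * numerator / len(eligible)

survey = [
    {"age": 17, "age_at_first_sex": 14},
    {"age": 22, "age_at_first_sex": 16},
    {"age": 19, "age_at_first_sex": None},  # never had intercourse
    {"age": 30, "age_at_first_sex": 13},    # outside the 15-24 group, excluded
]
print(round(outcome_indicator(survey), 2))  # 1 of 3 eligible respondents -> 33.33
```

Note that the denominator (the eligible group) must be defined as carefully as the numerator; this is what makes the indicator comparable across survey rounds.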

4b. How to develop indicators

To develop indicators, you can follow these steps:

Step 1: Identify the problem situation your project is addressing (baseline data, needs assessment).

Step 2: Develop a vision of what the objectives of your project are. Based on these project objectives, work out which data could give you an indication of whether you have achieved what you were attempting. For instance, if you are working in the health sector, possible questions could be: Has the infant mortality rate gone down? Do fewer women die during childbirth? Has the HIV-infection rate been reduced?

Step 3: Now identify ways in which to achieve your objectives. This exercise will lead you to the process indicators. If you want success to be attained through community mobilisation, then your process indicators might include the number of community health workers that have been trained.

Step 4: The next step is to define indicators for effectiveness. If you have a project that aims to increase the secondary-school pass rate by training teachers, you have to find out whether the project has achieved its objective. For example, you could circulate questionnaires among students to establish whether they are satisfied with the quality of their teachers. It is best to compare this data with data gathered before project implementation (pre-test/post-test model).

Step 5: Last but not least, develop indicators that measure the efficiency of the project, such as whether the envisaged workshops ran within the planned timeframe and whether their costs were kept to a minimum and within budget.

4c. Characteristics of a good indicator:

Valid: Measures the effect it is supposed to measure

Reliable: Gives same result if measured in the same way

Precise: Is operationally defined so people are clear about what they are measuring

Timely: Can be measured at an interval that is appropriate to the level of change expected

Comparable: Can be compared across different target groups or project approaches

Valid–For example, if we use the indicator "knows at least three modern methods of family planning," this will give an indication over time of a changing level of knowledge. If, however, we are interested in whether people's interest in using family planning is changing, this would NOT be a valid indicator.

Reliable–If we use the same indicator, it should be reliable when asked by different people during different survey rounds. However, if women (or interviewers) are not clear about the definition of "modern," the reliability may be compromised, since different people may count different methods as modern.

Precise–As mentioned above, if we can clearly define our indicator by including a list of modern FP methods that are acceptable answers to be counted among the three, then the indicator is precise enough to determine whether the respondent can be counted among those who know three modern FP methods.

Timely–With a concerted education effort, we certainly expect to be able to see change in this indicator within a relatively short time. However, if for example we have a 2–3-year project and want to measure change in family size, the indicator (family size decreased) will not be observable within the life of the project.

Comparable–Our indicator on knowledge of three modern family planning methods is easily comparable across different groups. For example, it would be easy to compare whether husbands’ and wives’ knowledge levels are the same, or whether couples who received counseling vs. those who did not had the same knowledge level. In contrast, if we were to choose an indicator that is intervention-specific, such as those who receive counseling and know at least three modern methods, we could use this on the subgroup of people who received counseling but could not use this indicator with the population at large.
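The "precise" criterion above amounts to agreeing on an operational definition before measurement begins. As a minimal sketch, the family-planning indicator only becomes measurable once the acceptable answers are fixed in advance; the method list below is illustrative, not an official classification.

```python
# Hypothetical sketch: operationalizing the indicator
# "knows at least three modern family planning methods".
# The list of methods counted as "modern" is an illustrative assumption.

MODERN_METHODS = {
    "pill", "iud", "implant", "injectable",
    "male condom", "female condom", "sterilisation",
}

def knows_three_modern_methods(answers):
    """Count only answers that appear on the agreed list of modern methods."""
    recognised = {a.strip().lower() for a in answers} & MODERN_METHODS
    return len(recognised) >= 3

print(knows_three_modern_methods(["Pill", "IUD", "herbs", "implant"]))  # True
print(knows_three_modern_methods(["Pill", "withdrawal", "herbs"]))      # False
```

Because every interviewer scores answers against the same fixed list, the indicator is both precise and reliable across survey rounds.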

5. Most significant change (MSC) is a story-based, qualitative and participatory approach to monitoring and evaluation that involves the collection of significant change (SC) stories from the field level and the systematic selection of the most significant of these. The stories can be used for different domains of change: program evaluation, organizational review and evaluation, and building community ownership through participatory evaluation. The advantage of this approach is that it is participatory, involves multiple stakeholders, and does not use pre-set indicators, and can therefore capture unexpected and unanticipated changes; however, it can be very time consuming.

b. The logical framework approach (LFA) is a systematic planning procedure for complete project-cycle management. It is a problem-solving approach that takes into account the views of all stakeholders. It also agrees on the criteria for project success and lists the major assumptions.

The LOGFRAME MATRIX is a participatory planning, monitoring and evaluation tool whose power depends on the degree to which it incorporates the full range of views of intended beneficiaries and others who have a stake in the programme design. It is a tool for summarizing the key features of a programme and is best used to help programme designers and stakeholders.

THE LOGFRAME MATRIX SERVES THE FOLLOWING FUNCTIONS

- A tool for planning a logical set of interventions
- A tool for appraising a programme document
- A concise summary of the programme
- A tool for monitoring progress made with regard to delivery of outputs and activities
- A tool for evaluating the impact of programme outputs, i.e. progress in achieving purpose and goal

6. Questionnaires

Provide a standardized approach to obtaining information on a wide range of topics from a large number or diversity of stakeholders (usually employing sampling techniques), covering their attitudes, beliefs, opinions, perceptions, and levels of satisfaction.

Merits

Good for gathering descriptive data on a wide range of topics quickly at relatively low cost. Easy to analyse. Gives anonymity to respondents.

Demerits

Self-reporting may lead to biased reporting. Data may provide a general picture but may lack depth. May not provide adequate information on context. Subject to sampling bias.

Interviews

Solicit person-to-person responses to pre-determined questions designed to obtain in-depth information about a person's impressions or experiences, or to learn more about their answers to questionnaires or surveys.

Merits

Facilitates fuller coverage, range and depth of information on a topic.

Demerits

Can be time consuming. Can be difficult to analyse. Can be costly. Potential for the interviewer to bias the client's responses.

Expert Panels

A peer review or reference group composed of external experts who provide input on technical or other substantive topics covered by the evaluation.

Merits

Adds credibility. Can serve as an added (expert) source of information that can provide greater depth. Can verify or substantiate information and results in the topic area.

Focus Groups

A small group (6 to 8 people) is interviewed together to explore in depth stakeholder opinions, similar or divergent points of view, or judgements about a development initiative or policy, as well as information about their behaviours, understanding and perceptions of an initiative, or to collect information about tangible and non-tangible changes resulting from an initiative.

Merits

Quick, reliable way to obtain common impressions from diverse stakeholders. Efficient way to obtain a high degree of range and depth of information in a short time.

Demerits

Can be hard to analyse responses. Requires a trained facilitator. May be difficult to schedule.
