Observing Change
Results based planning, monitoring and reporting (PMR)


• If you do not measure results, how will you tell success from failure?
• If you cannot show success, how can you reward it?
• If you cannot reward success, are you rewarding failure?
• If you cannot see success, how can you learn from it?
• If you cannot see failure, how can you correct it?
• If you can show results, you can win support!

(adapted from Osborne and Gaebler 1992)


Contents

About Norwegian People's Aid

Chapter 1 Observing success and failure

Chapter 2 Sharp language

Chapter 3 Unwrapped results

Chapter 4 Monitoring - a frame of mind

Checklist: Results

Checklist: Baselines

Checklist: Indicators

Checklist: Language

Checklist: Monitoring

Checklist: Evaluations

Checklist: Stories

References

Your personal notes


About Norwegian People's Aid and this book

Norwegian People's Aid (NPA) is a Norwegian NGO working internationally with Development and Mine Action programs.

NPA's International Program Department works with more than 100 partner organizations in approximately 15 countries. NPA's cooperating partners are responsible for implementing programs and projects. NPA's role is to support these partners in their program work and through capacity building and organizational development.

The Mine Action programs have other types of strategies, partners and approaches compared to the international development department. Standards for planning, monitoring and reporting (PMR) differ accordingly. To a large extent, Mine Action can rely on quantitatively measurable categories in monitoring: the scale of areas covered, the number of mines detonated, the number of mine personnel, dogs and rats at work. The Development Department likewise relies on numbers and figures when monitoring political and social development work. But quantifiable information is often only the necessary starting point for planning, monitoring and reporting (PMR), and for identifying results.

This book primarily addresses PMR in the development programs and focuses mainly on the qualitative aspects of change.

NPA will not introduce a new monitoring system, but aims to improve the quality of the information within the existing monitoring systems. Instead of a new system, NPA will encourage a change of attitude in PMR: to simplify systems and language in order to be able to monitor and document results.

This book presents an approach to make planning, monitoring and reporting (PMR) more practical. It advocates the use of adapted tools and simple language rather than the use of global tools, language and methods.

The book is intended for NPA program staff at all levels, but can also be used with and by partners.

The book is divided into two parts. Part one consists of four chapters. Chapter 1 is an introduction to the topic. Chapter 2 is about development language and how to communicate using fewer buzzwords. Chapter 3 suggests some steps to take in order to highlight results in program work and PMR. Chapter 4 provides an overview of some basic monitoring approaches. Part two consists of a series of independent checklists.

All the examples used in this handbook are from NPA or NPA partners’ recent plans or reports. They have not been selected because they stand out as extreme, nor to point a finger at the program. They have been chosen because they illustrate common challenges.


This book is the result of the following experiences:

• Over the last decade NPA has rolled out different methods for monitoring and evaluation (M&E), the Logical Framework Analysis (LFA), Program Evaluation Support (PES) and Most Significant Change (MSC) being the most important ones. We have learnt that a one-size-fits-all PMR system imposed from a head office does not match well with needs in the program context. In addition, the NPA partnership strategy emphasizes local ownership of programs, agendas, tools and methods. Centralized systems and standards are not well suited for accommodating local ownership, capacities and needs.

• Annual plans and reports for 2008/2009 have been studied, particularly with a view to identifying topics to be addressed in a PMR handbook.

• The NPA project 'Unwrapping results in planning, monitoring and reporting' 2008-2010 has involved more than 130 key NPA and partner staff in workshops in the Balkans, Ethiopia, Lebanon and Iraq, South Africa/Mozambique/Zimbabwe, Rwanda, Angola, Tanzania, South Sudan, Burma, and Cambodia. In the process we learnt that many key program staff consider monitoring a difficult, technical and demanding task. The abstract theory has made reports difficult to write, read and understand. Many interesting findings and results are therefore not documented and shared. We have also learnt that there seems to be no lack of formal M&E systems or M&E expertise, but maybe a lack of ownership of planning, monitoring and reporting processes.

• In 2006, the Norwegian Association of Disabled (NAD) published a manual on results-based planning and reporting. The process was facilitated by K. Berre. The current approach leans on 'pilot' experiences made in the NAD network. NPA is grateful to NAD for supporting this next phase of results work.

• Thanks to all workshop participants who have contributed constructive and critical comments to make this handbook a working tool in progress. Thanks also to Helle, Sveinung, Martin, Eva, David, Kristine and Anna at NPA head office for patient and impatient commenting on the chapters and checklists throughout the writing process. We hope that the book will be a tool for making planning, monitoring and reporting easier, more useful and more meaningful.

Oslo, October 2010


Chapter 1
Observing success and failure

This chapter outlines the approach and NPA's choices regarding planning, monitoring and reporting (PMR).

Organizations that want to see results in advocacy, empowerment or mobilization have for decades been discussing how to measure, monitor and describe change in these areas. So far we have not been very successful. We keep track of activities, for example by counting the number of workshops held and the number of participants. This is important information in all PMR, as it justifies budget spent and activities completed. But when we leap from having carried out some activities to assuming that the activities were a success, and that, for example, awareness has been raised, we are taking a shortcut.


NPA's choices for planning, monitoring and reporting

While many manuals talk about monitoring and evaluation (M&E) as part of the same package, this approach separates planning, monitoring and reporting (PMR) on the one hand from evaluation (E) on the other. Monitoring and evaluation are of course analytically connected, but there is at least one good reason for the separation. Evaluation approaches often have high methodological standards, a separate budget, and scientific requirements for measurements and analysis. Monitoring, on the other hand, is a routine task: regularly 'checking up on how things are' and recording this information in a way that is useful to staff, partners, international colleagues and donors alike.

When evaluation standards are also applied to day-to-day PMR work, monitoring becomes a task that is too complicated and time consuming for program staff. Program staff know perfectly well about status, successes, and failures in the program. But when this knowledge is not systematized or passed on to others, monitoring becomes informal and/or random.

Partner organizations vary in size, program focus, management styles, capacity and history. They operate within very different political, social and cultural contexts. Each partner organization may have several different donors or cooperating partners, each with different requirements for monitoring and evaluation procedures, and different formats for financial and narrative reports.

NPA understands a result to be 'the changed situation for the target group/organization/partner after the activities have taken place.'

When civil society organizations improve infrastructure or health services they report 'tangible results' that are easy to measure and to describe. For example, after an earthquake, bridges and buildings are reconstructed; schools stand complete with blackboards and are filled with children and teachers. Broadly speaking, monitoring such a project is an easy job: checking that the buildings match the drawings in the plan, that the classes are full, that there are pupils at different times of the school year, that there are teachers and that they are paid, that there is a curriculum, etc. Count, tick off, make a note, take a picture and compile the report. Program staff, colleagues and donors can easily understand the nature of the result, the challenges and the progress.

Today's results in NPA development programs, however, are mostly 'intangible', more unpredictable than planning to build a school, and far more difficult to measure. How can a result such as 'women in 11 communities have been empowered' be made more convincing? Even though we cannot take pictures of empowerment, as we can with the school, there are some options for better documenting achievements and failures.


PMR is about observing and describing reality

The information we put into a system is more important than the format in which the information appears.

Many different methods can be relevant to planning, monitoring and reporting (PMR). PMR should make it possible to document actual achievements in social and political programs in civil society, so that they can be understandable, credible and monitorable.

PMR should be about the ‘evidence on the ground’, to justify the program’s/project’s overarching goals, objectives and principles.

Monitoring tools should therefore be ‘bottom up’, reflecting context, local capacities and the specific choices made by people in the target groups.

PMR systems are dynamic, not static. They are best developed step­by­step, growing with the capacity of the program teams who need to have ownership of their methods in order to use them properly. This is far more important than having a perfect system in place. Observing and describing results does not depend on the method or system used. It relies on how the chosen method is used, that it is used, and what goes into the system. Other important factors are the commitment of the management to documenting ‘things as they really are’, personal interest, curiosity, interview techniques, people skills, etc. Monitoring and communicating results must be an ongoing task for management and all involved staff.

NPA does not want to roll out yet another PMR system to programs and partners that may have good systems in place. At this stage, a new system will not solve the core challenges of monitoring: actually doing relevant monitoring and using the findings from monitoring to improve programs and projects.

This book is meant as a supplement to, not a replacement for, other relevant results-based approaches and methods. It does not go into detail about areas that are extensively covered by other manuals and guidelines (see references). The book also deals with a challenge shared by many systems such as the logical framework analysis (LFA): they tend to grow and become so complex that a specialist is required. NPA's concern is to keep the systems and tools useful and manageable for program staff.


Chapter 2
Sharp language

This chapter is about how we communicate in development programs, how we often fail to communicate, and what we can do to improve the language.

Keep it simple

Global professional languages, such as those of law, medicine and the social sciences, have specialized terminology. This can sometimes ease communication within these professional groups. But unlike these professions, development language is not rooted in just one professional field. People working in development have different professional backgrounds, from politics and agriculture to carpentry and philosophy. They work in different countries and in different social and political settings. Development workers have different traditions for communication, and for using, understanding and writing English. They bring different terminology and expectations with them into their work, but often speak and write using universal standard terms.

Cartoon:
'The importance of mobilization maintains to be a vital focus and gives a sustainable momentum for organizational development enhancing and facilitating a democratic participatory and sustainable process, and is of great importance in order to mobilize in democratic processes…'
'What she means to say is that democracy would be a good thing.'
'Yes, that is fine, we all agree. But hey, how is your program doing?'

Planning, monitoring and reporting (PMR) requires skills, but not necessarily in designing or using complicated technical formats and systems. The most important skills in PMR are non-technical: the ability and will to describe results as they actually happen.

Buzzwords in development language

Ask any citizen or politician about 'good governance' and you will get different answers. In order to find out what 'good governance' means in development programs, it needs to be defined. Definitions keep on expanding. The World Bank now has 340 different criteria for 'good governance'. (Øyvind Eggen, NUPI)

Development language is rooted in laudable global conventions, political theory and internationally agreed standards. It consists of a mix of technically neutral terms (for example: project cycle, indicators, networks, workshop, and implementation) and words signalling ambitions, values and political positions (democratic institutions, marginalized groups, participatory, capacity building, empowerment, rights-based, strong civil society, gender equality, universal human rights, good governance and transparency).

These phrases are universally used by politicians, diplomats, UN leaders, and grassroots movements all over the world, as well as by dictators! NGOs, American and African presidents, and activists all use them in speeches or documents, but without necessarily showing what they mean, or that they mean it.

Good intentions and political positions are important, but can easily turn into empty buzzwords if the term remains general.

Development buzzwords are a major obstacle in PMR because they hide people and what the change means to them.

For example, many programs report that 'mobilization' has taken place, but often fail to show how; that networks 'are in operation', but not which ones, how, or for what; or that 'strengthening of an organization', 'democratic structures', 'partnership', 'participation' or 'redistribution of power' has happened, without showing what this means in practice.


When a project in Guatemala on paper seems to have the same challenges, solutions and results as a program in the Balkans, it is likely that buzzwords have taken the place of description, and that interesting information is hidden.

Two examples from an NPA annual report, with comments:

1. Quote from the report: 'Land planning systems and structures have been established and are successfully institutionalized.'

This result statement does not answer: What type of systems and structures? Established where, by whom and for what? What institutions? Where are the people in this result?

(Revised version) 'Elders from the pastoralist group VB and regional authorities in V have met every 3 months over the last year to discuss cases of land dispute. Of 7 cases, 3 were solved: 2 cases of illegal fencing of water wells and 1 case of privatizing common land.'

Here the result explains what the standard terms mean for the people affected.

2. Quote from the report: 'The "women can-do-it" program in (country) contributed to the process of building institutional gender mechanisms.'

'To contribute to the process of building institutional gender mechanisms' might be fine at the level of strategy or policy. As a result, however, the statement requires precise, selected, specific documentation about how the program changed what situation, and for whom.



'Values' or 'results'?

In order to be able to identify, monitor and document good results, a first step is to distinguish between how you talk about values/visions, and how you talk about concrete achievements.

This model is borrowed from yin and yang in Chinese philosophy, and symbolizes two contradictory but complementary principles, such as night and day, hot and cold, male and female, etc. Applied to PMR, the circle represents the entire program, encompassing the goals as well as the smallest activity. 'Value' words provide information about the organization's ideological position (yin). 'Results' words on the other side (yang) provide information about how this ideological position is put into practice in that particular program.

As a rule, policies, goals and strategies need a different set of words than those used for planning, monitoring and reporting (PMR).

PMR results should describe something that can be noticed, seen, and monitored. 'Democracy enhanced' might be the overall conclusion after an evaluation of an obviously very successful program after 10 years. 'Democracy enhanced' in a PMR document, however, merely signals a general value or intention.


Show, don't tell

Finding alternatives to buzzwords can be difficult at first; we are used to them, and they pop up almost automatically. Detecting whether buzzwords are used is therefore the first step on the way to documenting results, and must be done by program staff at all levels: management, program countries, head office, and partners.

More examples where good results are ‘hidden’ inside buzzwords:

Reported results, with comments:

• 'Youth have enhanced their knowledge of their different rights, developing their capacities and skills of participation in community.'
What youth? Where? What is enhanced? What type of knowledge, what different rights? Why, how, and for what do they participate in their communities?

• 'New networks have been established between communities.'
What kind of networks? What do they achieve? What kind of communities? What does 'established' mean: juridical, informal, as part of local government?

• 'Marginalized communities in H and V opened up through a process of debate.'
Who are the marginalized communities here? What is meant by 'opened up'? What is a process of debate?

• 'Decreased incidence of land rights related conflict in the project area.'
Show how!

Show, don't tell. Unwrap. Select the most important information. Depending on the requirements and reporting format, use numbers, names, facts, illustrations, personal stories, and descriptions to show what you mean. Then select the most relevant information and avoid generalisations.


Clear language is good for democracy

Development language and buzzwords standardize relationships and mould fundamentally different approaches into one blueprint version of reality. When you work in partnerships you may share overarching objectives, but your practices will vary according to local solutions and situations in the context. A respect for diversity in approaches and language should guide all PMR work.

In cases where documents and reports reveal politically sensitive issues, clearly spelling out the achievements and challenges may create difficulties in countries ruled by authoritarian regimes. Transparent information about what the organization does may be politically sensitive and therefore dangerous. Not all organizations and target groups will want such public exposure and would therefore prefer not to publish details about their outcomes and performance. If this is the case, alternatives to communicating transparent results-based PMR will have to be discussed in each individual case.

Writing is a profession, and writing skills are a personal, creative gift that is not learnt overnight. Writing concisely and precisely, especially in a foreign language, is challenging. But clear language is a prerequisite for transparency: it will help improve the quality of dialogue and learning, and ultimately strengthen democratic structures.

Changing old communication habits

Changing old habits is more uncomfortable and difficult to do than changing a method, system or format.

Personal effort and good management are needed to start using simple language and to create an atmosphere where reporting 'failures' is encouraged.

Buzzwords are not only a habit, but also comfortably fuzzy: they hint at something good and can at the same time cover all or nothing. They provide room for flexibility and interpretation. Buzzwords ‘warm our hearts’, as one workshop participant said, ‘they are like old friends who we do not want to say goodbye to’.

It takes tough decisions from managers, as well as effort from staff, to change attitudes and habits, to select what to include in a document, what to keep in the program files, what to delete, and what buzzwords to unwrap.


Chapter 3

Unwrapped results

This chapter suggests some steps towards communicating program results.

Many programs fail to show what results they have achieved. Not because of ill will or secrecy, but because good and bad results alike are hidden in layers of visionary terms and buzzwords. Good achievements are sometimes accidentally stumbled upon by outsiders to the program, who ask program staff informally, who then tell the stories. Sometimes evaluations reveal results and successes that have not been previously known even to colleagues or other departments in NPA.

Unwrapping results is crucial in good planning, monitoring and reporting: to learn from experiences, to improve programs, and to communicate these to others.


This result is from an annual report. The actual result is a great success, but it is hidden in layers of buzzwords: 'The level of awareness of the citizens on the developments that took place throughout the year was markedly improved as evidenced by the maturity in the level of participation and engagement with traditional leadership in the public meetings.'

A reader of the report does not know the program and thinks: 'It sounds nice, and I think I trust the organization. But what actually happened, and for whom? The "improved level of awareness", the "maturity in the level of participation": what do they mean? What is the specific change here? Are there any people involved?'

To some extent, the following steps overlap with previously mentioned points, but are still presented separately.


Step 1: Use the results chain to organize PMR information

In the first phase of planning, monitoring and reporting (PMR) sessions, place essential information only into 5 main categories: input, activity, output, outcome and impact. (Information under 'activity' often overlaps with 'output'; outcome and impact together make up the 'results'.)

Input: Includes everything invested in the project in terms of money, manpower or infrastructure.

Activity: Includes everything 'done' by the partner (for the projects) or by NPA (with organizational development of the partner) in order to obtain a result: it could be paying the salary of an accountant, holding workshops/training sessions, preparing radio programs, writing for or printing papers, etc. The activity should not say anything about the quality or aim of the activity.

Output: Refers to the direct and immediate consequence of the activities: expected (plan) or completed (in report). An output is a step on the way towards an achievement, but is not yet a result. In most programs it may take several years to be able to report more than at output/activity level. (Example: if the activity was 'partner organization to hold 3 training sessions on gender equality (GE)', an output after 1 year could be '26 female teachers completed training on GE'.)

Outcome: Refers to the change that the organization and/or the target group will notice as a short, medium or long term consequence of the activities. In the report, an outcome is described as either positive or negative, as planned, or different from planned. Indicators (see chapter 4 and appendix) are only required by NPA at this level.

Impact: Is the result of many factors, some of which lie outside the control of the program: effects of programs in civil society as well as other processes. Impact is measured and described at society level. Impact is usually not measured or monitored in the ordinary PMR process, but in evaluations at national level, or in larger reviews initiated by a donor, NPA or a partner (see chapter 4).

Step 2: Focus on outcome in PMR

All links in the results chain are important in program work. But in PMR sessions for planning, programming, indicators and reporting, special attention should be paid to making information in the outcome link specific, realistic and concrete.

When planned results at outcome level have been formulated clearly, information to fill the other links will come more easily. Stay realistic: adjust the ambitions for the desired outcome according to available resources.

Sometimes all a program can show in terms of achievements during its first years are outputs (for example, the number of participants who attended a workshop). An output can tell us about a group of people's physical presence at the workshop. This is important information to monitor and to record in order to assess the performance of the organization. But the recorded output does not tell us anything about the quality of the participation or whether the workshop made any difference to the choices made by the participants.

A workshop might in the long run lead to 'heightened awareness' and to changes in the way that community leaders do their job in the future. Quantifiable participation in training or a workshop is often all we have to show in the first year(s) of a program. NPA's main donor, Norad, includes this type of OUTPUT as a result. NPA agrees that activities and output are achievements in their own right, that they are necessary stages towards the result, and that they need to be monitored and reported. However, we draw a line between output (completed activities etc.) and outcome (the change as a result of the activities), and only count the latter as full results. Only qualitative monitoring can tell us anything about what kind of change we are looking at, and about the process.

Case: A results chain, with emphasis on information at outcome level:

The objective: To reduce incidences of gender based violence (GBV) in district X by year 2012.

Input: 30,000 NOK (from NPA to partner organization).

Activity: Partner organization XX conducts 2 weeks' training of 25 high school teachers on gender based violence (GBV), semi-annually.

Output (in report): 19 teachers completed the training on GBV (year 1).

Outcome year 1: 2 teachers have included GBV in their teaching routine. 57 students have knowledge about GBV. (Indicators at this level are important, for example: Do the curricula used by the teachers include GBV topics? Do students report that teaching takes place in this subject?)

Outcome year 3: 1) 3 schools in the district have established a 'fight violence' student board. In total 17 cases of GBV have been addressed by the principal in monthly school meetings since the program started. 2) 7 teachers have included GBV in their lessons; 12 teachers have never applied the training in their teaching. 3) The public discussions about gender equality have led religious leaders in 3 school districts to reinforce the rule for girls to wear modest dress and head covering while at school.

Impact: (Not possible at this stage to say whether the number of incidences has been reduced. Evaluation of the whole program will take place in January 2012.)


When working to identify results, outputs must be documented, but measurements must go beyond activities and output. Look for the short and long-term consequences of activities. Outcome statements in the plan and report should be clear and concrete, specifying timeframes and target groups (beneficiaries/members/organizations and others).

The results matrix should contain key information, be kept as simple as possible, and have 'outcome' at the centre of discussions.

Step 3: Results are about all kinds of change

Development programs or projects that achieve all their plans are rare. Results reported must reflect this reality. Results can be lacking (for example, 'no change within the target group has been noticed in spite of 5 years' training') or even negative ('as a result of the training, expectations among the participants about gaining access to the land by the lake were too high. Disappointed project participants, who after years of lobbying still had no access, decided to close down the project and looted the partner organization's office.')

In some cases (for example for civil society organizations in Zimbabwe and Palestine) few people expect a program to produce a positive outcome. Just keeping an organization afloat under difficult conditions may require a tremendous effort, and this might be a good result given the circumstances. Reports must not be embellished to make results look 'better', but should strive to capture all kinds of processes and changes. In order to be credible and trustworthy, setbacks must also be recorded. Reports should avoid generalizing and standardizing statements (like 'the project experienced serious setbacks') and instead specify or give examples of these setbacks and the main results, be they negative, lacking or positive. Negative results must be recorded, reported, and later analyzed. These kinds of experience are our best source for learning and improving next time around.

Mobilization efforts do not always end up just as planned!



Step 4: Distinguish between words describing 'results' and words describing overarching 'values'

Fortunately, most development workers are guided by high ambitions and political visions of a better society. These values are the foundation of organizational identity, policy documents, strategies and visions. However, during planning, monitoring and reporting (PMR) the overarching values need to be put aside. Not because values are unimportant, but because values-based language makes it difficult or impossible to monitor and communicate actual findings. Planning, monitoring and reporting (PMR) is about ordinary, and sometimes even disappointing, reality. Evaluations and assessments, on the other hand, will make the necessary link between empirical findings from PMR and overall values, strategies and policies. (See also chapter 2.)

Select only the words that serve the PMR purpose.


Step 5: Expected result in a plan and achieved result in a report

A plan is based on professional estimates and qualified guesswork, since it looks into an unpredictable future. A plan may therefore be a bit general. However, the report at the end of a planning period must reflect what took place, where changes occurred, and sometimes even how these changes were dealt with. The report must demonstrate that systematic monitoring has taken place throughout the reporting period. If the report merely repeats the same phrases as the plan, this may signal that the project/program was not monitored, that the report is the product of a cut-and-paste desk job, and/or that the reporting does not pick up on changes.

Putting together a report does not mean that all available and relevant information must be presented. The trick is to select from among the small and/or big changes, or lack of changes, that have been registered throughout the period. Examples or cases should also be chosen as illustration.

Example in which the report does not add information to what was presented in the plan:

Plan (for year X): Capacity of the organization will be strengthened….
Report (after year X): The capacity of the organization has been strengthened.

Plans should ideally be clear and measurable. Many plans however are general, intangible and point towards a rather vague positive change: 'raise awareness', 'increase capacity', 'empower', etc. When monitoring and reporting, these buzzwords must be given specific content; they must be 'unwrapped' to reveal the hidden meaning: What kind of capacity? What is the product so far? Who was affected?

Example: How information could be reported:

Plan: Management capacity of the organization will be strengthened.
Report (result): The organization has written and uses 2 out of 5 planned steering documents (strategic plan and personnel administration guideline).

Examples of results that are hidden, and the same results unwrapped:

• Hidden: '… increased mobilization…'
Unwrapped: In June 2009, 153 persons participated in a rally against X legislation in the country capital.

• Hidden: '… capacity of the organization has been strengthened…'
Unwrapped: Organization X has elected a management team and has held its first board meeting.

• Hidden: '… more people have gained access to natural resources.'
Unwrapped: 32 families in South Y are again allowed access to grazing ground that had been unlawfully fenced off by private farmers.

• Hidden: '… number of women's rights violations has been reduced.'
Unwrapped: The number of cases of gender violence reported to the NN police fell from 63 in 2007 to 15 in 2009.

• Hidden: '… the interest in taking part in dialogue with authorities on land rights has increased among community members.'
Unwrapped: In one of the 3 target communities (ZX), 2 male and 1 female representatives from the community participated in an elders meeting on conflict resolution. (A case would be great here!!!)


Step 6: Specify at which level results are found

Broadly speaking, results in the NPA network are found at three levels:

1. NPA supports partner programs/projects in communities, and results are to be measured as change among networks, target groups or constituencies. NPA, in cooperation with the partner, is responsible for monitoring and reporting these results.

2. NPA supports the partner organization on capacity building or organizational development (OD). Results must primarily be monitored following the OD process with the partner. In addition, NPA is obliged to periodically assess the work of the partners with their constituencies.

3. NPA implements its own programs, and monitors progress and results as part of ordinary program work.

Examples of results at the three levels (input, activities, output, outcome, impact):

1. NPA → Partner Y → program
Input: 10,000 NOK
Activities: Y holds 2 workshops on land rights for peasants in districts A+C
Output: 46 peasants (3 female) have attended the training
Outcome: In (date), a peasant group from district C signed a petition to local government, demanding XZ.
Impact: Plantation owner P, who had unlawfully grabbed common land, has evacuated the site. 71 smallholding farmers have moved back.

2. NPA → Partner Y
Input: 10,000 NOK
Activities: NPA covers the training fee for the chief accountant working for Partner Y
Output: Admin staff uses accounting system X on the yearly accounts.
Outcome: Accounts of partner organization Y accepted by an international accounting firm.
Impact: Accounts of Partner Y follow international standards. Y attracts more donors.

3. NPA → program
Input: 10,000 NOK
Activities: 5,000 eucalyptus seedlings planted
Output: Approx. 2,300 seedlings survived the 1st year.
Outcome: 3 students at the agricultural school are responsible for the plantation and seedlings.

Chapter 4
Monitoring - a frame of mind

This chapter is based on the understanding that monitoring is not primarily a matter of systems or methods, but a matter of approach and attitude. It encourages monitoring systems to be designed locally, and to be flexible and simple.

We monitor in all walks of life. We monitor our children, the food, our health, the weather, the fuel consumption of our car. In these daily routines, we use a baseline (what is 'normal' in the circumstances) and indicators (the signs that tell us that things are not 'normal'). Everyday indicators are based on the behaviour of the healthy baby, the colour and smell of good food, the colour of the clouds in the sky, or the average fuel consumption per km. These signs help us decide what step to take next: seeing a doctor, throwing away smelly food, taking an umbrella for the day, selling or repairing the car.

Monitoring development activities, projects and programs is based on a similar common sense logic. In order to know whether things are on the right track, we need to look for signs of change, record them, and compare the situation today with how things were yesterday or last year.

Without a baseline, and without effective and practical monitoring, program work can become random. Monitoring can help good programs become better, and can help programs that are on the wrong track get back on the right track. Without monitoring, it is difficult to justify continuing to do what we are doing, or to explain why anybody should keep supporting it.

Design and use a monitoring system that suits your needs and produces the documentation required.

The rise and fall of monitoring systems

All program countries and partners monitor their work either formally or informally. Most larger partner organizations and country programs have a system in place for planning, monitoring, reporting and evaluation. This is often based on a version of LFA (logical framework analysis). Many partners and programs find these monitoring systems useful; others find them complicated to use and may therefore not monitor systematically. People involved in programs have detailed knowledge about progress and results. However, this information is often lost on the way from the individual memories and experiences, through the monitoring formats, to the report. Plans as well as reports end up, as the previous chapters have shown, not doing justice to interesting programs, by following standardized language and complicated formats.

In development, monitoring and evaluation (M&E) has become a profession for specialists who are able to utilise complicated LFA systems and M&E vocabulary. While civil society organizations proclaim that programs are partner oriented and bottom-up, many monitoring systems are designed at donor level, are managed by academics with special skills, and have a clear top-down effect. NPA wishes to reverse this trend by simplifying the monitoring approach so that PMR can be managed by program practitioners. Program staff/stakeholders should consider monitoring part of ordinary program work. The monitoring method should therefore be kept under their control.

Over the last 10 years, NPA has made attempts to streamline monitoring and evaluation functions. LFA has been recommended as the norm, but other systems have also been rolled out, such as PES (program evaluation system), and MSC (most significant change).


Looking back, we have learned that introducing new methods does not lead to systematic monitoring taking place in the programs. This is also a finding in the '2007 Organizational Review of NPA' (Kruse et al.). Other approaches are required.

NPA's approach to monitoring

Monitoring is about more than a system; it is about a monitoring culture. The chosen monitoring methods should be designed to fit the specific needs, capacities and size of each program/organization. Monitoring should be practical, participatory, ensure quality in the program, and give others insight into the program (transparency). A monitoring system may look impressive, but in NPA's view it is only valid if 1) it is actually used regularly, and 2) it manages to produce relevant information for internal monitoring, and for plans and reports. Data obtained from monitoring should be compiled and selected to provide content for reports and plans. At the same time, precise plans and reports are the basis for good monitoring. Monitoring should not be scientific and impressive, but practical and 'good enough'.

Measuring change

To monitor change in 'intangible' areas like attitudinal change, mobilization, awareness, empowerment and organizational development, we need both measurable (quantitative) and descriptive (qualitative) methods. Monitoring progress and change in these programs has often been a matter of counting activities (the output), for example the number of workshops held, the number of participants, or the types and numbers of leaflets printed. This is important information for monitoring, but not enough to convince anybody of any real progress/change in the long run.

To measure 'intangible' results, the most important thing is to describe who the change is for and how the interventions have affected specific people or organizations. Have workshops triggered a change in attitudes, choices, or actions? Or are the topics discussed in the workshop forgotten? How have organizations and people acted or reacted? Have new strategies been made? Systematic monitoring of social change often involves asking open-ended questions, taking note of different and often contradictory answers, and finding a way to document and report these findings. Social projects and programs, advocacy, awareness, and empowerment cannot be measured by documenting activities and assuming outcomes.

An 'intangible' result reported as 'Capacity building has enhanced organizational structures' does not come across as a credible result, but rather as a rephrased overall objective or goal. The statement contains two general ideas, but hides the result: what happened, the people/the organization involved, and how the result appears to those concerned. An example is needed to illustrate what the result may mean. Ask: who, what and how? Make the sentence active, for example: '3 members of the regional association of lawyers have revised the statutes of the organization X.'


Qualitative descriptions are less precise than quantitative measurements. This does not mean that qualitative descriptions/measurements are unreliable. Monitoring is not done to produce empirical evidence that is 100% precise, but to come up with information that is credible, relevant and good enough.

Change involves people and the choices they make as individuals and as groups. Political change, empowerment, awareness and capacity building are therefore bound to take place in many different forms. When identifying results, we need to measure HOW awareness, empowerment, etc. is perceived, not only THAT training or other activities took place.

Setting up a monitoring system

First of all, any monitoring system must be useful rather than impressive.

Check available literature and formats on the topic if necessary. Remember that these systems are often designed for large organisations. Start by picking only the essential parts. Make sure that systems and matrixes are few in number, simple and results-based. Start identifying and 'unwrapping' your buzzwords as early as possible; if necessary, rewrite your plan.

List the different requirements of different stakeholders (donors, government, partner organisations) with regard to what monitoring should cover. If possible, find a pragmatic compromise between these requirements and streamline the various information demands. Keep in mind that many donors are flexible and willing to accommodate your needs. Set up your monitoring system so that the required information can be easily accessed and compiled into the various reports.

Make sure everyone involved in program monitoring is able to access, understand and use the method. Management must actively support the chosen approach to monitoring in order for it to work effectively.

Below are some of the main steps that must be taken.

1. Start monitoring on a small scale and pilot (try it out) whenever possible. Do not overload the system with references to objectives and too many indicators.

2. Revise existing plans: If necessary, reformulate objectives and expected results (outcomes) so that they are more realistic and monitoring becomes easier (see previous chapters).

3. Select which outcomes/results to monitor: Disaggregate, and unwrap, each outcome; make it concrete and clear. Select only the most important outcomes to monitor.

Example where the outcome statement needs unwrapping: 'To increase organisational capacity (OC) of Organisation X'. Unwrap OC to specify key areas: What kind of OC? Where? Who? By when?

Making the outcomes clear already during the planning stage is crucial in order to be able to set indicators, baselines and targets by which to monitor and evaluate.

4. Make sure the monitored outcome reflects the relationship/level where the funding takes place. Examples of different relationships/levels: NPA supports a partner in their various programs, or NPA supports a partner in building the partner's organizational capacity, or NPA implements its own program (see step 6 in chapter 3).

5. Prepare and select the basic information needed in planning and reporting. Four basic levels of information are needed when starting a results-based PMR process (see also chapter 3):
• The activities: How are you going to reach the objective?
• The outcome: What change will the project achieve (plan) / has the project achieved (report)?
• The 'how do you know' information: the signs (indicators) showing how the process towards change will be/has been monitored.
• The overall reason why the project was started (objective).

Example illustrating the different types of information:

HOW (activities/output): 14 training seminars, 2,000 leaflets, etc.
WHAT (results/outcome): Year 1 (output): 70 girls have enrolled in higher education. Year 3: Approximately 25% of women in Y have their own mobile phones. Year 3 (unplanned): The divorce rate increased by 50% in Y.
HOW DO YOU KNOW (indicators), from the baseline: number of girls enrolled in higher education in Y: 13; women in Y have no or very little information about the law and their rights; women are not allowed to own mobile phones.
WHY (objective/impact): Awareness raised about gender equality among 100 women in Y province.

NPA recommends that a planning/monitoring/reporting process should start with a group process where the aim is to formulate and select key information using a simple format (e.g. as above). In PMR group work, use an old-fashioned paper flipchart and colour markers so that the group can follow the process: how overlapping information is deleted and concepts are unwrapped. A challenge is to keep phrases belonging to overall policies and strategies away from statements about results, indicators and activities (see also chapters 2 and 3). Distilling concepts and separating the different levels and terminology is essential in order to effectively monitor and document outcomes.


Some rules of thumb for this process:
• The statements must be as precise as possible. Be selective, avoid generalizations.
• Figures, timeframes, places and target groups must be presented at least at one of the levels.
• Keep in mind throughout the PMR process that the choices about program strategy, root causes and overall principles have already been made. A PMR workshop has a different and complementary agenda. Monitoring results must stay close to facts and the practical sides of a development program.

6. Select basic indicators

Indicators are the main tools for sound planning and monitoring, and essential in order to monitor result outcomes. Indicators are the 'footprints' that show where the project is moving, the signs that point towards progress or change in a program or project. Indicators show that the project is going in the expected direction, that nothing is happening, or that the project is having negative effects. They answer the key question in monitoring: 'How do you know the result will be/has been achieved?'

NPA requires indicators for the outcome level only

Some donors require indicators for objective, outcome and output levels. NPA recommends that indicators be worked out for the outcome level only. The reason for this is that indicators at objective/impact level tend to become broad and immeasurable, and often overlap in content with the outcome statement. Indicators at output level are in effect often merely the quantitative element of the output statement. For example, the finding '215 high school teachers have been trained in gender and human rights' is fine as it is, and does not need to be separated into two blocks of information (as output: training conducted, and as output indicator: 215 teachers trained).

Indicators often contain good information for communicating with others

Indicators are essential for making the program/project meaningful, also to others. They help ensure transparency, because they provide concrete facts about where we are on the path to reaching our objective at all stages. They allow us to gather systematic information about the project's progress without having to wait for the evaluation.

Basic indicators are, or should be, part of the baseline information. They allow monitoring to be based on what the situation was before the program started. For example, if the program aims to increase the participation of women in local government, the baseline would provide information about where women participated, how many, how often, and how. If a program/project is ongoing without a relevant baseline, this information has to be established as soon as possible.

Select few, manageable and good enough indicators

Identify key indicators as early as possible in the process. All involved project and program staff should brainstorm, and then select a few, but relevant, indicators according to the CREAM criteria: clear, relevant, economic, adequate, monitorable (see checklist). Take out indicators that are too ambitious, costly and difficult, even though they might be impressive.


Qualitative indicators

Descriptions, subjective views, opinions, observations and examples are all qualitative indicators. They are needed to measure whether the target group experiences or initiates any changes after an activity has taken place. While quantitative indicators are essential for short-term monitoring, qualitative indicators are important for medium and long-term monitoring. Qualitative indicators can, for example, tell us whether certain training has made any difference to the participants: changes in behaviour, attitudes, or actions. These indicators should be established early in the project life, and preferably as part of the baseline information.

All indicators must be specific, and disaggregated according to gender and/or other relevant groups (young/old, ethnic group, power holders, religion, etc.). This may require information to be collected separately for men and women, for different ethnic groups, for different age groups (e.g. children, youths, adults, elderly), and for different economic (e.g. rich, poor) and social groupings (for example agriculturists, pastoralists, businesses).

Standard indicators

NPA recommends that indicators primarily be selected based on the context rather than from universal lists. Pre-established general indicators often fail to reflect changes in a particular social context. They can be an invitation to do 'desk monitoring': ticking off indicators from a list without basis in the facts on the ground. Indicators should represent the bottom-up/actor's perspective of monitoring.

Quantitative indicators

Count whatever can be counted: the number of workshops held, the number of people who participated, the number of days, etc. Quantifiable indicators can also signal important results: e.g. an increase in the number of female voters could be a good indicator in programs where women's participation in politics is the aim. What is important here is that baseline information about the number of female voters before the program started is available.

Change takes time. Sometimes the only trace of a result at an early stage of a program/project is quantitative information. A completed activity or a group of participants after training are necessary facts, but they do not indicate a change for a group of people or an organization, i.e. a result. They can tell us that the organization implemented the activities it planned.


Page 32: NF_Observing_change2010_net

Suggestion for where to look for indicators for the outcome involving ‘Increasing women’s participation in politics’

Quantitative measurement: • Count the number of registered female and male voters• Number of women’s networks• Number of women in these networks, number of men

in these networks• Number of men supporting women’s rights/gender

issues

Qualitative indicators could be found here: • Women’s opinions about their own political participation,

changes• Women’s chances of being heard• Women’s opinions in current political issues• What is the role of the women’s networks in the

community?

exA

MP

leex

AM

Ple

Suggestion for where to look for indicators for the outcome ‘Advocacy/capacity building of …’

Quantitative measurement:
• Number/type of leaders who received information or participated in activities
• Number/kind of material produced and distributed
• Number of presentations or meetings held with opinion leaders
• Number and kind of media coverage that the presentation or meeting got
• Number/kind of people who got the information
• Number/kind of people who engage in networks/organizations that work with the topic

Qualitative measurement:
• What do leaders/opinion makers know about the topic? Ask this as a baseline, then at regular intervals.
• How many leaders (opinion makers) support the issue in public? Any changes?
• Have leaders changed their policy or practice as a result of the activities? Be specific!
• Have the messages/issues from the program been incorporated into the documents of decision makers? Which?
• Is it possible to measure/describe increased public support for these political themes? How?
• Have people become curious about the topic? Do you observe that attitudes have changed?
• Do certain groups of members/beneficiaries have new knowledge and interest in the topic/issue? How is this noticed?

These examples are adapted from Ann Kristin Johnsen’s presentation (NORAD).



These questions could be used for deciding indicators to measure ‘reducing violence against women’ (VAW)

Look for quantitative information, for example, here:
• Baseline: where are VAW cases registered? Monitor any changes/frequency in reported cases.
• The frequency of victims’ use of health services (clinics, private practitioners, other health workers)

Look for qualitative indicators, for example, here:
• Women’s diverse views on the severity of the problem. Be specific.
• Women’s attitudes and solutions. Be specific.
• Power holders’ knowledge about VAW. Note down their attitudes. Any changes over time?
• What are the reactions in the community? What is written in the media? What are the opinions of religious leaders?

When selecting indicators, keep in mind that you need them for doing actual monitoring. Stay realistic!

7. stories and cases as part of monitoring systems

Stories can be a good supplement to ordinary monitoring systems as they provide more in-depth qualitative data on results. They can help communicate and analyze changes in a program/project where the result is particularly relevant or interesting, where the exact effect is not possible to predict in a plan, or where the preset indicators cannot capture important changes. Stories, and the collection of them, also enable the program staff to monitor in a participatory way.

Stories are becoming increasingly important in documenting progress in social and political development work: cases, examples and quotes can provide good insight into a topic. The advantage of stories is that they place people as actors in the programs and projects, which is necessary if we wish to document how change affects people, and in order to get other people interested.

A classic situation is a result statement where there seem to be no people involved: ‘active participation of women in local governance has increased’. If this is indeed a good result, the statement deserves to be backed up by showing how the women participated. This could be done, for example, by combining quantitative information with a quote from an interview, included in the report in a text box:

result: After 4 years, out of 32 women who participated in the WCDI training, 21 continue to meet every month. ‘I was elected to the district council as the vice secretary. I was the first woman to hold such a position. Even my husband was proud of me,’ said Hortencia, 43, a shopkeeper in Esperanza.

A story can be based on examples, quotes, even a poem, a photograph with text, a life story, an interview with a staff member or a village chief.


Stories can illustrate the same project or result by showing several and/or contrasting views and perspectives. Stories make it possible to present voices that are not otherwise heard, including those that may be against the project. They can help make others understand a topic that is otherwise difficult to illustrate.

Stories may also be part of a baseline. A qualitative baseline may, for example, include one story describing the situation as it is now and another predicting what the situation might be in the future. These two stories can be complemented by similar stories over time, illustrating what has changed, and they may be important data when analyzing why change has taken place.

some challenges

Story writing is a creative task, and requires patience with the doers and users, as well as a willingness to put up with a period of trial and error. Story writing is a new and unstructured approach in PMR. Many program workers have been used to highly formalized, structured methods with matrixes and systems, and may be apprehensive of this method. Follow-up and encouragement from management is therefore essential. Program management is also responsible for making sure that time is set aside for trying out story writing as a method. Feedback loops (from the writing of the story, to comments from users, to the actual use, and feedback to the producer) are time consuming but important.

As with other parts of reports and plans, there is also a chance that stories that have been collected and written will not be used in reports or publications. This is a natural part of the selection that happens with all types of information, but it may demotivate staff, as creative effort has been invested. With practice and better skills, the quality of the stories will improve, and the areas where stories can be used will increase. The potential of story writing as a tool for monitoring and communication is vast. (See checklist on stories.)

Program type: ‘Capacity building of an organisation’
Baseline: Describes the organization as it was at the beginning of the program. It does not aim to answer questions about why the program was conceived, nor the justifications for the program, nor the general political or economic context at national level.

Start by narrowing down the scope of possible information:
1. Whose capacity is to be strengthened? (Individuals, parts of an organisation, the whole organisation, a network of organisations/institutions?)
2. What is the situation today for this organisation? Be specific. Develop qualitative and quantitative indicators here. In particular, consider availability of resources (financial, human, administrative, etc.), achievements, and leadership forms.
3. What are the direct challenges the program wishes to address?
4. What capacities are to be strengthened? (Strategies, management systems, production processes, financial resources, attitudes and values, leadership?)

(See the checklist on baselines.)


8. establish a baseline as soon as possible

A baseline is a concrete description of what the situation was at the start of the intervention, and of the context of the problem which the project/program aims to address. A useful baseline description contains relevant qualitative and quantitative indicators that can later tell us, during monitoring and evaluation, how far we have come in reaching the result. It does not aim to answer questions about why the program was conceived, nor the justifications for the program, nor the general political or economic context at national level.

Baseline information can be collected in different ways: from recent and relevant evaluations, surveys or research, or as a study undertaken as part of the program assessment. Information from the baseline is used throughout the program period to check whether progress is being made.

Baselines are the first measurements of the indicators. Collect only the information that the program staff is going to use, and that relates directly to the indicators that have been identified. (See checklist)
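Where a partner prefers to keep the baseline electronically, a minimal structure might look like the sketch below. It is purely illustrative (the indicator names, values, sources and dates are invented): each record ties one selected indicator to its first measurement, the source of the information and the date of collection, so that later monitoring rounds revisit exactly the same items.

```python
# Illustrative sketch only: the baseline as the first measurement of each
# selected indicator. Indicator names, values and sources are invented.

from datetime import date

baseline_records = [
    {
        "indicator": "registered female voters in district X",
        "type": "quantitative",
        "value": 1200,
        "source": "district election office",
        "collected": date(2008, 3, 15),
    },
    {
        "indicator": "women's own views on their political participation",
        "type": "qualitative",
        "value": "summarised from two focus group discussions",
        "source": "focus groups, village A and B",
        "collected": date(2008, 3, 20),
    },
]

def indicators_to_monitor(records):
    """List the indicators that later monitoring rounds must revisit."""
    return [record["indicator"] for record in records]

print(indicators_to_monitor(baseline_records))
```

A spreadsheet with the same columns serves the same purpose; the format matters less than keeping the baseline limited to the indicators that will actually be monitored.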

summary

NPA’s partnership strategy underlines partners’ autonomy over their programs and tools. Since different partners and programs have different preconditions and capacities, planning, monitoring and reporting (PMR) methods need to reflect these differences. NPA will not introduce a top-down system to be applied by all programs and partners. However, NPA requires that the content of the PMR information sent to NPA for processing satisfies a minimum standard. PMR documentation must reflect the situation ‘on the ground’, the variety among partners and target groups, and the different political and social experiences. The quantitative and qualitative information documenting results must be precise, and must complement, not repeat, the objectives or strategy chosen.

Monitoring means keeping track of negative and positive changes in a program, in a simple and systematic way. Monitoring is done to enhance programs and learning, and should not be a job for specialists. Monitoring and documenting is not only a technical formality; it can release creativity, new information and perspectives.

Practical and transparent monitoring is a fundamental part of implementing democratic principles in programs and organisations.


checklist

results

A result is: ‘The changed situation that arises for the target group/organization/partner after the activities have taken place.’ Results of the development work can be short-term or long-term. Results are always concrete. Results can be subjective. Results can be negative.

Results are monitored in the changed situation for a group of people. Results are monitored and documented at three levels: 1) NPA-supported capacity building/organizational development of partners, 2) NPA-supported partner programs, and 3) NPA’s own programs.

Planning, monitoring and reporting (PMR) systems need to be ‘good enough’, not perfect. If the system itself is too advanced, it is likely that it will not be used, or will be used only by an ‘expert’.

Results­based PMR is the responsibility of the management, and the task of everyone involved in programs.

Allow time for results. Avoid rushing, or wishing great results into existence. It may take several years before substantial change can be documented, especially in social programs. Reports that are based on activities and short-term effects (output) are therefore acceptable and expected, at least for the first years of a program.

Good indicators are essential in order to ensure the realistic monitoring of progress and results.

Social environments and communities differ, as do staff capacity, program size, type and complexity, people’s priorities, and results indicators. Standard indicators in social programs are therefore often not relevant, practical or recommended. Results should be monitored using a few ‘CREAM’ indicators (see checklist indicators) drawn from the context.


The measuring and monitoring of results must be done using quantitative and qualitative means.

Use the monitoring system you are used to, but make it simpler by keeping only the necessary parts. It takes time to get a system working as the users need to feel a certain ownership of it. A new system can work well, but can also create new problems and remove the sense of ownership.

Distinguish between monitoring and evaluation (M&E). Monitoring and evaluation functions are often seen together. NPA, however, separates the two methods in order to underline fundamental differences: monitoring is done pragmatically and systematically by all stakeholders, while evaluations apply more rigorous standards and can be done by ‘experts’.


checklist

bAselInes


narrow the focus

The baseline information is used throughout the program phase as the point of comparison for monitoring data. Therefore, concentrate on studying only the areas where you want to see results.

A baseline does not need to analyze matters at the level of the larger political picture. Note the difference between a program baseline, a political/national baseline study, and academic research. National surveys or academic research can constitute a good baseline for a development program. For most CS programs/projects we recommend mapping out the situation for limited communities/networks/organizations.

Identify the different sources of data.

Identify the most suitable method for data collection.

Discuss who will collect and who will analyze the data, and at what intervals.

Balance the cost of establishing the baseline (money, time, human resources, etc.) against available program resources.

Discuss how the data should be presented.

Discuss who will use the data.

Page 39: NF_Observing_change2010_net

the baseline

Describe the situation for the program in terms of its result areas.

Select a few good results indicators to be followed up during the entire project. (See checklist indicators).

Describe the actual situation for people (in communities or organizations), rather than using concepts from theory or ideology.

Use participatory methods where possible.

Use and present different perspectives: women, men, leaders, members of the organization, journalists, husbands, children, and the ‘man in the street’ in the local communities before analyzing their situation.

Identify risks, analyze possible responses.

Use simple language!

Possible pitfalls with baselines

Extensive and resource-consuming studies are initiated, which may never be completed.

The ambition of producing mind-blowing insight gets in the way of usefulness.


checklist

IndICAtOrs


Use the baseline for selecting results indicators.

Accept that measuring/describing change according to preset indicators will make negative results visible.

Decide whether to work with indicators at outcome level only (our recommendation).

Make sure the outcome statement in the plan is concrete enough for measurements to take place. Specify if necessary before choosing indicators.

Identify the main areas where measurements are essential and possible.

Avoid selecting only standard / global indicators.

Develop indicators that are concrete enough to be monitored. Make sure you can observe, count, smell, hear, etc. the indicator at all stages of the project cycle.

Choose indicators with different perspectives and different degrees of precision (triangulation).

Criteria for selecting result indicators: ‘CreAM’

Clear: Credible, specific and crystal clear about what you mean. No general terms. No buzzwords.

relevant: Indicators are for measuring or describing how far the expected result has been met, nothing else.

economic: Monitoring the indicators must not be too expensive or too demanding of the available human resources.

Adequate: The indicators seen together must be good enough to measure progress. Choose the right number of indicators in relation to how reliable they are, and how essential they are.

Monitorable: It must be possible to check indicators (in terms of time, logistics and ease), and they must be simple enough to interpret in a later analysis.


Quantitative or qualitative indicators?

Quantitative
Crucial for monitoring/documenting change, especially during the first years.

Often part of output information

To check program activities against budget

But ‘the number of activities completed’, ‘the number of people who participated’, etc. are most often not enough to show where the program is heading.

Qualitative
Qualitative indicators measure and describe the effect of the project/program on the particular beneficiaries/target group(s) (outcomes).

Qualitative indicators describe how the target group experiences certain components of the activity.

Can be subjective views, observations about changes in behaviour.

Indicators should point out who benefits from or is affected by the project. Disaggregate according to gender, age or social group.

Words like empowerment, awareness, democratic, solidarity, enhance, etc. become ‘buzz words’ if no description is offered. Buzz words can never be indicators. Unwrap!

Indicators can be part of the baseline information (ideal), can be added at a later stage (practical), or both.

Indicators add information about the particular program/project; they do not repeat or rephrase it.

Indicators should represent the exact situation in the context and the perspective of that particular organization, or target group/actors.


How and where to check on indicators:

Check daily/weekly/monthly reports.

File press reports/photographs.

Report from and record meetings.

Consider doing surveys/analyses/questionnaires.

Conduct group discussions.

Select key informants.

Make story telling a regular routine (see separate checklist).

Develop short standardized forms for recording essential quantifiable data.

File and summarize meeting reports (participants, conclusions).

Track changes.

Develop short matrixes for project visit reports, link to selected indicators, for staff at all levels.

Staff field diaries: noting particular events, comments and quotes that are not recorded in regular monitoring systems (unexpected results).

Start by making a selection from among these, and add other sources.


Pitfalls and challenges
Too many indicators are produced. Monitoring becomes practically impossible.

The information is too difficult to collect.

Possibility of bias (for example only asking those who are positive about the program in the first place).

Events other than those anticipated at the outset of the program cause more change in the indicator.

Possible solutions
Keep basic records short and concise; restrict the volume of information.

Limit number of indicators.

Use representative examples rather than generalizing findings.


checklist

lAnguAge


1. before you start writing:

Who are you writing for? Does your text match the reader?

Try putting yourself in the place of the reader/user: Would she/he understand your text?

Is the language you are writing in foreign to the reader? To you? If the answer is yes, show respect and keep it simple.

Be careful not to take ‘insiders’ terminology and knowledge of context and program for granted.

What does the reader need to know?

2. use simple language

Always ‘Keep It Simple and Smart’ (KISS).

Simple language prevents ambiguities and misunderstandings.

Latin, academic or specialized terminology, bureaucratic language or development buzzwords exclude many of those we would like to include.

Put people and action into the text. Avoid general, passive and impersonal phrases. Ask ‘who did what?’

example in which people and actions are clear: ‘Program staff conducted 5 training sessions with 13 council lawyers on gender-based violence (GBV)’, whereas the passive, impersonal version is: ‘Awareness raising on GBV took place.’


3. short is good

If you have many things to say, say one thing at a time. Separate the different messages.

Short sentences help make the message clear. A readable sentence has no more than 22-25 words!

Therefore, use a full stop after each message or whenever possible.

4. Prove your statements (‘show - don’t tell!’)

Be specific and clear, and use concrete examples to show what you mean. Showing means creating ‘pictures’ in the mind of the reader.

5. when the text is finished:

Read it to yourself. ‘Hear’ if you can understand your own written message. If you think it is unclear, others will too.

If you write PMR texts in your own language, make sure the translation to English remains close to the original in meaning.

example:
Meaning is told: ‘Environmental awareness has been raised.’
Meaning is shown: ‘People in community X have stopped chopping down whole trees for firewood.’


checklist

MOnItOrIng


what is monitoring?

Monitoring is part of everyday life and part of the project planning process.

Monitoring provides regular information about the achievement of results at a particular point in time.

Monitoring signals challenges and risks to be dealt with throughout the project period.

Monitoring means following up/checking if things are going according to plan.

Monitoring is an ongoing activity that takes place at all stages of the project cycle.

Monitoring links all parts of the program/project cycle and enables adjustments to be made in a methodological way.

Monitoring processes are initiated and ‘owned’ by program managers and implementers.

Monitoring is done by program and project staff, as well as target groups.

Monitoring may not explain why problems occur or why a program does not reach its planned outcome. This analysis is usually dealt with through separate discussions, reviews and/or evaluations.

Monitoring systems should be flexible, and adapted to the size/capacity of the organization.

Results are monitored at these levels: activities, output, outcome.

why monitor?

To check up on program/project in a systematic and productive way.

To be accountable to beneficiaries/target group, partner organization, members and donors.

To motivate stakeholders for further action.


To improve performance.

To improve results (outcome and output).

To improve learning.

To secure ownership of results and process (participatory methods, stakeholder involvement in the process).

To improve communication.

who does what in monitoring?

Programs and partners decide on their own planning, monitoring and reporting (PMR) systems/methods/formats.

NPA suggests a minimum standard for the information presented in these formats, and provides a set of simple tools for monitoring.

All stakeholders (target groups, organizations, partners, program staff and others) in the program/project should monitor progress and change according to set criteria and time intervals. Field staff (NPA and partner) are key monitors and producers of information.

while monitoring, keep in mind:

Monitoring means following up on progress and status regarding RESULTS. Results encompass all sorts of development in the program or project: positive, negative, according to plan, something entirely different, or no change at all.

Monitoring should be done according to selected indicators.

Monitoring is a job mainly done in the ‘field’; it is not a desk exercise.


Bottom up/actor’s perspective: the information is gathered at context/field level, and analyzed at program level. In NPA’s PMR approach, monitoring shows how the project/program works in the particular context. Monitoring information should not become ‘top down’, reflecting the overall policy/vision.

NPA encourages partners and its own staff to use simple planning/reporting frameworks. Logical formats and plans should be used in all phases of the project. They should be designed to be used, and to fit the size, type and complexity of the program/project.

Using simple language is crucial for monitoring as well as for other communication.

Monitoring is the responsibility of program management. NPA recommends that monitoring primarily be done by the practitioners at program and project level. Separate M&E units are not recommended, as this removes ‘ownership’ of monitoring and of the results. External consultants should be hired only to undertake evaluations.

Indicators are the lifeblood of sound planning and monitoring. Indicators should be selected from the program context. Indicators should be developed at planning stages (baseline), and should be regularly monitored at all stages (including evaluation) of the process. (See separate checklist.)

Choose monitoring methods

Daily/weekly/monthly reports?

Filing press reports/photographs?

Reporting from and recording meetings?

Surveys/analyses/questionnaires?

Group discussions?

Key informants?

Story telling?


Short standardized forms for recording quantitative data?

Meeting reports (participants, conclusions)?

Project visit and field trip reports according to chosen indicators for staff at all levels?

Staff field diaries: noting particular events, comments and quotes that are not recorded in regular monitoring systems (unexpected results)?

While selecting your methods, consider:

The indicators must be limited in number, meet the ‘CREAM’ criteria, and always provide the point of departure for good monitoring.

Resource availability, access, needs, time constraints.

What degree of precision is needed? Strictly necessary? Balance COST + EFFORT against TIME.

Combine different data collection strategies (triangulation).

Pitfalls and challenges
Too much data is produced.

Findings are aggregated, buzzwords used.

Systematic monitoring is not done because it is considered a desk job for ‘experts’.

Possible solutions
Keep basic records short and concise. Restrict the volume of information.

Limit number of indicators.

Use examples rather than generalizing your findings.

Knowledge about how the project/program works is found in the context, mainly with field staff. Monitoring happens here.

It is the responsibility of the program office to decide on the tools, analyze incoming data, and ensure feedback of information. Monitoring of program progress and results must be done by field/project staff (as well as program staff at all levels, incl. HO).


checklist

evAluAtIOns


what is an evaluation?

Part of the program cycle, and should be planned for from the start.

A periodic, often retrospective, assessment of an ongoing or completed program/ project/ organisation/ strategy.

Can cover the whole result chain from input to impact.

Can look at the relevance, performance, efficiency and impact of a piece of work with respect to its objectives, policies and strategies.

Is usually carried out at a significant stage in the program’s or project’s development, e.g. at the end of a planning period, as the project moves to a new phase, or in response to a particular critical issue.

Result chain: Input → Activity → Output → Outcome → Impact


difference between monitoring and evaluation (same as in checklist monitoring):

Monitoring
Looks mainly at activities/output and at outcome, never at impact.

Follows up activities and results in relation to preset indicators.

Continuous. Data are routinely collected.

Tracks progress against small number of predefined indicators.

Responsibility of the program management.

Monitoring is done by program or project staff.

evaluation
Looks at outcome and impact. Can also look at input, output and strategy/policy.

Is based on more formal surveys, interviews, field work, information from baseline, indicators, data from previous monitoring as well as findings from other relevant evaluations. Multiple sources of data.

Episodic, ad hoc.

Questions validity and relevance of predefined indicators.

Deals with wide range of issues decided by management.

Responsibility of program management, and/or HO and/or donor.

Evaluation is done by external evaluators, possibly with internal participant in the team.


Checklist for initiating an evaluation

Define purpose of evaluation.

Involve key stakeholders.

Define the questions you want the evaluation to answer.

Check already available information on the topic.

Estimate cost, set a budget limit.

Formulate the TOR.

Recruit consultant(s).

Present report structure and size.

Decide roles/tasks/necessary practical assistance during the research phase. Who will be involved in the evaluation process and how will they contribute?

Decide dissemination of report.

Questions to ask in the process:

Who will use the findings of the evaluation?

What kind of information is required?

What are the policy issues that should be addressed?

What other relevant similar evaluations can be consulted? Are the recommendations and findings relevant for your evaluation purpose?


Is the necessary baseline and monitoring data available?

Are the key informants (planners, project staff, target group representatives, etc.) available?

Are the required skills and qualifications of evaluators formulated in the TOR?

Do the evaluators satisfy requirements for independence and objectivity?

Are key stakeholders invited to respond to the draft report? How will comments be incorporated into the final report?

Disseminate results of evaluation to all interested parties.

role of evaluators

The evaluator’s role is to help clarify the purpose and realism of the desired results.

Evaluators can offer conceptual and methodological options.

Evaluators interpret the situation of the program or project and can suggest ways forward. They should give constructive criticism rather than be a judge of success or failure.

Evaluators provide feedback, generate learning, suggest direction, and help develop new measures and monitoring mechanisms.

The evaluator is a facilitator who brings critical thinking to the organisation.

The evaluator should encourage an urge for more learning, not a fear of failure.

(Adapted from SIDA 2007)


checklist

stOrIes


A story can be one of several indicators and can help to illustrate or document social programs where results might otherwise seem general or ‘intangible’.

stories as a method in planning, monitoring and reporting (PMR):

Stories easily communicate results and challenges across cultures, languages, professions, countries and programs.

The collection of stories is a participatory monitoring method, and a good tool for semi­structured dialogue with project/program beneficiaries.

Stories can identify and illustrate outcomes (positive, negative and/or unexpected).

Stories can build staff capacity in analyzing data and understanding outcome/impact.

Collecting stories helps monitoring where no indicators have been pre­defined.

Stories can be used in reports and evaluations to contextualize findings.

Stories help deliver a rich picture of what is happening, rather than deliver a standard description using numbers, general terms and standard indicators.

Stories as a tool for PMR require curiosity and interest, but no technical or professional skills or special techniques.

The story can be a masterpiece of literature, a quote to illustrate a point, or a photograph. A story does not need to be complete or have a happy ending.

A story can serve to start discussions about a certain approach and/or to illustrate the complexity of social change.

Choose. Be creative!


selecting a topic for a story

Anything that contributes to better understanding an approach, point of view or dilemma connected to the project/program is relevant.

The story could illustrate what the organization/partner/participants/members do.

The story could be part of monitoring: when different people are interviewed about the same thing, different views about the program/partner initiative are exposed. This is essential in monitoring, and can be an interesting topic for a story. What does the result, or the lack of a result, of an activity or approach look like when seen from different perspectives?

Observations and/or written material produced by the program or outside (reports, clippings from newspapers, etc.) can be used when selecting a topic for a story.

Take notes during a project visit/interview/meeting that can be used in a story describing the place, atmosphere, number of men/women attending, body language, expressions, etc.

Checklist for the story

‘Show, don’t tell’: describe the context, avoid buzzwords. Quote the people telling their stories as directly as possible, using their phrases.

Be sure to give the names of both the person doing the interviewing and the person being interviewed, as well as the date, the place and the name of the program/project.

Take photos and, with reference to the story, note the name of the person(s), place, date, etc. Ask permission to use the story and photos.

If you wish to use story writing as a tool, these questions must be answered: who did or said what, when and why, and why is the story important?

Have fun!


referenCes

INTRAC Praxis Series No. 1, 2003/2008, Bakewell: ‘Sharpening the Development Process. A Practical Guide to Monitoring and Evaluation.’

Rick Davies and Jess Dart: ‘The Most Significant Change Technique’. http://www.mande.co.uk/docs/MSCGuide.pdf

SIDA 2007: ‘Looking back, Moving forward’. http://www.sida.se/PageFiles/3736/SIDA3753en_Looking_back.pdf

NORAD 2008: ‘Results Management in Norwegian Development Cooperation. A practical guide’. http://www.norad.no/Results_Management_in_Norwegian_Development_Cooperation[1].pdf

NAD (Norwegian Association of Disabled) 2005/6: ‘Handbook on Results Based Planning and Reporting’. http://www.norad.no/en/Tools+and+publications/Publications/Publication+Page?key=109837


Your personal

nOtes


© Norwegian People’s Aid 2010
Writer/editor: Kjersti Berre
Editorial assistant: Helle Berggrav Hanssen
Illustrations: Per Ragnar Møkleby
Design and layout: Magnolia design as
Print: Fladby as


POB 8844 Youngstorget
N-0028 Oslo
Norway

Phone +47 22 03 77 00
Fax +47 22 20 08 70
E-mail [email protected]
www.npaid.org