
 

 WIPO INDEPENDENT EVALUATION GUIDELINES  

Evaluation Section 

IAOD 


WIPO INDEPENDENT EVALUATION GUIDELINES

April 2010


TABLE OF CONTENTS 

ACRONYMS...................................................................................................................3

INTRODUCTION ............................................................................................................4

CHAPTER 1: THE CHANGING CONTEXT OF EVALUATION..............................................6
1. THE EVOLVING ROLE OF THE EVALUATION FUNCTION............................................6
2. HOW EVALUATION WORKS IN THE UN..........................................................7
3. EVALUATION IN THE UN SPECIALIZED AGENCIES AND FUNDS.....................................8
4. THE INDEPENDENT EVALUATION FUNCTION IN WIPO.............................................8

CHAPTER 2: INTRODUCTION TO THE EVALUATION FUNCTION........................................11
1. DEFINING EVALUATION....................................................................11
2. THE PURPOSE OF INDEPENDENT EVALUATION..................................................15
3. PRINCIPLES OF WIPO INDEPENDENT EVALUATIONS.............................................16
4. THE PROGRAM/PROJECT CYCLE AND EVALUATIONS..............................................18
5. INDEPENDENT EVALUATION IN WIPO VS RESULTS BASED MANAGEMENT SYSTEM......................21
6. KEY ROLES AND RESPONSIBILITIES.........................................................23

CHAPTER 3: INDEPENDENT EVALUATIONS........................................................28
1. TYPES OF EVALUATIONS WITHIN WIPO.......................................................28
2. REASONS FOR INDEPENDENT EVALUATIONS....................................................31
3. TIMING OF EVALUATIONS..................................................................31
4. INDEPENDENT EVALUATION PRODUCTS........................................................32

CHAPTER 4: APPROACH TO INDEPENDENT EVALUATIONS IN WIPO....................................36
1. STEP ONE: PLANNING AND SCOPING THE EVALUATION..........................................36
2. STEP TWO: ASSESSING EVALUABILITY.......................................................52
3. STEP THREE: STRENGTHENING THE EVALUATION PROCESS.......................................55
4. STEP FOUR: REPORT PREPARATION..........................................................61
5. STEP FIVE: EVALUATION REPORT DISSEMINATION.............................................62
6. STEP SIX: EVALUATION IN USE............................................................64

 

ANNEXES 

ANNEX 1: REFERENCES................................................................................................67

ANNEX 2: GUIDELINES GLOSSARY ...............................................................................70

ANNEX 3: EVALUATION SECTION CODE OF CONDUCT FOR EVALUATION CONSULTANTS....................82

ANNEX 4: EVALUATION SECTION CODE OF CONDUCT ‐ EVALUATION CONSULTANTS AGREEMENT FORM.....................................................................................................86

ANNEX 5: EXAMPLE OF PRIMARY AND SECONDARY DATA COLLECTION METHODS ....87

ANNEX 6: EXAMPLE OF FINAL INDEPENDENT EVALUATION REPORT STRUCTURE........89


ACRONYMS

ADG     Assistant Director General
CLE     Country Level Evaluation
DAC     Development Assistance Committee
Danida  Danish International Development Assistance
DDG     Deputy Director General
DFID    UK Department for International Development
DG      Director General
ES      Evaluation Section
FAO     Food and Agriculture Organization of the United Nations
GEF     Global Environment Facility
IAEA    International Atomic Energy Agency
IAOD    Internal Audit and Oversight Division
IBRD    International Bank for Reconstruction and Development
ICAO    International Civil Aviation Organization
IFAD    International Fund for Agricultural Development
IFC     International Finance Corporation
ILO     International Labour Organization
IMF     International Monetary Fund
IMO     International Maritime Organization
IP      Intellectual Property
ITU     International Telecommunication Union
JIU     Joint Inspection Unit
LDCs    Least Developed Countries
NGO     Non-governmental Organization
OECD    Organization for Economic Co-operation and Development
OIOS    Office of Internal Oversight Services
OPCW    Organization for the Prohibition of Chemical Weapons
PBD     Program and Budget Document
PPBME   Program Planning, the Program Aspects of the Budget, the Monitoring of Implementation and the Methods of Evaluation
RBM     Results-Based Management
ToR     Terms of Reference
UK      United Kingdom
UN      United Nations
UNDP    United Nations Development Program
UNEG    United Nations Evaluation Group
UNESCO  United Nations Educational, Scientific and Cultural Organization
UNFIP   United Nations Fund for International Partnerships
UNIDO   United Nations Industrial Development Organization
UNWTO   World Tourism Organization
UPU     Universal Postal Union
USA     United States of America
WHO     World Health Organization
WIPO    World Intellectual Property Organization
WMO     World Meteorological Organization
WTO     World Trade Organization


INTRODUCTION

1. Since the establishment of the Evaluation Policy (2007) and the Evaluation Section (2008), the development of the Evaluation Guidelines (hereinafter referred to as the "Guidelines") has been a priority for the institutional establishment of an evaluation function. As indicated in the 2007 WIPO Evaluation Policy, § 22(c), "…the Evaluation Section develops and updates, on a regular basis, evaluation strategies, procedures, methodologies and guidelines applicable, in line with developments and good practice both within and outside the UN System".

2. The Guidelines serve a dual purpose. On the one hand, they guide the work of staff in the Evaluation Section and of external experts contracted by IAOD for specific evaluation assignments. On the other hand, they introduce WIPO staff and other key stakeholders, in a transparent and user-friendly manner, to the evaluation practices applied by the Evaluation Section in all its independent evaluations.

3. These Guidelines will ensure that independent evaluations are conducted in an objective, impartial, open and participatory manner, based on empirically verified evidence that is valid and reliable, with results made available to the public, and with due respect and regard for those being interviewed. Furthermore, the Evaluation Section applies a realistic and utilization-focused approach that takes the special WIPO context into consideration.

4. These Guidelines are based on the WIPO Evaluation Policy and the 2010-2015 Evaluation Strategy. Their preparation was informed by the UN Evaluation Group (UNEG) Norms and Standards, the OECD/DAC criteria for evaluation, and feedback received from WIPO staff through a survey conducted by IAOD. These Guidelines are the first of their kind for the Organization.

5. The Guidelines introduce users to the evaluation function and take them step by step through the Evaluation Section's approach to designing, managing, reporting on and responding to an independent evaluation.

6. The Guidelines will be reviewed no later than three years after their approval by the Director, IAOD, taking into account lessons learned from their implementation and international developments in the evaluation profession.

7. The Guidelines are divided into four key chapters:

8. Chapters 1 and 2 have been developed to foster a common understanding of the evaluation function. They look at the changing context of evaluation over the years and explain how evaluation has been positioned within the UN System.

9. Chapter 3 provides an introduction to independent evaluations, their purpose and utility.

10. Chapter 4 guides readers through the approach used by the Evaluation Section in all its independent evaluations.

11. In the Annexes, the Evaluation Section provides a glossary of definitions used throughout the Guidelines, as well as some tools that are used when undertaking an evaluation exercise.


CHAPTER 1 

The Changing Context of Evaluation …………………………………………………………………………….  

IAOD Evaluation Section 


CHAPTER 1: THE CHANGING CONTEXT OF EVALUATION

1. THE EVOLVING ROLE OF THE EVALUATION FUNCTION

12. UNICEF in its Evaluation Working Paper on “New Trends in Development Evaluation” describes the following:

The objective of evaluation, in the context of international development, has been to assess the results of an activity, initiative, project, program, strategy, policy, unit or sector (hereinafter referred to collectively as "activities"). According to Cracknell (1988), in the 1950s evaluation began to be implemented in US-based organizations, focusing on appraisal rather than evaluation. Agencies were trying to design projects according to logical models, which focused on project outputs rather than outcomes and impacts. In the 1970s, the Logical Framework Approach (LFA)1 was developed as a management tool.

Figure 1: Evaluation over the years

Source: Segone, 1998 (adapted by J. Flores 2010)

During the 1980s, international agencies began institutionalizing evaluation by establishing independent evaluation units mainly as an accountability tool to satisfy both public opinion and the government’s need to know about how public aid funds were used. At this stage, international organizations became more professional in carrying out evaluations focused on the long-term impact of activities. In the 90s, the trend was for agencies to internalize the meaning of and the need for the evaluation function within an organization. Agencies started recognizing the relevance and importance of evaluation as a strategic tool for knowledge acquisition and construction with the aim of facilitating decision-making and organizational learning. Nevertheless, resources allocated to evaluation units were still insufficient to meet objectives satisfactorily. Overall, emphasis is given to the evaluation process as a tool for individual and organizational understanding and learning, without overlooking the need for accountability.

In this context, participatory and empowering evaluation represents an interesting development in approach and methodology aimed at achieving different objectives. For example, Kushner (2006) suggests that the basic problem in the previous periods was that organizations and governments were learning WHAT results were being achieved, but neither HOW they were being achieved nor WHAT was being achieved that fell outside of the results matrix. Nowadays, there is a need to learn about change processes, principally so as to be able to build on the strengths of innovation and to replicate success.

1 See Chapter 4, Section 1.5 for the definition of the “Logical Framework Approach”


13. More and more organizations and stakeholders2 are seeking responses to the following questions:

So what? Are we doing the right things? And are we doing them right?

14. Most organizations have realized that they cannot answer the above questions through their usual monitoring instruments (for example, their results-based frameworks or their program performance reporting), and that evaluations are an indispensable tool if the above questions are to be answered.

2. HOW EVALUATION WORKS IN THE UN

15. As indicated in the “UNEG Institutional Arrangements for Governance Oversight and Evaluation in the UN”:

The United Nations system consists of various entities with diverse mandates and governing structures that aim to engender principles such as global governance, consensus building, peace and security, justice and international law, non-discrimination and gender equity, sustained socio-economic development, sustainable development, fair trade, humanitarian action and crime prevention. The heterogeneity of mandates of the UN System organizations, covering normative, analytical and operational activities, combined with the requirement for evaluation to be carried out at different levels within each organization, has resulted in a diverse set of arrangements for the management, coordination and/or conduct of evaluation in the UN. In some cases, a dedicated evaluation entity has been established; in other cases, the evaluation entity sits within the organization's oversight entity. Others have established the evaluation entity within a program management, policy, strategic planning or budgeting entity, and yet others have established it within an entity dedicated to research, learning, communications or other operational functions. A few have yet to establish any kind of evaluation capacity, but their management or operational staff may nevertheless be involved in the conduct of self-evaluations. The regulations that currently govern the evaluation of United Nations activities were promulgated on 19 April 2000 in the Secretary-General's bulletin (PPBME). Similar regulations and policies have been issued in recent years in several UN system organizations. The autonomous organizations that are part of the UN system are each governed by their own regulations and policies. In 2005, the heads of evaluation of 43 UN entities, under the auspices of the UN Evaluation Group (UNEG), adopted a common set of norms and standards for evaluation in the UN system.

2 Stakeholders’ definition can be found in Chapter 1, Section 4 of the Guidelines.


3. EVALUATION IN THE UN SPECIALIZED AGENCIES AND FUNDS

16. The UNEG Reference Document on “Evaluation in the UN System” provides an overview of how evaluation has been positioned within the UN specialized agencies and funds:

Most of the autonomous specialized agencies have well-established evaluation offices, each with its own evaluation policy. Some of these (IFAD, IBRD, IMF, IFC and GEF) have established central evaluation offices with a high degree of independence; i.e., the evaluation offices report directly to the Executive Boards, have heads of evaluation appointed by the Board, and have budgets approved independently by the Board. Several (ILO, FAO, UNESCO, WIPO, WHO, WMO and UNIDO) have established central evaluation offices or units with operational independence and a clear evaluation policy in line with UNEG norms and standards, reporting to the Heads of the Organization, if not directly to the Governing Bodies. A few (ICAO, UPU and IAEA) have no central evaluation capacity but have evaluation capacity decentralized, or embedded, within management or operational structures. Finally, a few do not seem to have any established evaluation capacity: ITU, IMO, OPCW, UNFIP, UNWTO (Tourism) and WTO (Trade).

4. THE INDEPENDENT EVALUATION FUNCTION IN WIPO

17. Evaluation has been a part of WIPO's processes since 1998 and became part of the Internal Audit and Oversight Division (IAOD) in 2000, when IAOD was established to unify the four important oversight functions of Internal Audit, Investigations, Evaluation and Inspection (see Figure 2), which in the past had been undertaken separately. In 2007, the Director General approved an Evaluation Policy for WIPO. Since then, the independent evaluation function has evolved from a programmatic function, whose role was mainly to assist program managers with the preparation of their programs' self-assessment reports and the design of results and indicators, to a more evaluation-focused one, ensuring the independence and quality of evaluations and promoting an understanding of evaluation as an accountability and learning tool. The Evaluation Section reports directly to the Director of Internal Audit and Oversight (hereinafter referred to as the "Director, IAOD"). A small evaluation team of two staff has been budgeted for the IAOD Evaluation Section. In 2009, the Section developed a Medium-Term Strategic Plan for 2010-2015, in consultation with key stakeholders. The strategy aims to provide guidance to the Section's staff and to facilitate the operationalization of the WIPO Evaluation Policy.

Figure 2: Oversight Functions


18. During the last two years, the Evaluation Section has focused its efforts on the establishment of a comprehensive institutional framework for the management of the independent evaluation function. This has been a crucial step in ensuring an effective process for undertaking independent evaluations and creating a common understanding of the evaluation function. As part of establishing this framework, and in order to make the Evaluation Policy operational, the Evaluation Section has developed the present Guidelines. In addition, in 2009, the Evaluation Section developed the "WIPO Self-Evaluation Guidelines", which are intended for internal use by program managers.


CHAPTER 2 

Introduction to the Evaluation Function …………………………………………………………………………….  

IAOD Evaluation Section 


CHAPTER 2: INTRODUCTION TO THE EVALUATION FUNCTION

1. DEFINING EVALUATION

An evaluation is an assessment, as systematic and impartial as possible, of an activity, project, program, strategy, policy, topic, theme, sector, operational area, institutional performance, etc., whether financed from the regular budget or from extra-budgetary resources3 (hereinafter referred to collectively as "activities"). It focuses on expected and achieved accomplishments, examining the results chain, processes, contextual factors and causality, in order to understand achievements or the lack thereof. It aims at determining the relevance, impact, effectiveness, efficiency and sustainability of WIPO activities and contributions. Evaluation provides evidence-based information that is credible, reliable and useful, enabling the timely incorporation of findings, recommendations and lessons into the decision-making processes of WIPO and its Member States4.

19. Over the last 50 years, the evaluation function has evolved, and several terminologies have come to be associated with evaluation. WIPO, too, has gone through this evolution, which has partly led to some confusion about the meaning of evaluation among its stakeholders and staff. Figure 3 provides a snapshot of the situation found in WIPO at the end of 2008. These Guidelines are partly intended to clarify the meaning of evaluation in order to differentiate it from other practices. IAOD has therefore included in the Guidelines a list of common terms that are sometimes associated with evaluation. Most of the definitions below have been taken from internationally recognized organizations such as the OECD/DAC, UNEG, JIU and UNDP, and slightly adapted for WIPO.

Figure 3: Which way is the right way?

3 Evaluation of extra-budgetary activities may be carried out at the request of, and in cooperation with, concerned parties.
4 This definition draws on Regulation 7.1 of Article VIII of ST/SGB/2000/8 and from the widely accepted Principles for Evaluation of the Development Assistance Committee of the Organization for Economic Co-operation and Development (OECD/DAC).


Accountability

20. Accountability relates to the obligations of organizations to act in accordance with clearly defined responsibilities, roles and performance expectations, and to justify expenditures, decisions and the results of the discharge of authority and official duties, including duties delegated to a subordinate unit or individual. For program and project managers, it is the responsibility and obligation to provide a true and fair view of performance and the results of operations, as well as evidence to stakeholders that a program or project is effective and conforms with planned results and legal and fiscal requirements. In organizations that promote learning, accountability may also be measured by the extent to which managers use and ensure credible monitoring and evaluation findings and reporting. For public sector managers and policy-makers, accountability is to taxpayers/citizens. For WIPO, accountability is to its Member States.

Appraisal

21. An overall assessment of the relevance, feasibility and potential sustainability of a development activity prior to a funding decision. In international development terms, appraisal means a critical assessment of the potential value of an activity before a decision is made to implement it, in order to determine whether the activity represents an appropriate use of the Organization's resources.

Audit

22. Internal auditing is an "independent, objective assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control, and governance processes".5

23. The purpose, authority, and responsibility of the internal audit activity are formally defined in the internal audit charter, consistent with the Definition of Internal Auditing, the Code of Ethics, and the Standards promulgated by the Institute of Internal Auditors (IIA).

Inspection

24. According to the UN Joint Inspection Unit (JIU) definition, "an inspection is an independent, on-site review of the operations of organizational units to determine the extent to which they are performing as expected. An inspection examines the functioning of processes or activities to verify their effectiveness and efficiency. An inspection compares processes, activities, projects, and programs to established criteria (e.g., applicable rules and regulations, internal administrative instructions, good operational practices of other units within or outside the organization concerned), and does so in view of the resources allocated to them".

Investigation

25. "Investigation is a legal inquiry into the conduct of, or action taken by, an individual or group of individuals, or into a situation or occurrence resulting from accident or force of nature. An investigation pursues reports of alleged violations of rules, regulations and other established procedures, mismanagement, misconduct, waste of resources or abuse of authority, with a view to proposing corrective management and administrative measures, as appropriate, and bringing the matter to the attention of suitable legal authorities and/or internal offices of investigation. An investigation compares the subject under investigation to established criteria (e.g., rules and regulations, codes of conduct, administrative instructions and applicable law)". JIU (1978).

Monitoring

26. A continuous function undertaken by program and project staff during the implementation of an activity. Monitoring aims primarily to provide managers and main stakeholders with regular feedback and early indications of progress, or the lack thereof, in the achievement of intended results. Monitoring tracks actual performance against what was planned or expected according to pre-determined standards. It generally involves collecting and analyzing data on implementation processes, strategies and results, and recommending corrective measures. The UNEG, in its paper on the "Distinctiveness of Evaluation", has highlighted the key differences between monitoring and evaluation (see Table 1 below).

5 Source: The Institute of Internal Auditors.

Table 1: Distinctiveness of Monitoring and Evaluation

Monitoring: Continuous – managers monitor work progress regularly. Significant deviations from the implementation plan are identified. Alerts when an evaluation might be necessary to examine why deviations have occurred and what corrective actions should be taken.
Evaluation: Periodic – carried out at critical stages of an activity so that improvements can be made on time and results are reported to ensure accountability and learning.

Monitoring: Alerts managers to issues and challenges and provides opportunities to select and decide on options.
Evaluation: Provides objective analysis and feedback which helps managers to decide on strategic and policy options. Answers the questions: Why and how? What has changed as a result of the activity in question? So what? Are we doing the right things? Are we doing them right?

Monitoring: Keeps track of implementation progress, generally at the output level.
Evaluation: Analyses the outcomes of implementation.

Monitoring: Collection of information on a routine basis to track implementation progress and to initiate corrective measures in a timely manner.
Evaluation: Analyzes performance information to arrive at logical conclusions on efficiency, effectiveness and impact.

Monitoring: Measures efficiency to determine: "Is the activity doing things right?"
Evaluation: Measures effectiveness and impact to determine: "Did the activity do the right things?"

Monitoring: An internal activity, undertaken by those who have primary responsibility for the implementation of an activity.
Evaluation: An external activity, undertaken by those who have had no involvement in the design or implementation of the activity, and whose chief role is to objectively ascertain the effectiveness, efficiency, relevance and impact of the activity. Evaluation judgments should contain findings that are backed by evidence.

Source: UNEG (2008), adapted by J. Flores (2010).

Oversight

27. Oversight refers to a key activity of the governance and management of an organization, ensuring that the organization and its component units perform in compliance with legislative mandates and policy, with full accountability for their finances as well as for the efficiency, effectiveness and impact of their work, with adherence to standards of professionalism, integrity and ethics, and while adequately managing and minimizing risk. As in many UN organizations, the WIPO independent evaluation function has been positioned under the Internal Audit and Oversight Division (IAOD) in order to unify the four important oversight functions of Evaluation, Audit, Inspection and Investigation.

Performance

28. The degree to which an activity operates according to specific criteria, standards or guidelines, or achieves results in accordance with stated goals or plans.


Performance assessment: 29. Independent assessment or self-assessment by program, comprising outcome, program, project or individual monitoring, reviews, end-of-year reporting, end-of-project reporting, institutional assessments and/or special studies. The Program Performance Report (PPR) conducted by WIPO on a biennial basis is an example of a self-assessment performance exercise. Performance measurement 30. The collection, interpretation of, and reporting on data for performance indicators which measures how well the activity delivered outputs and contributes to achievement of higher level aims (purposes and goals). Performance measures are most useful when used for comparisons over time or among units performing similar work. Performance measurement is also system for assessing performance of development initiatives against stated goals. Also described as the process of objectively measuring how well an agency is meeting its stated goals or objectives.  Program 31. The Program Management Institute of the UK describes a program as a group of related projects managed in a coordinated way to obtain benefits and control not available from managing them individually. Programs may include elements of related work outside the scope of the discrete projects in a program. Within WIPO a program could be a group of activities, projects or services with defined date that are intended to deliver specific results that are meant to contribute to a higher objective or strategic level goal. Program Management 32. Program Management is the process of managing several related projects, often with the intention of improving an organization's performance. Program management also emphasizes the coordinating and prioritizing of resources across projects and activities, managing links between the projects and the overall costs and risks of the program in order to achieve the most effective and efficient way to deliver the desired outcomes and impacts. 
The program manager has to manage negotiations between stakeholders while balancing all stakeholders' interests, at a level that is typically far wider than that faced by a project manager. Program managers also have the responsibility of managing project managers.

Project

33. According to Nokes (2007), "a project is a temporary endeavor, having a defined beginning and end, undertaken to meet unique goals and objectives", usually to bring about beneficial change or added value.

Project Management

34. The process of leading, planning, organizing, staffing and controlling activities, people and resources in order to achieve particular program objectives and outputs.

Results-Based Management (RBM)

35. A management strategy or approach by which an organization ensures that its processes, products and services contribute to the achievement of clearly stated results. RBM provides a coherent framework for strategic planning and management. It is also a broad management strategy aimed at achieving important changes in the way agencies operate, with improving performance and achieving results as the central orientation, by defining realistic expected results, monitoring progress towards their achievement, integrating lessons learned into management decisions and reporting on performance. In its paper "Distinctiveness of Evaluation", UNEG has established the difference between RBM and evaluation; this is presented in Chapter 2, Section 5 of these Guidelines.
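Performance measurement, as defined above, compares reported indicator values against baselines and targets. The sketch below illustrates the underlying arithmetic only; the indicator names and figures are invented for illustration and are not drawn from any WIPO Program and Budget document.

```python
# Illustrative sketch of performance measurement: comparing reported
# indicator values against agreed baselines and targets. All names and
# figures below are hypothetical.

def achievement_rate(baseline: float, target: float, actual: float) -> float:
    """Share of the planned change (target - baseline) actually delivered."""
    planned = target - baseline
    if planned == 0:
        return 1.0  # no change was planned; treat as fully achieved
    return (actual - baseline) / planned

indicators = [
    {"name": "Training workshops delivered", "baseline": 0, "target": 20, "actual": 15},
    {"name": "Average processing time (days)", "baseline": 30, "target": 20, "actual": 24},
]

for ind in indicators:
    rate = achievement_rate(ind["baseline"], ind["target"], ind["actual"])
    print(f'{ind["name"]}: {rate:.0%} of planned change achieved')
```

Note that the same formula handles indicators meant to decrease (such as processing time), since progress is measured relative to the planned direction of change.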



Research

36. A systematic examination designed to develop or contribute to knowledge.

Review

37. A periodic or ad hoc assessment of the performance of an activity. "Evaluation" is used for a more comprehensive and more in-depth assessment than "review"; reviews tend to emphasize operational aspects. 6

2. THE PURPOSE OF INDEPENDENT EVALUATION

38. The main purpose of the independent evaluation function at WIPO is to promote and ensure substantive (rather than financial) accountability of the investments made, and to serve as a basis for learning, to improve the relevance and quality of future actions.

39. Within the specific context of the UN, the independent evaluation function at WIPO helps to ensure the accountability of WIPO and of its managers and staff to Member States, as well as to national stakeholders (particularly national governments). At the same time, it supports reflection and learning by Member States, management and staff, as well as national stakeholders, on the relevance, effectiveness, efficiency, impact, sustainability, coordination, coherence and coverage of WIPO activities, so as to be able to improve on them.

40. Evaluation serves this dual purpose through the provision of reliable and credible evaluative evidence and analyses. It informs Member States, the Director General, program managers, staff and national stakeholders about WIPO's activities and their impact. These evaluation outputs are provided in the form of evaluation reports, briefings, various information exchanges and other evaluation products, including the act of conducting or participating in the evaluation itself. To be of use, they have to be provided in a timely manner, in relation to the Organization's program planning, budgeting, implementation and reporting cycles.

41. Because evaluation has to simultaneously support both accountability and learning at different levels of governance, oversight, management, and operations, the conduct of evaluation has to be carried out at these different levels within WIPO.

42. WIPO’s evaluation approach has been developed following internationally accepted evaluation norms and principles. 7 It also takes into account the specific features that distinguish WIPO from other UN agencies, since it is a fee-for-service-based organization. In particular, WIPO has an evolving but not yet fully effective system for measuring the performance of its operations and supported activities, no field presence, and limited resources available for monitoring activities and learning from operations. All of these have implications for the independent evaluation function at WIPO. The Evaluation Section must therefore ground its evaluations in extensive fieldwork and generate much of the evaluation-based knowledge that WIPO requires in order to learn from past operational experience and improve future operations.

43. In addition to the above, the Evaluation Section provides guidelines and technical inputs for enhancing the capacity of WIPO operational sectors, departments, divisions, units and sections.

6 Adapted from OECD/DAC (2002). Glossary of Key Terms in Evaluation and Results Based Management. Paris, France. 7 As set down in the UNEG Norms, Standards, Code of Conduct and Ethical Guidelines, and in OECD/DAC (1991). Principles for Evaluation of Development Assistance. OECD, Paris.



3. PRINCIPLES OF WIPO INDEPENDENT EVALUATIONS

44. Evaluation principles have evolved over time, and organizations such as the OECD/DAC and UNEG, as well as evaluation societies, have developed a series of principles. The following principles are followed by the Evaluation Section and applied in all its evaluation work.

Usefulness

45. Proper application of the evaluation function implies that there is a clear intent to use evaluation findings. In a context of limited resources, the planning and selection of evaluation work has to be done carefully. Evaluations must be chosen and undertaken in a timely manner so that they can, and do, inform decision-making with relevant and timely information.

Impartiality

46. Impartiality is the absence of bias in due process, methodological rigor, and the consideration and presentation of achievements and challenges. It also implies that the views of all stakeholders are taken into account. In the event that interested parties have different views, these are to be reflected in the evaluation analysis and reporting.

47. Impartiality increases the credibility of evaluation and reduces bias in data gathering, analysis, findings, conclusions and recommendations. Impartiality gives legitimacy to evaluation and reduces the potential for conflict of interest.

48. The requirement for impartiality exists at all stages of the evaluation process, including the planning of evaluation, the formulation of mandate and scope, the selection of evaluation teams, the conduct of the evaluation and the formulation of findings and recommendations.

Independence

49. The IAOD evaluation function has to be located independently from other management functions so that it is free from undue influence, enabling unbiased and transparent reporting. IAOD has full discretion to submit its reports directly for consideration at the appropriate level of decision-making pertaining to the subject of evaluation.

50. The Evaluation Section staff have the independence to supervise and report on evaluations, as well as to track follow-up of management’s responses to evaluations.

51. To avoid conflict of interest and undue pressure, evaluators need to be independent. This implies that members of an evaluation team must not have been directly responsible for the policy-setting, design or overall management of the subject of evaluation, nor expect to be in the near future.

52. Evaluators must have no vested interest and must have full freedom to conduct their evaluative work impartially, without potential negative effects on their career development. They must be able to express their opinions freely.

53. The independence of the evaluation function should not impede the access that evaluators have to information on the subject of evaluation.

Quality of Evaluation

54. Each evaluation should employ design, planning and implementation processes that are inherently quality oriented, covering appropriate methodologies for data collection, analysis and interpretation.

55. Evaluation reports must present the evidence, findings, conclusions and recommendations in a complete and balanced way. They must be brief, to the point and easy to understand.



They must explain the methodology followed, highlight the methodological limitations of the evaluation, key concerns and evidence-based findings, dissenting views and the consequent conclusions, recommendations and lessons. They must have an executive summary that encapsulates the essence of the information contained in the report and facilitates the dissemination and distillation of lessons.

Competencies for Evaluation

56. Evaluators must have the skills and competencies to conduct evaluation studies and to manage externally hired evaluators.

Transparency and Consultation

57. Transparency and consultation with the primary stakeholders are essential features at all stages of the evaluation process. They improve the credibility and quality of the evaluation, and can facilitate consensus building and ownership of the findings, conclusions and recommendations.

58. Evaluation terms of reference and reports should be available to major stakeholders and be public documents. Documentation on evaluations in an easily consultable and readable form also contributes to both transparency and legitimacy.

Evaluation Ethics

59. Evaluators must have personal and professional integrity.

60. Evaluators must respect the right of institutions and individuals to provide information in confidence and ensure that sensitive data cannot be traced to its source. Evaluators must take care that those involved in evaluations have a chance to examine the statements attributed to them.

61. Evaluators must be sensitive to the beliefs, manners and customs of the social and cultural environments in which they work.

62. In light of the United Nations Universal Declaration of Human Rights, evaluators must be sensitive to, and address, issues of discrimination and gender inequality.

63. Evaluations sometimes uncover evidence of wrongdoing. Such cases must be reported discreetly to the appropriate investigative body. Evaluators are not expected to evaluate the personal performance of individuals and must balance an evaluation of management functions with due consideration for this principle.

Follow-up to Evaluation

64. There should be systematic follow-up on the implementation of the evaluation recommendations that have been accepted by management and/or the Governing Bodies.

65. Periodic reports on the status of the implementation of evaluation recommendations will be made to the Director General and the Audit Committee.
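The systematic follow-up of accepted recommendations can be pictured as a simple tracking record. The sketch below is a hypothetical illustration only; the field names and status values are invented and do not describe any actual WIPO or IAOD system.

```python
# Minimal, hypothetical sketch of a recommendation follow-up record,
# illustrating the kind of tracking and periodic reporting described above.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    text: str
    accepted: bool
    status: str = "open"  # illustrative lifecycle: open -> in_progress -> implemented
    actions: list = field(default_factory=list)

    def record_action(self, note: str, done: bool = False) -> None:
        """Log a follow-up action and advance the status accordingly."""
        self.actions.append(note)
        self.status = "implemented" if done else "in_progress"

def status_report(recs) -> str:
    """One-line summary suitable for a periodic status report."""
    accepted = [r for r in recs if r.accepted]
    done = sum(1 for r in accepted if r.status == "implemented")
    return f"{done}/{len(accepted)} accepted recommendations implemented"

recs = [Recommendation("Strengthen baseline data collection", accepted=True)]
recs[0].record_action("Baseline templates issued to programs", done=True)
print(status_report(recs))
```

Only recommendations that were accepted by management are counted, mirroring the principle that follow-up applies to accepted recommendations.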



Contribution to Knowledge Building

66. Evaluation contributes to knowledge building and organizational improvement. Evaluations should be conducted, and evaluation findings and recommendations presented, in a manner that is easily understood by target audiences.

67. Evaluation findings and lessons drawn from evaluations should be accessible to target audiences in a user-friendly way. A repository of evaluations could be used to distil lessons that contribute to peer learning and to the development of structured briefing material for the training of staff. This should be done in a way that facilitates the sharing of learning among stakeholders, including the organizations of the UN system, through a clear dissemination policy and contributions to knowledge networks.

4. THE PROGRAM/PROJECT CYCLE AND EVALUATIONS

68. The process of planning and managing projects and programs can be drawn as a cycle where each phase of the program/project leads to the next, as shown in Figure 4. Monitoring and evaluation are essential tools used as part of the program cycle. The program cycle in WIPO is two years, and its programs are intended to contribute to a six-year strategic plan.

69. The program/project cycle in WIPO starts with needs identification, which develops into a Program and Budget Document (PBD). The PBD defines WIPO’s results-based framework, which includes a description of each program, its objective, expected results and key performance indicators. This is the framework used by the Organization to report on its performance. The Program Performance Report is a self-assessment report used to inform WIPO’s stakeholders of the progress achieved against the agreed PBD. The program cycle within WIPO provides a structure to ensure that stakeholders are consulted during the identification and design stage and that relevant information is available to them at the different stages of the cycle so that they can make informed decisions.

Figure 4: Elements of the program/project cycle

10 Julia Flores


70. In the project cycle, and especially in activities with a low level of complexity, evaluations are sometimes undertaken only at the end of the project cycle. However, this practice is only possible for small projects with a duration of less than one year. In many organizations this is mostly not the case: projects tend to last more than two years, and programs last even longer. Experience has demonstrated that having an evaluation at the end of the cycle only contributes to the role of accountability rather than learning, whereas the purpose of evaluation is to balance accountability and learning. This is why evaluations should be undertaken at the different stages of the project/program cycle, to identify what is working well and areas in need of improvement. Depending on their timing during the program cycle, evaluations can take the form of ex-ante, mid-term and ex-post evaluations (see Figure 4).

71. Phase 1 - Identification in WIPO involves assessing the needs of stakeholders to identify the activities or projects the various programs will focus on. During this phase, consultation takes place with various stakeholders, and recommendations provided, for instance, by the Coordination Committee, the CDIP, the Program and Budget Committee or the Assembly of Member States are taken into consideration by the various programs. Program managers agree with Member States on the activities or projects to be carried out for a period of two years - the duration of the program cycle in WIPO. During the identification phase, program managers are also required to identify the problems that could be addressed by a project, as well as the risks to which the activity or project could be exposed.

72. Needs assessment is the most crucial part of the cycle; projects and programs should come out of what stakeholders say they want and not from assumptions that project/program staff make. During the assessment, different stakeholders should be consulted in order to understand how problems affect them differently.

Table 2: Stakeholder Analysis

STAKEHOLDERS

Definition

Stakeholders are:
- People affected by the impact of an activity (activity, project, program, strategy, policy, etc.);
- People who can influence the impact of an activity.

Who can be a stakeholder in WIPO

Stakeholders can be:
- WIPO’s Member States: WIPO's strategic direction, budget and activities are determined by its Member States, who meet in the Assemblies, Committees and other decision-making bodies.
- The end-beneficiaries of WIPO-supported activities, for whom the success or failure of WIPO’s activities has the most direct and long-lasting implications.
- Stakeholders whose performance in managing WIPO-assisted operations and carrying out WIPO policies is evaluated by the IAOD Evaluation Section, namely:
  i) WIPO operational sectors, divisions, sections and units, grouped under the various programs, and WIPO Senior Management concerned with corporate-level policies and strategies;
  ii) cooperating partners;
  iii) non-governmental organizations (NGOs), civil society organizations, and other organizations engaged in WIPO-assisted support.
- User groups - people who use the resources or services in an area.
- Interest groups - people who have an interest in, an opinion about, or who can affect the use of, a resource or service.
- Co-financiers that supplement WIPO’s resources in particular projects.
- Those often excluded from the decision-making process.

Types of stakeholders

Stakeholders can be divided into two main types:
- Primary stakeholders, who benefit from, or are affected by, an activity, i.e. the end users.
- Secondary stakeholders, who include all other people and institutions with an interest in the resources or area being considered. They are the means by which activity objectives can be met, rather than an end in themselves.

Why it is necessary to identify stakeholders

If stakeholders are not identified at the project/program planning stage, the activity is at risk of failure, because it cannot take into account the needs and aims of those who will come into contact with it.

Source: Blackman R. (2003), adapted by J. Flores (2010)

73. Phase 2 - Design involves researching in depth what the problems are and, more importantly, how they should be addressed and what the risks are, and identifying the resources that will need to be put in place. During this phase, program managers at WIPO prepare their individual program and budget documents, which undergo a review process before being presented to the Program and Budget Committee and the General Assembly.

Research: prior to deciding on an activity to tackle a specific problem, the project or program should ensure that its work is based on accurate, reliable and sufficient information. Research enables the Organization to establish the facts about the need, which helps program staff to know how best to address it. Thorough research should look at social, technical, economic, environmental and political factors.

Problem analysis: before designing the project/program, the problem will need to be analyzed in order to identify the causes and effects of the problems stakeholders face.

Logical models: once stakeholder needs have been identified, research has been undertaken and the problem analyzed, the project/program staff can start to plan exactly how the activity will function. This can be done by using a management tool such as the logical or results-based framework. A good logical model framework should clearly define the results chain, SMART indicators, baselines, targets, risks, assumptions and the means of verification. See Chapter 4, Section 1.5 for further information on logical models.

Program/project document proposal: the proposal usually includes the needs assessment, the stakeholder analysis, the research, the risk analysis, the logic or results framework matrix and an action plan including the required budget.
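A logical-framework entry can be thought of as a small structured record holding the elements listed above (expected result, SMART indicator with baseline and target, means of verification, risks and assumptions). The sketch below is purely illustrative: the field names and content are invented, not a representation of an actual WIPO results framework, which is defined in the Program and Budget document.

```python
# Hypothetical logical-framework (results framework) entry, illustrating the
# design elements named in the text. All content is invented for illustration.

logframe_entry = {
    "expected_result": "Enhanced access to IP information by SMEs",
    "indicator": {
        # A SMART indicator: Specific, Measurable, Achievable, Relevant, Time-bound
        "name": "SMEs using the IP information portal per year",
        "baseline": 1200,
        "target": 2000,
    },
    "means_of_verification": "Portal usage statistics; annual user survey",
    "risks_and_assumptions": [
        "Assumes stable funding for the portal",
        "Risk: low awareness among target SMEs",
    ],
}

# A quick completeness check an evaluator might run on a design document:
required = {"expected_result", "indicator", "means_of_verification", "risks_and_assumptions"}
missing = required - logframe_entry.keys()
print("complete" if not missing else f"missing fields: {sorted(missing)}")
```

Making each element an explicit field is what allows designs to be checked for evaluability before implementation begins, as the principle of enhancing evaluability suggests.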

74. Phase 3 - Implementation and Monitoring in WIPO starts immediately after approval of the biennial Program and Budget document and involves undertaking the activities planned. During the implementation phase, monitoring is encouraged by the Organization. Program managers and their staff are responsible for monitoring and implementation activities.

75. Monitoring in WIPO is a continuous function that aims primarily to provide management and its main stakeholders with early indications of progress, or lack thereof, in the achievement of results, so that work plans can be adjusted as necessary. At the program level, monitoring is done on an annual basis and is reported to stakeholders through the Program Performance Report (PPR). Monitoring of activities and projects is done as required and is the basis for reporting at the program level.



76. At times there is confusion between monitoring and evaluation, which are used as interchangeable concepts but are actually different. These differences have been highlighted in Chapter 2, Section 1, Table 1 of the Guidelines.

77. Evaluations: two types of evaluation have been identified in WIPO. The major difference between the two is that independent evaluations are carried out by IAOD, with the support of external evaluation experts, following internationally recognized independence criteria, while self-evaluations are carried out by the programs themselves. Further information on self-evaluations and independent evaluations can be found in Chapter 3 of the Guidelines.

78. Traditionally, evaluations were planned only at the end of an activity and were seen as an accountability tool to report to constituencies. As indicated in Chapter 1, evaluation has evolved over the last 30 years and so has its role. Today, the main purpose of evaluation is to balance the two roles of accountability and learning. Evaluations are therefore no longer undertaken as a one-off exercise at the end of a cycle, but may be undertaken at any phase of the cycle. Depending on the phase of the project or program, evaluations can take the form of ex-ante, mid-term and ex-post.

79. Knowledge Sharing and Learning: ideally, program implementers share knowledge and learn lessons during the whole program/project cycle, but it is generally during evaluation exercises that lessons can be drawn most easily from several years’ experience of implementing an activity. The lessons learned can then inform or feed into new or existing activities.

5. INDEPENDENT EVALUATION IN WIPO VS RESULTS BASED MANAGEMENT SYSTEM

80. Evaluation and results-based management (RBM) are interlinked. WIPO’s evaluation policy raises the expectation that “evaluation is an essential contribution to managing-for-results” and contributes to the decision-making process.

81. The UN Evaluation Group describes, on page 5 of its position paper “Distinctiveness of Evaluation”, the similarities and differences between RBM and evaluation:

There is an increasingly widespread understanding that effective ways of managing-for-results require both RBM-oriented measurement of results and the in-depth evaluative information generated by evaluations. The use of evaluative techniques such as surveys, interviews and analysis of data generated by the program is often a pre-condition for ensuring that RBM is used as a management tool: as pointed out in recent studies, a true transition to managing-for-results is achieved when RBM is used as a management tool instead of for reporting alone. Evaluation is also a necessary adjunct to ensure the learning dimensions of RBM are fully explored. Organizations are increasingly held accountable for organizational learning in addition to the achievement of results. This trend underscores the need for organizations to learn from both their failures and successes. This suggests that evaluation is an important tool in promoting sound RBM systems which promote managing-for-results, help institutionalize a culture of reflection and learning, and contribute to accountability.

82. The similarities and differences between RBM and evaluation are captured in Table 3.



Table 3: RBM and Evaluation – Distinctiveness and complementarities

Both RBM and evaluation contribute to effective ways of managing-for-results.

Purpose (WHY)

RBM: Ensures that processes, products and services contribute to improving performance, with the latter as the central orientation. Strong focus on accountability.

Evaluation: Serves the learning and accountability purpose through the provision of reliable and credible evaluative evidence, analyses and information to Member States, the General Assembly and national stakeholders, and supports reflection and learning on the relevance, effectiveness, efficiency, impact and sustainability of UN activities, so as to be able to improve them. Balances accountability and learning to improve the relevance and quality of future actions.

Function (WHAT)

RBM: Intended as a management tool but often treated as a reporting tool.

Evaluation: A pre-condition for ensuring that RBM is applied as a management tool, through the use of evaluative techniques such as surveys, interviews and data analysis. Valuable for promoting a culture of reflection and learning; evaluation is a necessary adjunct to RBM for learning.

Use (HOW)

RBM: Performance measurement tends to use balanced scorecards or benchmarks, which only capture data on “whether” results were achieved or not. Provides insights into the performance of specific programs, strategies, policies or cross-cutting themes, with a focus on the criteria of effectiveness and efficiency. Use of RBM as a management tool requires performance data gathered from results-based monitoring as well as in-depth evaluative information.

Evaluation: Often the sole provider of information on “why” and “how” results were achieved. It also provides answers to the questions: So what? Are we doing the right things? Are we doing them right? Provides insights into the performance of specific programs, strategies, policies or cross-cutting themes, including on criteria other than effectiveness and efficiency, such as relevance, sustainability and impact. Both internal and external evaluations provide in-depth evaluative information.

Source: Achim Engelhardt (2010). Adapted by J. Flores (2010)

83. According to the ILO’s recent RBM guidebook8 “the evaluation process provides a distinct, essential and complementary function to performance measurement and RBM. The evaluation function provides performance information not readily available from performance monitoring systems, in particular in-depth consideration of attribution, relevance, effectiveness and sustainability. Evaluation also brings to the performance system elements of independence of judgment. It addresses why results were or were not achieved and provides recommendations for appropriate management action. For these reasons evaluation is an essential component of RBM”.

8 ILO (2008). Results based management in the ILO: A guidebook. Geneva. Switzerland



6. KEY ROLES AND RESPONSIBILITIES

84. According to WIPO’s Evaluation Policy, evaluation will be an integral part of WIPO’s organizational culture. There will be a firm commitment at all levels of the Organization to ensure that evaluations are effectively planned, conducted and implemented.

85. Evaluations involve a range of stakeholders. The key roles and responsibilities within WIPO are specified in Table 4 below.

Table 4: WIPO key roles and responsibilities in the evaluation process

ROLES AND RESPONSIBILITIES

WIPO MEMBER STATES

WIPO Member States set the enabling environment for independent evaluation through the approval of the WIPO Evaluation Policy. They exercise an oversight function over evaluation in that they:

i) provide strategic guidance to the evaluation function through the General Assembly, with documented minutes and decisions, as appropriate;
ii) review the work plan and budget as set out in WIPO’s Program and Budget Document.

WIPO Member States are responsible for:

i) discussing selected evaluation reports, including annual and biennial synthesis reports, and taking decisions that guide management in its follow-up actions to the evaluation recommendations;
ii) holding management responsible for corporate, timely and substantive management responses, and for follow-up to evaluation recommendations, including changes to policies and practices warranted by evaluation reports and lessons learned; and
iii) using evaluation findings and recommendations in their decision-making.

DIRECTOR GENERAL

The Director General is responsible for safeguarding the independence of the Evaluation Section by:

i) ensuring compliance with the Evaluation Policy, in particular that the structural and institutional parameters of independence are met;
ii) allocating adequate resources – human and financial – to ensure that the evaluation function can be carried out professionally, with integrity and in line with the Evaluation Policy;
iii) fostering a corporate culture of accountability and learning as an enabling environment for independent evaluation and for the embedding of evaluation principles into management and decision-making at WIPO; and
iv) institutionalizing a mechanism to ensure that corporate, substantive management responses to evaluation recommendations are prepared and submitted at the same time as the evaluation report is discussed by the General Assembly, that follow-up actions are implemented, and that progress on their implementation is reported annually to the General Assembly.

PROJECT AND PROGRAM MANAGERS

Specifically, program and project managers will ensure that: i) WIPO activities are part of a results framework by recording baseline

information at the outset, defining performance indicators and setting targets for expected results;

ii) Implementation of activities are monitored, assessed and reviewed regularly on their performance. A biennial self-evaluation report should be mandatory for all operations.

iii) the WIPO Evaluation Policy, WIPO evaluation procedures, methodologies, principles and guidelines are adhered to and applied;

Page 25: WIPO Independant Evaluation Guidelines · WIPO INDEPENDENT EVALUATION GUIDELINES 5 CHAPTER 1 ... HOW EVALUATION WORKS IN THE UN 15. As indicated in the “UNEG Institutional Arrangements

WIPO Independent Evaluation Guidelines  24

ROLES RESPONSIBILITIES

iv) they are supporting independent evaluations by engaging in consultations, sharing and providing free access to the Evaluation Section staff and externally contracted evaluation consultants to all information on activities that is necessary to conduct evaluations in a comprehensive, objective and impartial manner and that they can conduct interviews as deemed necessary on activities, and facilitating the evaluation process including and participating in meetings with evaluators and giving feedback on evaluation products;

v) that independent evaluation is kept high on the agenda and to support independent evaluation throughout WIPO;

vi) data is reliable and consistent when measuring performance and reporting; vii) incentives for staff to prioritize evaluation are strengthened, including, for

example, recognition in staff performance management systems and through the management chain;

viii) examples of best practice are collected, showcased and, where feasible, rewarded;

ix) the evaluability of WIPO programs and projects is enhanced and that programs and projects are systematically evaluated;

x) adequate monitoring and evaluation capacity exists among their staff;

xi) self-evaluations are conducted according to specific procedures and in compliance with WIPO evaluation policies and guidelines;

xii) evaluation results are appropriately shared and effectively used within the Organization;

xiii) evaluation results are integrated into wider lesson-learning systems in WIPO and among their stakeholders;

xiv) information and knowledge management is improved so that evidence gathered from evaluations and other sources feeds into policy and programming;

xv) there is a strong management response to findings and recommendations, and that those recommendations which are accepted are followed up and reported on.

IAOD EVALUATION SECTION

The Director, IAOD will ensure that the IAOD Evaluation Section:

Manages the Section in an efficient and effective manner by:

i) developing an evaluation strategy;

ii) preparing a biennial Evaluation Plan;

iii) selecting evaluation topics that are relevant to WIPO’s developmental effectiveness;

iv) preparing ToRs for each independent evaluation, in full consultation with program and project managers, and submitting the ToRs to the Director, IAOD for approval;

v) preparing Annual Evaluation Reports;

vi) managing the Evaluation Section budget in an efficient manner;

vii) following up evaluations, with follow-up tracked by the Evaluation Section and reported in its Annual Report.

Contributes to the professionalism of evaluation by:

i) having evaluations conducted by staff who have a relevant educational background, qualifications and training in evaluation, as well as professional work experience;

ii) managing the work of external evaluation consultants;

iii) acting as the WIPO focal point for evaluation, exchanging information and cooperating with other UN entities and other organizations as deemed appropriate.


Contributes to the enhancement of the evaluation culture by:

i) developing, updating and publishing, on a regular basis, evaluation strategies, procedures, methodologies and guidelines applicable to the whole Organization, in line with developments and good practice both within and outside the UN System;

ii) initiating, planning and implementing evaluation awareness-raising and capacity development activities internally at WIPO and, when requested, assisting IP institutions in Member States to enhance their evaluation capacities.

Conducts evaluation work in an adequate manner by:

i) designing, conducting and managing independent evaluations in accordance with the Evaluation Plan;

ii) reviewing and evaluating the adequacy of organizational structures, systems and processes to ensure that the results are consistent with the objectives established;

iii) assessing and evaluating the effectiveness, efficiency, relevance, sustainability, coordination, coherence, coverage and impact of WIPO’s activities, and recommending better ways of achieving results, taking into account good practices and lessons learned;

iv) assessing whether WIPO activities are producing the expected results, through commissioning, carrying out and publishing independent evaluations;

v) ensuring the quality and timeliness of the evaluations produced and published by the Section;

vi) recommending actions aimed at improving WIPO’s development effectiveness and impact based on evaluation findings; this may include commissioning periodic evaluations of the overall effectiveness of WIPO’s work, or of a substantial part of it, drawing on the results of more specific evaluations;

vii) protecting the independence of the Evaluation Section evaluators and of the evaluation consultants contracted by the Section;

viii) participating in key WIPO decision-making committees, such as those reviewing new policies and activities, to help ensure that evaluation results and recommendations are adequately considered in WIPO’s major decision-making processes;

ix) validating evaluation findings, and discussing conclusions and recommendations with the concerned program and project managers and, as appropriate, with stakeholders involved in the evaluation exercise, to ensure fair, factual and useful reports prior to their finalization; for independent evaluations, final judgment on disputed wording will be made by the IAOD Evaluation Section;

x) providing all evaluators with full access to the existing information and data they may require for their work;

xi) following the evaluation principles identified as part of the Policy and the Guidelines;

xii) developing and maintaining a roster of external independent evaluators suitable for the independent evaluation of WIPO’s activities;

xiii) validating the data and evidence used to report against the Program Performance Report;

xiv) reporting to the Director General any case of serious misconduct or other wrongdoing which emerges from evaluations.

Contributes to the Organization’s learning by:

i) promoting and supporting best practice in development evaluation and developing appropriate and user-friendly mechanisms for the collection, publication and dissemination of lessons learned;

ii) creating fora with internal and external stakeholders to share and discuss findings, and feeding these into future decision-making;

iii) developing and maintaining a public WIPO website dealing with evaluation.

Source: WIPO Evaluation Policy (2007)


CHAPTER 3

Independent Evaluations

IAOD Evaluation Section


CHAPTER 3: INDEPENDENT EVALUATIONS

86. The Evaluation Section undertakes various types of evaluations for different uses, depending on the purpose of an evaluation. This section provides simple definitions of the types of evaluation and their general use. These Guidelines focus only on the undertaking of independent evaluations and briefly describe the difference between the two types of evaluations undertaken by WIPO (independent evaluations and self-evaluations). More details on self-evaluations can be found in the “WIPO Self-Evaluation Guidelines” developed in 2009 by the IAOD Evaluation Section.

1. TYPES OF EVALUATIONS WITHIN WIPO

87. There are two types of evaluations that have been defined within the Organization. The major difference between the two is that while independent evaluations are carried out by IAOD following internationally recognized independence criteria, self-evaluations are carried out by the programs themselves.

1.1. Independent Evaluations

88. Independent evaluations in WIPO are those designed, conducted and managed by the Evaluation Section in accordance with international independence criteria (see Box 1) and the UNEG evaluation principles, where possible in collaboration with development partners and, when necessary, with the support of external evaluators. Independent evaluations in WIPO have the following characteristics; they provide:

i) Governance arrangements that ensure independence, quality and transparency;

ii) A systematic approach, following international evaluation principles and criteria;

iii) Usefulness for policy and decision making and for public accountability;

iv) Research beyond the immediate objectives of the activity to ask why and how it works, including investigating the theory and assumptions behind the intended effects and checking for unintended effects;

v) Publication;

vi) Dissemination and stakeholder discussion for learning and wider decision making.

89. The Evaluation Section is part of WIPO’s oversight function and is independent from other WIPO management functions, to ensure impartial reporting. The Evaluation Section reports to the Director, IAOD, who in turn reports directly to the Director General. The Director, IAOD also interacts directly with the Audit Committee, the Program and Budget Committee and the Assembly of Member States.

90. The WIPO Evaluation Section is committed to safeguarding the independence of evaluation in order to reduce bias. Independence is fundamental to ensuring the impartiality of evaluation throughout the selection, conduct and reporting of evaluations, and thereby contributes to the credibility, quality and use of evaluation.

91. To attain this objective, the independence of evaluation is secured by adhering to internationally accepted independence criteria:


BOX 1: INTERNATIONAL INDEPENDENCE CRITERIA

Organizational Independence: The Evaluation Section is part of IAOD and performs its function independently from other WIPO management functions to ensure impartial and independent reporting. The Director, IAOD reports directly to the Director General, the Audit Committee, the Program and Budget Committee and the General Assembly. The Evaluation Section has full discretion in establishing the evaluation work plan, including the selection of subjects for evaluation, in line with the Evaluation Policy; it has full authority over the management of human and financial resources for evaluation; and it is independent in supervising and reporting on evaluations. The areas in which risks to the independence of evaluation exist are:

i) The planning process, where influence can bias the selection of subjects of evaluation, preventing evaluation from analyzing poor performance or directing it to highlight success stories;

ii) Funding of evaluations, which can be used to influence whether evaluations are carried out, how they are conducted and how they report their findings; and

iii) Reporting of evaluations, which if not independent, can lead to censorship of evaluation findings, conclusions and recommendations.

To prevent these risks from materializing, the Evaluation Section institutionalized the independence of evaluation in the following ways:

i) Independence in planning of evaluation. The Evaluation Section chooses subjects for evaluations in line with the established evaluation criteria and principles. The Evaluation Section prepares its work plan based on professional judgment, while consulting with stakeholders to ensure the use of evaluations.

ii) Independence of funding. The funding for IAOD is approved by the General Assembly, as part of WIPO’s Program and Budget Document, and is managed by the Director, IAOD.

Behavioral Independence: Evaluators (The Evaluation Section staff and externally contracted evaluation consultants) have to exercise personal integrity and behavioral independence. The evaluation reports are based on evidence and stakeholders are consulted at the various stages of the evaluation process. Behavioral independence shall not result in repercussions for staff in their career advancement or otherwise: managing or conducting evaluations that might lead to critical conclusions shall not be considered negatively in the performance assessment of staff or affect their prospects for promotion.

Protection from outside interference: The Evaluation Section is responsible for designing and executing evaluations, and the evaluators’ results will not be subject to overruling or influence by any external authority. All evaluation reports are posted on the WIPO website and are accessible to the public. The Evaluation Section staff and externally contracted evaluation consultants will be protected against undue influence, to enable them to express their opinions in an objective and impartial manner.

Avoidance of conflict of interest: IAOD will ensure that the evaluators undertaking an evaluation have no official, professional, personal or financial relationship with any WIPO program; have no current or previous involvement, at a decision-making level or in a financial management or accounting role, with the development-oriented activities (including technical assistance) or the entity being evaluated; and are not seeking employment with the Organization.

Source: Evaluation Cooperation Group (Adapted by Evaluation Section 2010)

1.2. Self-evaluations

92. Self-evaluations in WIPO are those primarily:

carried out by program managers and implementers themselves;

carried out by program managers and implementers with the support of external evaluators; or

financed by the programs themselves and undertaken solely by external experts.


93. Self-evaluation processes are used to measure the achievement of results of program activities. Self-evaluation in WIPO is also represented by the Program Performance Report (PPR), which is undertaken on an annual basis by the program managers themselves. The PPR is a critically important tool for ensuring the accountability and transparency of WIPO’s work and performance. The performance indicators and expected results are validated each biennium by the IAOD Evaluation Section.

94. Self-evaluations are also used for improving the implementation of activities. Since the Organization operates on the basis of biennial programs and budgets, it has been recommended that self-evaluations take place at the middle of the biennium for activities and projects with a clear timeline and deliverables. In the case of the Development Agenda projects, it is recommended that they be self-evaluated systematically at completion. It is not feasible to apply the international independence criteria to self-evaluations, since they are carried out by those who are entrusted with the design and delivery of an activity, i.e. program or project staff. However, self-evaluations fulfill the very important role of enhancing learning and ownership among program staff. In 2009, the Evaluation Section developed the “Self-Evaluation Guidelines” to assist program staff with the undertaking of such exercises.

95. Generally, there are some trade-offs between self-evaluation and independent evaluation. While independent evaluations are done to enhance both learning and accountability, self-evaluations are done mainly for learning purposes. Self-evaluations involve a high degree of participation, especially by those who are entrusted with the design and delivery of the activity, and are not necessarily undertaken for decision-making purposes. Independent evaluations are carried out by entities and persons free of the control of those responsible for the design and implementation of the development activity, since their credibility depends in part on how independently they have been carried out. Independence implies that the international criteria for independence (organizational independence, behavioral independence, protection from outside interference and avoidance of conflict of interest) are applied to all independent evaluation exercises. It is characterized by full access to information and by full autonomy in carrying out investigations and reporting findings. Independent evaluations are mainly used as an oversight tool that contributes to decision-making, accountability and learning.

Figure 5: The trade-offs between self-evaluation and independent evaluation



2. REASONS FOR INDEPENDENT EVALUATIONS

96. There are several reasons for undertaking independent evaluations. Among the most important is that, by increasing independence in evaluation, “we can decrease certain types of bias (including) ...extreme conflicts of interest…” (Scriven, 1991).

97. Independent evaluations are undertaken when there is a need to increase the credibility and reliability of the evaluation exercise; this can be achieved through increased independence, provided a series of requirements are met. As indicated in the OECD/DAC Glossary of Key Terms in Evaluation and Results Based Management, an evaluation is independent when it is “carried out by entities and persons free of the control of those responsible for the design and implementation of the activity”. The Glossary also indicates that “independence implies freedom from political influence and organizational pressure. It is characterized by full access to information and by full autonomy in carrying out investigations and reporting findings”.

98. Overall, as highlighted by Picciotto (2008), “evaluation quality without independence does not assure credibility”. He also notes that “independent evaluation induces credibility, protects the learning process and induces program managers and stakeholders to focus on results”.

99. Traditionally, it was understood that independent evaluations were only those undertaken by external consultants, overlooking the fact that even external evaluators were dependent on funding from the program managers in charge of the activities being evaluated. A differentiation is therefore made in WIPO between self-evaluations and independent evaluations (see Chapter 3, Section 1).

3. TIMING OF EVALUATIONS

100. Evaluations can take place at three different points in the life of an activity: at the start, during the planning stage; during implementation; or at the end of the program or project.

3.1. Formative Evaluations

101. An evaluation intended to furnish information for guiding program improvement is called a formative evaluation (Scriven 1991). Its main purpose is to help form or shape the activity so that it performs better (Rossi, Lipsey and Freeman 2004:34). Formative evaluations are undertaken during the design and implementation stages of an activity, to gain a better understanding of what could be or is being achieved and to identify how the activity could be improved. Formative evaluations can help to assess the need for an activity, as well as its feasibility, evaluability, conceptualization, implementation and process of delivery. There are two types of formative evaluations:

a. Ex-ante Evaluation (At the design stage of an activity)

Ex-ante evaluation is a process that supports the identification and design of an activity, initiative, project, program, strategy, policy, unit, organization or sector. Its purpose is to gather information and carry out analyses that help to define objectives and baselines, and to ensure that these objectives can be met, that the instruments used are cost-effective and that reliable later independent evaluation will be possible.


b. Mid-term Evaluations (During implementation of an activity)

Mid-term evaluations are usually undertaken at around the mid-point of the implementation of an activity, project, program, strategy or policy. They measure and report on performance to date and indicate adjustments that may need to be made to ensure successful implementation. These adjustments may include modifying the results-based framework. Mid-term evaluations are also useful for keeping data collection to a minimum, prioritized on the information that informs decision-making and learning.

3.2. Summative Evaluations

102. Summative evaluations are carried out after implementation to assess the effects and impact of an activity, whether intended or unintended; they can also include cost-effectiveness and cost-benefit analyses of the activity.

c. Ex-post evaluation (At the end of an activity)

These are undertaken at the end of an activity. Ex-post evaluations, like the other two types of evaluation, focus on the international evaluation criteria of relevance, effectiveness, efficiency and sustainability, but they also focus on impact, coherence, coordination and coverage. Impact evaluations report on the development results achieved and focus on the intended and unintended, positive and negative outcomes and impacts.

4. INDEPENDENT EVALUATION PRODUCTS

103. To ensure that WIPO generates knowledge and learning based on evaluative evidence that is used for better delivery to WIPO stakeholders, the Evaluation Section supports the undertaking of different types of evaluations – country, thematic, strategic and/or program evaluations – and applies a realistic and utilization-focused evaluation approach to all its independent evaluations. These evaluations are strengthened through quality assurance mechanisms, and their results will be carefully followed up in order to extract knowledge and obtain a management response with agreed actions for improvement and learning.

104. The Evaluation Section seeks to achieve best practice in all its evaluation work by setting and following the principles and quality standards set by the OECD/DAC, the United Nations Evaluation Group (UNEG) and other international evaluation bodies and networks, and by using internationally agreed evaluation criteria.

Strategic evaluations

105. Strategic evaluations in WIPO analyze the Organization’s contribution to critical areas for greater effectiveness and impact in developing a balanced and accessible international IP system. Evaluations in WIPO are considered strategic when they provide knowledge on policy issues, programmatic approaches, cooperation modalities, etc.9 These evaluations may also assess the Organization’s contribution to the achievement of the strategic results for which the Organization is accountable. The WIPO MTSP, with the goals, outcomes, outputs and key performance indicators established in its results-based frameworks, constitutes the overall strategic and programmatic framework of the Organization at its different levels. Strategy and policy evaluations are independent high-level assessments looking at relevance, as well as at how to improve effectiveness, efficiency and impact.

9. WIPO (2009). WIPO 2010-2015 Evaluation Strategy. IAOD Evaluation Section, Geneva, Switzerland.


Figure 6: WIPO Medium-Term Strategic Goals 2010-2015
Source: WIPO 2009 (Adapted by Flores J. 2010)

Thematic evaluations

106. Within WIPO, thematic evaluations are designed to assess the effectiveness of its processes and approaches and to contribute to increasing the Organization’s knowledge on selected issues and subjects.

107. A thematic evaluation involves the “evaluation of a selection of activities, all of which address a specific development priority that cuts across countries, regions, and sectors” (OECD/DAC, 2002). Generally, themes are born out of policy statements and are often termed ‘crosscutting issues’. Themes can also be defined within a sector (e.g. within the infrastructure sector, capacity building or gender could be identified as themes). Independent thematic evaluations have proved to be useful instruments for generating specific knowledge and recommendations at the highest level of aggregation, i.e. the policy level.

108. Independent thematic evaluations address the short-term, medium-term and long-term results of a cluster of related WIPO development-oriented activities, including technical assistance, in a given strategic thematic area or outcome in a region or within a country. They include an assessment of the effectiveness, efficiency, sustainability and relevance of development-oriented activities, including technical assistance, against their own objectives, their combined contribution, and the contribution of external factors and actors. Thematic evaluations also examine unintended effects of the development-oriented activities. Where WIPO has a strategy in place for a country or a region, an assessment of that strategy is included in this type of evaluation. Their findings will be used for strategic policy and programmatic decisions at the regional level, as well as for other strategic decisions.

Country level evaluations

109. These evaluations provide an assessment of the performance and impact of WIPO-supported activities in countries with a large WIPO portfolio. In particular, independent country program evaluations are expected to provide information on the most essential aspects of an activity’s performance and to contribute to developing the strategic and operational orientation of WIPO’s future activities in individual countries.


110. Country level evaluations in WIPO assess the relevance, coordination and coherence of the WIPO assistance provided to one country and its national constituents (e.g. government institutions, the private sector, communities, etc.). This type of evaluation is expected to generate knowledge in order to improve future assistance to the country and other national country programs. Country level evaluations are also important in serving as a basis for bilateral negotiations.

Program evaluations

111. Program evaluations are in-depth evaluations of WIPO Programs as defined and described in the WIPO Program and Budget document.

112. Overall, independent program level evaluations assess the efficiency and effectiveness of an activity or set of activities in achieving the intended results. They also assess the relevance and sustainability of results as contributions to medium-term and longer-term results. An independent program evaluation can be invaluable for managing for results, and serves to reinforce the accountability and learning of program managers.

113. Additionally, independent program evaluations provide a basis for the evaluation of expected results and programs, and for distilling lessons from experience for learning and sharing knowledge. Ideally, independent program level evaluations should be planned at the design stage of the program.

114. Project-level evaluations (only for projects above Sfr 1 million)

115. This involves the evaluation of an activity designed to achieve specific objectives within specified resources and implementation schedules; the project may be part of a broader program. Within WIPO, the concept of a ‘project’ may not exist in many divisions or units. For example, the PCT Operations Division delivers services to its clients, and these services are not part of a predetermined, discrete, time-bound project with specific objectives. This may be the case for many divisions and units whose responsibility is to provide services whenever those services are requested by Member States. Planning and evaluating such activities may be challenging. However, there may be discrete activities, such as improving the infrastructure of an IP office in a particular country, for which a project approach could be useful.

116. Project evaluations are undertaken throughout the implementation cycle. The different types of project-level evaluations share the purpose of assessing implementation achievement, impact and sustainability, thus contributing to learning and ultimately to the improvement of project impact and performance.

Organizational Assessments

117. These are aimed at understanding and improving performance by looking at four key pillars: effectiveness, efficiency, financial sustainability and relevance. Organizational assessments can be used as a diagnostic tool for organizations implementing an internal change process, a strategic planning process, or both. Organizational assessment goes beyond measuring the results of an organization’s programs, products and services (Lusthaus C., Adrien M., Anderson C. and Carden F. 1999).


CHAPTER 4

IAOD Approach to Independent Evaluations

IAOD Evaluation Section


CHAPTER 4: APPROACH TO INDEPENDENT EVALUATIONS IN WIPO

118. There are a number of pre-conditions which, in principle, need to be met for an evaluation to be effective. In reality, however, these pre-conditions are not always met, owing to several constraints, including budget, time, data and politics.

119. This section summarizes the approach used by the Evaluation Section when undertaking independent evaluations and indicates some of the basic pre-conditions or requirements for making an evaluation more relevant and effective. This does not suggest that evaluations should not be undertaken if the pre-conditions do not exist: within the constraints that any organization or activity might face, evaluations will still need to be conducted for reasons of accountability and learning, as discussed throughout these Guidelines.

120. The Evaluation Section will apply a six-step process in all its evaluations (see Figure 7). The approach proposed as part of these Guidelines has been specially tailored for WIPO and has been developed as a combination of realistic and utilization-focused evaluation. It factors in the existing constraints and the need to strike the right balance between accountability and learning.

Figure 7: IAOD Evaluation Approach to Independent Evaluations

1. STEP ONE: PLANNING AND SCOPING THE EVALUATION

121. As part of the planning and scoping of the evaluation, the Evaluation Section identifies the stakeholders’ different expectations, their understanding of the evaluation, and its purpose. It is crucial at this stage to understand the stakeholders’ information needs and how they expect to use the information resulting from an independent evaluation.

122. The program or project framework to be evaluated, which was defined at the design stage of the program or project, is selected at this stage. The framework includes the results chain and objectives of the activity. It also provides an overview of the risks and assumptions under which the program or project was designed, as well as of the external context in which it was to be implemented. Having a well-defined purpose, evaluation criteria and theory model will facilitate the development of the evaluation ToRs.


Figure 8: Planning and scoping the evaluation

1.1. Purpose of Independent Evaluations

123. Defining the evaluation purpose is the first step in the process; the aim of independent evaluation is to promote accountability and lesson learning. These two purposes often appear to be in opposition, since participation and dialogue are necessary for learning, whereas the independent, objective and impartial criteria followed by independent evaluations are usually considered a precondition for accountability. Evaluation helps to account to all WIPO stakeholders for why and how results were achieved, and how resources were used. The Evaluation Section's approach to independent evaluations is to balance the twin purposes of learning and accountability by encouraging the wider participation of stakeholders wherever possible in the evaluation process, while maintaining strict impartiality in the identification of findings, conclusions and recommendations.

124. UN agencies agreed in the Norms for Evaluation in the UN System (2005) that the purpose of evaluation includes understanding why, and to what extent, intended and unintended (positive and negative) results are achieved, as well as their impact on stakeholders. Evaluation is an important source of evidence about the achievement of results and institutional performance. It is also an important contributor to building knowledge and to organizational learning.

1.2. Stakeholders' Expectations and Information Needs

125. Clarification of the evaluation purpose and scope in WIPO starts by defining the information needs and the stakeholders' expectations. WIPO stakeholders inform the Organization of their expectations through the WIPO Coordination Committee, the Program and Budget Committee, the Assemblies of the Member States, the Development Agenda Committee, etc. The expectations of WIPO executive managers, program managers and staff in general are gathered through a biennial consultation process prior to the preparation of the Evaluation Section's "Biennial Evaluation Plan". Stakeholders may also have expectations regarding the evaluation methods to be used. However, the selected evaluation methods will depend mostly on the complexity of the activity being evaluated, the existing constraints (budget, time, data and other constraints) and the


purpose of the evaluation. Throughout the consultation process the Evaluation Section ensures that the reasons for commissioning evaluations are understood by its stakeholders.

126. Asking stakeholders, especially primary users, about the purpose of an evaluation and its intended use is the most critical question the Evaluation Section asks, as most other aspects of the evaluation depend on how well this fundamental question is addressed. Only when the intended use is clarified for each user does the task of evaluation planning become focused and explicit, making the process more effective. Without clearly identified intended uses and users, undertaking an evaluation would merely be a waste of time and resources.

BOX 2: ASKING THE RIGHT QUESTIONS

“What do you think is the most important key to evaluation?” It is being serious, diligent and disciplined about asking the questions, over and over: “What are we really going to do with this?” “Why are we doing it?” “What purpose is it going to serve?” “How are we going to use this information?” This typically gets answered: “We are going to use the evaluation to improve the program” – without asking the more detailed questions: “What do we mean by improve the program?” “What aspects of the program are we trying to improve?” So a focus develops, driven by use.

Michael Quinn Patton, 2002

127. Identifying the primary users of the evaluation results is an important step. According to IDRC (2004), "an evaluation user is one who has the 'willingness', 'authority', and 'ability' to put learning from the evaluation process or evaluation findings to work in some way"10. The intended users are generally those in a position to use the findings to inform their decisions or actions. In WIPO, the intended users of an evaluation could be WIPO stakeholders such as the Member States, as well as the Director General, the Senior Management Team, Directors, etc. These intended users are defined at the outset of an evaluation and are involved in clarifying its intended use or purpose and identifying priority questions, to ensure that the evaluation specifically addresses their values and needs, hence making the evaluation relevant. Every two years, as part of its planning process and the preparation of ToRs for all its evaluations, the Evaluation Section undertakes a consultative process to define the purpose of future evaluations and identify the key questions, since greater involvement of primary users increases the use of the evaluation.

128. The primary intended users are different from intended audiences11: the latter have an interest in the evaluation but a more passive relationship with it than the primary intended users. For example, if IAOD undertakes an independent evaluation focusing on the program achievements of Global IP Infrastructure, the program and project managers and the ADG for

10. IDRC (2004). Paper 7 on "Evaluation Guidelines". Canada.
11. According to Scriven, an audience is a group, whether or not they are the client(s), who will or should see and may react to an evaluation (p. 62). Audiences are interested in knowing about the evaluation findings as they may be useful for their own programs in the future.


Global Infrastructure issues will be the primary users of the evaluation findings. Other divisions within WIPO, such as the Cooperation for Development Sector or the Development Agenda Coordination Division, might be interested in the evaluation topic, and the findings could be shared with them and disseminated; however, the use of the findings will not be their responsibility. They are therefore not primary users but intended audiences. What matters for the Evaluation Section is that the needs and expectations of the primary intended users are considered if the findings are to be incorporated or integrated into the future plans or projects of the primary users.

129. The Evaluation Section promotes evaluative thinking among the primary intended users of evaluations through ongoing, iterative discussion and feedback, which should at the same time contribute to the development of an evaluative culture within the Organization.

130. The Section defines the possible intended uses of the evaluation during this stage. Overall, evaluations can have various uses. There may be a need for an independent evaluation with a stronger focus on learning and less on accountability; in that case what may be required is a formative evaluation, which typically focuses on improving the planning and design of the activity. This is generally referred to as an ex-ante evaluation. Alternatively, the evaluation could focus on improving the program while it is being implemented and tend to be open-ended; this is generally referred to as a mid-term evaluation.

131. An end-of-program evaluation, also referred to as an ex-post evaluation, can serve accountability purposes, but also draws lessons from the activities that are useful for learning and dissemination, in order to improve similar programs or inform the continuation of a program.

132. The identification of the intended use or purpose of the evaluation is necessary in order to clarify and select an appropriate approach and method, and to make transparent what is expected of the evaluation.

1.3. Timing of the evaluation

133. Evaluation findings should be available when decisions are being made. For example, evaluations meant essentially for learning (formative evaluations) should ideally be undertaken while the project/program is being implemented, to give project/program staff the opportunity to use this learning to improve implementation. However, if management wants to know the results of a program before deciding whether or not to extend it, it is worth undertaking a summative evaluation (at the end of the project/program) to draw lessons on what has or has not worked, and hence make an informed decision about the continuation of a given program/project.

1.4. Selecting the Evaluation Criteria - DAC Criteria

134. The DAC criteria are designed to promote comprehensive evaluations; for this reason, the criteria are complementary. For example, an evaluation of effectiveness may show that objectives were met, but this does not necessarily mean that the objectives were appropriate for the entire target group, were met efficiently, are sustainable, or feed into impact. Similarly, an activity by one agency may achieve good coverage but not be coordinated with other activities. Using the DAC criteria in combination helps to ensure that an evaluation covers all areas of the activity. All the criteria applied here, and their definitions, can be found in the OECD/DAC (2002) Glossary of Key Terms in Evaluation and Results Based Management.


1.4.1. Relevance

135. Relevance is concerned with assessing whether the activity is in line with local needs and priorities, as well as with WIPO's mandate. Relevance is a question of usefulness or pertinence to the needs of those the program is geared to. The assessment of relevance informs decisions on whether the activity ought to continue or be terminated. Relevance can be measured at various levels: organizational, stakeholder and program. Relevance is also linked to the appropriateness of the activity.

Figure 9: Levels of relevance

BOX 3: SOME EXAMPLES OF RELEVANCE QUESTIONS

Does WIPO conduct stakeholders' needs assessments regularly?
Are WIPO's stakeholders (clients, Member States, end beneficiaries, etc.) satisfied with the services provided?
Does WIPO regularly review the environment/context in which its activities are being implemented to adapt its strategy accordingly?
Is WIPO adequately balancing stakeholders' demands?
To what extent do WIPO's activities reflect priority IP-related development issues at country level?
How do the various stakeholders perceive the objectives set by the activities?
According to the stakeholders, how relevant is WIPO's support for promoting development through IP-related support?

1.4.2. Effectiveness

136. Effectiveness measures the extent to which the activity's expected results or specific intermediate objectives have been achieved or are expected to be achieved. An activity is considered effective when its outputs (services or products) produce the desired objectives and expected results. Assessing effectiveness involves analyzing the extent to which stated activity objectives are met.


BOX 4: SOME EXAMPLES OF EFFECTIVENESS QUESTIONS

Effectiveness example: An objective statement might read: "Increase by 10 per cent the level of innovation transfer in 2 developing countries by the end of 2015". When assessing the effectiveness of the activity, the Evaluation Section might ask some of the questions below:
Did WIPO achieve its objectives through the activity being evaluated?
Did the WIPO activity achieve the desired objectives or not?
What were the factors that contributed to the achievement of the stated objectives?
Did WIPO involve primary stakeholders in the activity's design?
What were the external factors which affected the achievement of the objectives?
Did WIPO provide its services/support in a timely manner according to the perception of its primary stakeholders?

1.4.3. Efficiency

137. Efficiency measures how inputs (i.e. expertise, time, budget, etc.) are converted into results; it expresses the relationship between outputs (the services produced by an activity) and inputs (the resources put in place). An activity is considered efficient when it uses the least costly, but still appropriate, resources to achieve the desired outputs. In general, assessing efficiency requires comparing alternative approaches that can achieve the same outputs. As with effectiveness, efficiency may be easier to assess for less complex activities than for others.

BOX 5: SOME EXAMPLES OF EFFICIENCY QUESTIONS

Efficiency example: For the modernization of an IP office, or for PCT services, it may be possible to use the same measure of efficiency, whereas for an activity on improving legal frameworks it may not be possible to have a standardized measure across countries.
What are the costs of inputs relative to outputs?
Would it have been more efficient to provide services through local service providers?
Would it have been more efficient for WIPO to build its response on existing capacity in country or through international staff?
Are agencies working with existing partners more efficient than WIPO?
Are outputs produced at a reasonable cost without jeopardizing the quality of the services provided by WIPO?
Are objectives achieved at the least cost?
Is the activity implemented in the most efficient way compared to alternative ways?
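One common way to operationalize several of the questions above is a unit-cost comparison: dividing the cost of each delivery option by the outputs it produced. A minimal sketch, where the delivery options and all figures are hypothetical:

```python
# Hypothetical cost-per-output comparison of two ways of delivering the
# same output (e.g. number of IP office staff trained). All figures are
# illustrative, not WIPO data.
options = {
    "international consultants": {"cost": 120_000, "trained": 80},
    "local service providers": {"cost": 45_000, "trained": 60},
}

# Unit cost = inputs (budget) per unit of output (person trained).
unit_costs = {name: o["cost"] / o["trained"] for name, o in options.items()}
most_efficient = min(unit_costs, key=unit_costs.get)

for name, uc in unit_costs.items():
    print(f"{name}: {uc:.0f} per person trained")
print(f"lowest unit cost: {most_efficient}")
```

A lower unit cost answers "are objectives achieved at the least cost?" only if the quality of the output is comparable across options, which is why the criterion also asks about quality.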

1.4.5. Impact

138. Impact measures the effects of an activity; these effects or changes, which may be positive or negative, intended or unintended, concern the target groups of the activity. While effectiveness focuses on the attainment of an activity's expected results, impact is a broader consequence of an activity at the social, economic, political, technical or environmental level. Impact examines the longer-term consequences of achieving or not achieving those objectives, and the issue of wider socioeconomic change. For example, effectiveness would examine whether innovation levels in


country X had improved, while impact would analyze what would happen if innovation levels did or did not improve.

139. Given this wider scope, assessment of impact may not be relevant for all evaluations, particularly those carried out during or immediately after an activity. Changes in socioeconomic and political processes may take months, and in most cases years, to become apparent.

140. Although it is important to make a broad assessment of impact, doing so presents serious challenges. This is partly due to 'attribution' and the 'attribution gap'.

Figure 10: Impact vs. attribution

Attribution is the causal link between observed (or expected) changes and a specific activity12. Establishing it is difficult not only because of identifying the change and effect due to the activity, but also because of difficulties in 'boundary judgment', i.e. deciding which effects to select for consideration13, as effects can be numerous and varied.

Attribution gap: according to Herber and Steiner (2002), "the impact chain (utilization, effect, benefit/drawback, impact) needs time to develop, time during which the number of factors and actors as well as their interactions increases. This makes it more and more difficult to attribute a change to a single factor or program/project." This is called the "attribution gap". Even with costly investigations, a program/project can only narrow, but not close, this gap. Realistically, a program/project can only establish a contribution and show plausible relations between its actions and changes in the context.

141. Evaluation aims to demonstrate a credible link between WIPO's outputs and efforts, in partnership with others, and development changes and effects. Causal links are easier to establish when an activity has been isolated from other activities and external factors. Most frequently, however, activities are implemented in collaboration with other institutions

12. OECD/DAC (2002). Glossary of Key Terms in Evaluation and Results Based Management. Paris, France.
13. Ministry of Foreign Affairs of Denmark, Danida (2006). Evaluation Guidelines. Denmark.


and in complex environments, where several external factors have affected the end-beneficiary and influenced the results. Attributing a particular change to one activity may therefore be difficult and in some cases unethical: the less complex the whole system, the easier it is to attribute changes and effects; the more complex the system, the more difficult it is to attribute changes and effects to a single activity. In cases where the changes and effects of an activity are difficult to attribute, the evaluation will instead measure the contribution of the activity.


BOX 6: SOME EXAMPLES OF IMPACT QUESTIONS

Impact example: "Geographical indications are indications which identify goods as originating in the territory of a Member, or a region or locality in that territory, where a given quality, reputation or other characteristic of the good is essentially attributable to its geographical origin".

Article 22.1 of the TRIPS Agreement

When evaluating an effect of geographical indications (GIs), the Evaluation Section might ask the following questions:
In case there was a price premium from the protection and promotion of geographical indications, how has this changed the life of the producers?
Which groups benefited from the activity?
Which groups have been identified as the potential losers?
Are producers now better off by participating in a protected collective brand or not?
Have the socioeconomic conditions of the producers improved after the activity?
Have prices and incomes in rural areas risen as a consequence of GIs?
What was the activity's overall impact and how did this compare with what was expected?
Did the activity address the intended target group and what was the actual coverage?
Who were the direct and indirect/wider beneficiaries of the activity?
What difference has been made to the lives of those involved in the activity?

142. Several research methods can be used to measure the impact of an activity, among them:

Experimental model: this kind of laboratory model consists of creating two groups that are equivalent to each other. One group (the program or treatment group) gets the program and the other group (the comparison or control group) does not. In all other respects the groups are treated the same: they have similar people, live in similar contexts, have similar backgrounds, and so on. When differences are observed in the results between these two groups, the differences must be due to the only thing that differs between them – that one got the program/treatment and the other did not.

Figure 11: Experimental model Source: IFC 2009 (Adapted by Flores J. 2010)

Quasi-experimental model: looks similar to the experimental model, but there is no randomization. Comparisons are made between targets who participate in the program and non-participants who are presumed similar to participants in critical ways. These techniques are called quasi-experimental because, although they use "experimental" and "control" groups, they lack randomization14.

Figure 12: Quasi-Experimental model Source: IFC 2009 (Adapted by Flores J. 2010)

14. Rossi, P., Freeman, H. and Lipsey, M. (1999). Evaluation: A Systematic Approach, 6th Edition. Sage Publications. London, New Delhi.


Pre-post design (before and after): a design in which only one or a few before-activity and after-activity measures are taken. Changes identified between before and after cannot be attributed to the program.

Figure 13: Pre-post design Source: IFC 2009 (Adapted by Flores J. 2010)
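The difference between these designs can be shown with a small simulation. In the sketch below (entirely hypothetical data, not a WIPO example), a program effect of 5 points is overlaid with a secular trend of 3 points that would have happened anyway; the randomized comparison isolates the effect, while the pre-post comparison mixes it with the trend:

```python
import random
from statistics import mean

# Simulated illustration of why randomization matters for attribution.
random.seed(42)
true_effect = 5.0    # hypothetical program effect
secular_trend = 3.0  # change that happens regardless of the program

# Randomly generated, equivalent groups of 1,000 people each.
scores = [random.gauss(50, 5) for _ in range(2000)]
treatment_before, control_before = scores[:1000], scores[1000:]

# Both groups drift with the trend; only the treatment group gets the effect.
treatment_after = [s + secular_trend + true_effect for s in treatment_before]
control_after = [s + secular_trend for s in control_before]

# Experimental design: compare treatment vs. control after the program.
experimental_estimate = mean(treatment_after) - mean(control_after)
# Pre-post design: compare the treatment group before vs. after.
pre_post_estimate = mean(treatment_after) - mean(treatment_before)

print(f"experimental estimate: {experimental_estimate:.1f}")  # close to the true effect
print(f"pre-post estimate: {pre_post_estimate:.1f}")          # effect and trend mixed
```

The pre-post estimate equals effect plus trend, which is exactly why paragraph 142 notes that before/after changes cannot by themselves be attributed to the program.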

143. The debate on which approaches or methodologies (quantitative and/or qualitative) to use in assessing impact is ongoing. Many development agencies use a combination of counterfactual analysis (i.e. what would have happened if the activity had not happened) using control groups15, "before and after" techniques (e.g. memory recall), triangulation and other methods. The choice of one methodology over another has to be weighed against various factors, for instance the availability of reliable data, the intended use of the evaluation information, the availability of resources (time and budget) for the evaluation, etc. It has not been part of WIPO practice to define control groups, and baseline information has not existed for many of the programs. Consequently, impact evaluations can only be undertaken under several constraints, and the reconstruction of data can be very costly for the Organization. It would therefore be useful initially to undertake evaluations that look at the other evaluation criteria before impact.

BOX 7: ATTRIBUTION VERSUS CONTRIBUTION

EXAMPLE 1: For an immunization program where vaccination can protect those immunized against a disease, there is direct attribution to the activity; this could be analyzed through an experimental design. For a complex program, however, such as helping a government to improve a legal framework related to IP, the effects or changes in this particular area would be difficult to attribute to the activity implemented or the services provided by WIPO, since other factors need to be taken into consideration, such as economic, political, organizational, environmental, socioeconomic and cultural factors. In such a case it is only possible to talk about the contribution of WIPO towards improving the legal framework.

EXAMPLE 2: One of the expected results of the IP and Global Challenges Program is to enhance the capacity and understanding of Member States on innovation and technology management and transfer. Although the performance indicators and targets planned seem relevant (formulated at various levels: Member States, R&D institutions and other target groups)16, the support in this area may have involved various donors, other UN agencies, NGOs, etc., and there may also be external factors that influence the expected result. It would therefore be very difficult to attribute the results solely to WIPO.

15. This involves choosing a comparison group (not supported by the activity) that is identical to the target group supported by the organization.
16. One of the indicators is "increased number of R&D institutions, universities and other innovation system actors in Member States that have acquired and applied practical knowledge and skills in the area of IP asset development, management and transfer".


EXAMPLE 3: WIPO provides support in IP training to trainees/students/professionals with the intended outcome of upgrading and enhancing the knowledge and skills of staff of IP offices in developing countries; there is a linkage between the output and the outcome. However, it would be difficult to attribute the outcome solely to a particular activity or program supported by WIPO, as external factors (including other donors' support) may also have contributed to the results.

1.4.6. Sustainability

144. Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after funding has been withdrawn. It is also an assessment of whether the activity is likely to be used and maintained in the future. For example, once an IP office has been modernized, will the IP office or the Member State (specifically the department in charge) be able to maintain it and use it in the future, and to what extent? Sustainability assesses the long-term benefits of WIPO's support.

BOX 8: SOME EXAMPLES OF SUSTAINABILITY QUESTIONS

Is sustainability built into the design of the activity?
What is the likelihood that the services provided will continue to be used (e.g. IP infrastructure)?
What is the likelihood that the services will be maintained by Member States?
Does the activity deserve to be sustained?
Were other funds leveraged during the implementation of the activity?
What were the major factors which influenced the achievement or non-achievement of sustainability of the activity?

1.4.7. Coverage

145. Evaluation of coverage involves determining who was supported by the activity, and why; it determines why certain groups were or were not covered. Coverage is linked closely to effectiveness and often refers to the numbers or percentages of the population to be covered by the activity.

146. Evaluation of coverage can take place at three levels:

Figure 14: Coverage Levels

Source: ALNAP 2006 (Adapted by Flores J. 2010)


147. At the regional and local levels, evaluators assess the extent of inclusion bias, that is, the inclusion in the groups receiving support of those who should not have been included; as well as the extent of exclusion bias, that is, the exclusion of groups who should have been covered but were not (both disaggregated by sex, socioeconomic grouping and ethnicity).
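Inclusion and exclusion bias lend themselves to simple rates once support records can be matched against eligibility. A minimal sketch, with entirely hypothetical records and a single disaggregation (sex); the same calculation works for socioeconomic grouping or ethnicity:

```python
# Hypothetical records: (group, eligible_for_support, actually_supported).
records = [
    ("female", True, True), ("female", True, False), ("female", False, True),
    ("male", True, True), ("male", True, True), ("male", False, False),
]

def bias_rates(group):
    rows = [r for r in records if r[0] == group]
    eligible = [r for r in rows if r[1]]
    covered = [r for r in eligible if r[2]]
    # Exclusion bias: share of the eligible who were not reached.
    exclusion_bias = 1 - len(covered) / len(eligible)
    # Inclusion bias: share of records supported despite not being eligible.
    wrongly_included = [r for r in rows if not r[1] and r[2]]
    inclusion_bias = len(wrongly_included) / len(rows)
    return exclusion_bias, inclusion_bias

for group in ("female", "male"):
    excl, incl = bias_rates(group)
    print(f"{group}: exclusion bias {excl:.0%}, inclusion bias {incl:.0%}")
```

In practice the hard part is not the arithmetic but establishing who was eligible, which is why coverage assessment depends on a clear definition of the target group at the design stage.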

BOX 9: SOME EXAMPLES OF COVERAGE QUESTIONS

Coverage example: One of the strategies of a WIPO program on Traditional Knowledge, Traditional Cultural Expressions and Genetic Resources is to have initiatives aimed at enhancing the effective participation of representatives of indigenous and local communities in WIPO's work, such as the WIPO Voluntary Fund and the WIPO Indigenous IP Law Fellowship Program. In this instance, an assessment of the differential 'impact' or effect an activity could have on specific categories of local communities affected by this Program, such as women, indigenous people and others, would be necessary.
What were the main reasons that the activity provided or failed to provide major population groups with support proportionate to their need?
What are the differential effects of the WIPO program activity on the various groups (disaggregated by sex, socioeconomic grouping and ethnicity)?
To what extent does the activity take into account issues of equity (e.g. gender, disability)?
To what extent has the program enhanced the capacity of the various groups within the communities to participate effectively in WIPO's work?

1.4.8. Coherence

148. Coherence refers to the need to assess and ensure consistency within WIPO's activities and policies, as well as between WIPO's policies and national and international policies related to IP. As the assessment of coherence focuses on the policy level, it should look at the policies of the different actors and whether WIPO's policies are complementary or contradictory to those of other actors. This criterion is particularly useful when IAOD conducts strategy and policy evaluations. Evaluating coherence is of particular importance when a number of actors provide support, as they may have conflicting mandates and interests.

BOX 10: SOME EXAMPLES OF COHERENCE QUESTIONS

Why was coherence lacking or present?
What were the particular political factors that led to coherence or its lack?
Should there be coherence at all?
How do WIPO's policies complement those of the other actors who work in the same thematic/sector policy area?
To what extent is WIPO working coherently with country national strategy plans, rather than being reactive to Member States' requests in its service delivery (piecemeal approach versus more coherent approach)?


1.4.9. Coordination

149. Coordination is not a formal DAC criterion, but it is important to consider it in all WIPO evaluations. Coordination is "the systematic use of policy instruments to deliver support in a cohesive and effective manner. Such instruments include strategic planning, gathering data and managing information, mobilizing resources and ensuring accountability, orchestrating a functional division of labor, negotiating and maintaining a serviceable framework with host political authorities and providing leadership" (Minear et al., 1992).

150. Whereas coherence focuses on whether the policies of different actors are in line with each other, coordination focuses more on the practical effects of the actions of governments and agencies – for example, whether they join cluster groups, whether they discuss geographical targeting, and the extent to which information is shared.

151. The activity of a single institution cannot be evaluated in isolation from what others are doing, particularly as what may seem appropriate from the point of view of a single actor may not be appropriate from the point of view of the system as a whole. Evaluating coordination includes assessing both harmonization with other aid agencies and alignment with country priorities and systems.

BOX 11: SOME EXAMPLES OF CO-ORDINATION QUESTIONS

Coordination example: WIPO works together with other agencies on issues related to public health, especially those related to HIV/AIDS, malaria, tuberculosis and other diseases which continue to create major problems in many parts of the world. Agencies like WHO, UNAIDS and the World Bank, as well as many donor agencies, NGOs and the affected countries themselves, are investing a lot of effort and resources in promoting health care and innovation and in making innovative technology accessible, particularly where it is urgently needed. In order to assess the coordination of such an activity, the Evaluation Section might ask the following questions:
Does WIPO have plans for coordination in place, and are they followed?
Were there any incentives to coordinate, for example did donors promote UN coordination through funding arrangements? Or was there competition for funds?
Was a lead agency appointed, and what was the result of this?
Which parties did WIPO include, where were they included and in what manner? Why?
Did WIPO take the lead in IP issues and how effective was it? Why?
Were WIPO funds channeled in a coordinated fashion, or individually by each agency to suit its own strategic aims?
Which key agencies does WIPO coordinate with, and in which geographical/sector areas?
To what extent does coordination, or the lack of it, affect WIPO's service delivery and the achievement of its goals?

1.5. Developing a Program Theory Model

152. A well-designed and coherent program or project model constitutes a solid foundation for an evaluation, as it explains the rationale, the objectives, the expected results (impact, outcomes and outputs) and the related indicators and activities. Program theory is occasionally spelled out in a program or project document (e.g. the logical framework or results framework). In the absence of such a model, the evaluator, through consultations with the program staff and other stakeholders, will need to develop the program theory before the evaluation starts.

BOX 12: WHAT DESCRIBES THE PROGRAM THEORY MODEL?

- How the activity was designed
- The inputs needed for implementation (operational and non-operational resources)
- The implementation process
- The results chain (outputs, outcomes and impact)
- An explanation of how the program or project benefits will continue after completion (i.e. sustainability)
- An overview of the context in which the program or project was implemented

Examples of program theory models

Results-based framework

153. The program theory model used by WIPO's programs is based on a results-based framework approach. The framework is used to define the program objectives, the strategies to be implemented, the expected results and key performance indicators. Occasionally, program staff members also highlight in this document the context in which the program will be implemented and some of the constraints they might encounter.

154. Evaluation intends to assess the extent to which an activity's objectives are being achieved, the extent to which its strategy has proved to be effective, and whether it will effect change and have an impact. To make it possible to measure whether an expected result has occurred, an activity needs to set up a program logic model during the planning phase, as the objectives, results and indicators enable measurement of the performance level of the activity. It is also on the basis of the indicators that evaluators generally assess whether an expected result has occurred and whether the project's objectives are being achieved.

155. The absence of a good design from the outset, one which clearly specifies the project objectives, its expected results and key performance indicators, makes it more difficult for an evaluation to assess a program's performance or the achievement of its objectives, and imposes limitations on the evaluation. It is the responsibility of those who will evaluate the project to understand fully the activity logic and design, through consultations with those who have been managing the project, during the planning phase of the evaluation.

Logical framework

156. The Logical Framework Approach (LFA) is a methodology that logically relates the main elements of program and project design and helps ensure that the activity is likely to achieve measurable results. The methodology is mainly used in the design, monitoring and evaluation of international development activities. As part of this methodology, a logical framework matrix is developed.

157. In the logical framework matrix, a project/program has an internal logic linking the hierarchy of objectives, i.e. the cause-and-effect relationships among impact, outcomes, outputs, activities and inputs, as well as the related indicators, targets, milestones, baselines, assumptions and risks. The approach helps to identify the strategic elements (inputs, outputs, purposes and goal) of a program, their causal relationships, and the external factors that may influence the success or failure of the program.

158. "The logical framework, or Logframe as it is called, was developed in the 1970's as a method for programming and evaluating development programs and projects. Subsequently most international organizations, including the United Nations, have adopted it to help guide results-based management. The Logframe involves structured thinking -- starting with problems addressed, defining desirable end-states and the conditions that have to be met to obtain them and determining the outputs, activities and resources necessary to achieve them".17

Figure 15: Hierarchy of objectives for less complex activities. Source: IFAD (2002) – Managing for Impact in Rural Development: A Guide for Project M&E (adapted by Flores J.)

159. For reference, some of the key definitions used in a logical model have been compiled in a glossary, available in Annex 2 of the Guidelines. All definitions referring to the logical model found in the glossary were extracted and adapted from the Office of Internal Oversight Services (OIOS) of the UN, OECD/DAC (2002), IFAD (2002) and DFID (2005, 2009).
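The results chain at the heart of a logframe can be sketched as a small data structure. The sketch below is purely illustrative: the level names follow the hierarchy described above, but the example statements, indicators and assumptions are invented for demonstration and are not taken from any actual WIPO logframe.

```python
from dataclasses import dataclass, field

@dataclass
class LogframeLevel:
    """One level of the results chain (input, activity, output, outcome, impact)."""
    name: str
    statement: str
    indicators: list = field(default_factory=list)   # how achievement is measured
    assumptions: list = field(default_factory=list)  # external conditions that must hold

# Illustrative results chain, bottom-up: each level is expected to lead to the next.
chain = [
    LogframeLevel("input", "Trainers, curriculum, budget"),
    LogframeLevel("activity", "Deliver IP training workshops"),
    LogframeLevel("output", "300 officials trained",
                  indicators=["number of officials completing the course"]),
    LogframeLevel("outcome", "Trained officials apply IP procedures correctly",
                  indicators=["share of applications processed without error"],
                  assumptions=["trained staff remain in post"]),
    LogframeLevel("impact", "Stronger national IP system"),
]

# The internal logic of the matrix: each lower level, if achieved and if the
# assumptions hold, is expected to contribute to the level above it.
for lower, higher in zip(chain, chain[1:]):
    print(f"IF {lower.name}: {lower.statement} THEN {higher.name}: {higher.statement}")
```

Walking the chain pairwise makes the cause-and-effect claims of paragraph 157 explicit, which is exactly what an evaluator reconstructing a missing program theory has to do by hand.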

17 UN Office for Internal Oversight Services (OIOS). Course on Program Performance Assessment in Results based Management.


Figure 16: Results chain based on the example of the WIPO Academy. Source: WIPO Academy (adapted by Flores J., 2010)


1.6. Preparing the Terms of Reference (ToRs)

160. The ToRs present an overview of the requirements and expectations of an evaluation. ToRs explain why the evaluation is to be undertaken and for whom, what it plans to achieve, how the evaluation will be conducted, who will be involved, and when the milestones should be reached and the evaluation completed; i.e. they should include:

BOX 13: CHECKLIST FOR EVALUATION ToRs

- Background information
- Rationale for the evaluation
- Intended use and users
- The issues to be evaluated, and the questions which need to be answered
- Principles and approach
- Evaluation methodology
- The roles and responsibilities of all those involved in the evaluation process
- The reporting requirements and report outline
- An estimate of the cost of the evaluation
- Timeline and milestones
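A checklist like the one above lends itself to a mechanical completeness check on a draft ToR. The sketch below is a simple illustration, not an IAOD tool: the section names merely mirror Box 13, and the draft shown is hypothetical.

```python
# Section names paraphrasing the Box 13 checklist (illustrative labels only).
TOR_CHECKLIST = [
    "background", "rationale", "intended use and users",
    "issues and questions", "principles and approach", "methodology",
    "roles and responsibilities", "reporting requirements",
    "cost estimate", "timeline and milestones",
]

def missing_sections(draft_sections):
    """Return the checklist items not yet covered by a draft ToR."""
    present = {s.lower() for s in draft_sections}
    return [item for item in TOR_CHECKLIST if item not in present]

# Hypothetical half-finished draft: the gaps are flagged before circulation.
draft = ["background", "rationale", "methodology", "timeline and milestones"]
print(missing_sections(draft))
```

Running such a check before circulating draft ToRs for comments gives reviewers a draft that at least touches every required section.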

161. The Evaluation Section is responsible for drafting ToRs for all independent evaluations undertaken within WIPO. Every two years, in line with the preparation of its "Biennial Evaluation Plan", the Section undertakes a wide consultation process to identify evaluation needs, the purpose of the different exercises and the relevant questions which the intended users expect to have answered. The results of the consultations are taken into consideration during the preparation of the Evaluation Plan. On this basis, draft ToRs are prepared and circulated for comments by the Director General, the concerned program/project managers and other stakeholders if required.

2. STEP TWO: ASSESSING EVALUABILITY

162. Once the draft ToRs have been prepared, the Evaluation Section proceeds to assess evaluability and to identify any possible constraints, such as data, budget, time or political constraints, that could limit the evaluation exercise. The process of reviewing the activity logic prepares the Evaluation Section to deal with future risks and to adapt the ToRs as necessary prior to starting the evaluation.

Figure 17: Step 2 – Assessing the Evaluability


163. According to Bamberger M. (2006), there are four kinds of constraints that could be mentioned:

(a) Budget constraints: this is the case when funds for the evaluation were not included in the original program or project budget, and the evaluation must be conducted with a smaller budget than would normally be allocated. As a result, it might not be possible to collect the desired data or to reconstruct baseline or comparison group data. Lack of funds for evaluations may also create or exacerbate time constraints, because evaluators may not be able to spend as much time in the field as they consider necessary.

BOX 14: WHAT DOES IAOD DO IN CASES OF BUDGET CONSTRAINTS?

- Simplify the evaluation design
- Prioritize data needs with the stakeholders to eliminate the collection of non-essential data
- Make use of reliable secondary data
- Reduce the sample size of the analysis
- Reduce the cost of data collection, input and analysis
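One of the cost-saving measures listed above, reducing the sample size, has a quantifiable price in precision. The sketch below applies the standard 95 per cent margin-of-error formula for a survey proportion under simple random sampling; the sample sizes used are illustrative, not WIPO figures.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion
    under simple random sampling: z * sqrt(p * (1 - p) / n).
    p = 0.5 gives the most conservative (widest) margin."""
    return z * math.sqrt(p * (1 - p) / n)

# Quartering the sample doubles the margin of error.
for n in (400, 100):
    print(f"n={n}: margin of error ±{margin_of_error(n):.1%}")
```

With these inputs, cutting the sample from 400 to 100 respondents widens the margin from roughly ±5% to ±10%: a trade-off worth stating explicitly in the ToRs when the budget forces a smaller sample.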

(b) Time constraints: these arise when the evaluator is not called in until the project is already well advanced and the evaluation has to be conducted within a much shorter period of time than the evaluator considers necessary.

BOX 15: WHAT DOES IAOD DO IN CASES OF TIME CONSTRAINTS?

- Simplify the evaluation design
- Prioritize data needs with the stakeholders to eliminate the collection of non-essential data
- Make use of reliable secondary data
- Reduce the sample size of the analysis
- Make use of rapid data collection methods
- Use highly experienced external international consultants in the most efficient and effective manner
- Hire more people to reduce the time needed, for instance for data collection
- Build outcome indicators into program/project records
- Use modern data collection and analysis technology

(c) Data constraints: in WIPO, little experience has been accumulated through evaluations. Consequently, for most programs or projects there is little or no comparable baseline information available on the conditions of the target groups before the programs or projects started. Even where program or project records are available, they are often not organized in the form needed for comparative before-and-after analysis. Program records and other documentary data often suffer from reporting biases or poor record-keeping standards. Even when secondary data is available, it is sometimes incomplete; and even where monitoring has been done on a routine basis and data has been gathered, this might not always relate to the reality of the program or the initially agreed results framework.


BOX 16: WHAT DOES IAOD DO IN CASES OF DATA CONSTRAINTS?

- Reconstruct baseline data for program/project populations
- Address the special challenges of working with comparison groups:
  o Identifying and constructing comparison groups
  o Problems of nonequivalent comparison groups
- Collect data on sensitive topics or from groups who are difficult to reach

(d) Political influences: this does not refer only to pressures from government agencies and politicians, but also includes the requirements of funding agencies, pressures from stakeholders, and differences of opinion within an evaluation team regarding the evaluation approaches and methods. Evaluations are frequently conducted in contexts where political and ethical issues affect design and use. All programs affect some portion of the public, and most programs consume funds, always limited and often scarce. Decisions based on evaluation results may intensify competition for funding, expand or terminate programs, or advance the agenda of politically oriented groups.

BOX 17: WHAT DOES IAOD DO IN CASES OF POLITICAL CONSTRAINTS?

Addressing political constraints during the evaluation design:
  o Understanding the political environment
  o Conducting stakeholder analysis
  o Participatory planning and consultation

Addressing political constraints during the evaluation:
  o Ensuring access to information during the implementation of the evaluation
  o Providing feedback to allay suspicion and demonstrate the value of the evaluation

Addressing political constraints in the presentation and use of evaluation findings:
  o Ensuring that the findings are of direct practical utility to the different stakeholders

164. The Evaluation Section is aware of and experienced in managing such constraints. All evaluations undertaken by the Evaluation Section will therefore draw on a wide range of evaluation approaches and methods to address the various types of constraints. Furthermore, it should be noted that despite all the above-mentioned constraints, demand from policymakers and civil society for information regarding goal achievement and the impact of activities is increasing, even in WIPO. Unfortunately, enthusiasm for information about impact is not necessarily matched by adequate evaluation resources. Consequently, evaluators are frequently asked to produce methodologically robust impact evaluations under circumstances in which it is impossible to comply fully with conventional evaluation standards.


3. STEP THREE: STRENGTHENING THE EVALUATION PROCESS

Figure 18: Strengthening the Evaluation Process

165. During this phase the Evaluation Section selects an evaluation team and provides all the support necessary to make the evaluation useful. As part of the process, the Evaluation Section will provide the evaluation team with a "Learning Resource Group" that will advise the team as required during the evaluation exercise. The Evaluation Manager will communicate with the "Learning Resource Group" and facilitate the process between the evaluation team and the group.

166. This is the phase in which the evaluation team prepares a detailed operational plan, i.e. the inception report or work plan. This phase helps to focus the evaluation as much as possible on the important or essential questions. The inception phase includes identifying the data required for the evaluation questions set out in the ToRs (which may be refined at this stage), determining how information will be collected, when and from where, designing data collection instruments, and drafting the inception report.

167. The evaluation team will need to develop a work plan allocating responsibilities, time and tasks for each team member. The evaluation team, together with the Evaluation Manager and the "Learning Resource Group", will identify threats to the validity of the evaluation findings, conclusions and possible recommendations. As part of this stage, appropriate data collection tools and methods will be identified and analyzed.

3.1. Selecting an evaluation team

168. The composition of the team will depend on whether what is to be evaluated requires external consultants. It is generally the responsibility of IAOD to make that decision, depending on the complexity of the subject to be evaluated, the resources available for the evaluation, and the timing. Should the Evaluation Section decide to use external consultant(s), the procedures put in place by WIPO for the identification, selection and hiring of consultants will be followed. Consultant(s) will be selected according to the requirements and criteria set out in the terms of reference or the specification of the assignment.


169. Furthermore, all external experts working for the Evaluation Section will be requested to sign the IAOD Code of Conduct, which has been developed on the basis of the "UNEG Code of Conduct for Evaluation in the UN System" and the "UNEG Ethical Guidelines for Evaluations". The Code of Conduct applies to all consultants working for the IAOD Evaluation Section and to all stages of the evaluation process. The principles behind the Code of Conduct are consistent with the Standards of Conduct for the International Civil Service by which all UN staff are bound. The Code of Conduct, and the commitment which consultants sign in writing, can be found in Annex 4 of the Guidelines.

3.2. Setting up a learning resource group

170. Depending on the size and complexity of the evaluation, the Evaluation Section will constitute a "Learning Resource Group" with a designated person, usually the Evaluation Manager, to follow the independent evaluation process at regular intervals.

171. Such a group increases the ownership of the different partners and stakeholders. The evaluation team is provided with institutional knowledge as and when necessary and is guided through difficult stages of the evaluation process when decisions need to be taken. This kind of support helps the evaluators to keep their focus on the ToRs.

172. The key stages where the support of the "Learning Resource Group" is commonly required are the development of the ToRs, the preparation of the inception and draft reports, and the final presentation of the evaluation results.

3.3. Defining Evaluation Questions and Assessing their Evaluability

173. According to Evalsed18, defining evaluation questions is an essential part of the start-up of any evaluation exercise. Evaluation questions can be defined at different levels. They can be:

- Descriptive questions, intended to observe, describe and measure changes (what happened?)
- Causal questions, which strive to understand and assess relations of cause and effect (how and to what extent is that which occurred attributable to the program?)
- Normative questions, which apply evaluation criteria (are the results and impacts satisfactory in relation to targets, goals, etc.?)
- Predictive questions, which attempt to anticipate what will happen as a result of planned activities (will the measures to counter unemployment in this territory create negative effects for the environment or existing employers?)
- Critical questions, which are intended to support change, often from a value-committed stance (how can equal opportunity policies be better accepted by SMEs? What are effective strategies to reduce social exclusion?)

174. Ideally, evaluation questions should have the following qualities:

- The question must correspond to a real need for information, understanding or the identification of a new solution. If a question is of interest only in terms of new knowledge, without an immediate input into decision-making or public debate, it is more a matter of scientific research and should not be included in an evaluation.

18 European Commission. Regional Policy – Inforegio. Evalsed: The resource for the evaluation of socio-economic development. http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/guide/index_en.htm

- The question concerns an impact, a group of impacts, a result or a need. That is to say, it concerns, at least partly, elements outside the program, notably its beneficiaries or its economic and social environment. If a question concerns only the internal management of resources and outputs, it can probably be treated more efficiently in the course of monitoring or audit.

- The question concerns only one judgment criterion. This quality of an evaluation question may sometimes be difficult to achieve, but experience has shown that it is a key factor in the usefulness of the evaluation. Without judgment criteria clearly stated from the outset, the final evaluation report rarely provides conclusions.

175. The key evaluation questions should have corresponding indicators or assessment criteria identified; the next step is then to identify the data required for those indicators or assessment criteria. Indicators also describe in detail the information required to answer the evaluation questions. In most cases, as indicators do not exist at the beginning of an activity within WIPO, identifying assessment criteria may be critical. Setting assessment criteria helps to indicate whether or not the program has been a success in a particular area. In some cases there may be indicators, but they may not be relevant; hence the role of the evaluation team is to review them to ascertain their relevance and appropriateness.

176. An example is provided in Box 18 to demonstrate the relationship between evaluation questions and assessment criteria/indicators.

BOX 18: TOOL FOR DEFINING EVALUATION QUESTIONS

Evaluation criterion: Effectiveness (building capacity)

Key question: To what extent has the program developed or expanded the capacity of its staff to use appropriate evaluation methodologies and critical thinking?

Indicators/assessment criteria:
- Program staff are able to apply/use new evaluation methodologies (e.g. participatory approaches, cost/benefit analysis)
- Greater quality of the reports produced by program staff (in terms of analysis and critical thinking)

Sources of data:
- All relevant program documents, including annual work plans, technical reports, etc.
- Stakeholders such as program managers and officers, and senior management

Methods of data collection:
- Desk review
- Survey of all regional managers and project managers
- Semi-structured interviews, key informant interviews, focus groups
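The question-to-indicator-to-method mapping of Box 18 is essentially a small evaluation framework matrix, which can be kept as structured data and checked for completeness. The sketch below restates the box's single example row; the field names are illustrative choices, not a prescribed IAOD schema.

```python
# One row of an evaluation framework matrix, paraphrasing the Box 18 example.
evaluation_matrix = [
    {
        "criterion": "Effectiveness (building capacity)",
        "question": ("To what extent has the program developed or expanded the "
                     "capacity of its staff to use appropriate evaluation "
                     "methodologies and critical thinking?"),
        "indicators": [
            "staff apply/use new evaluation methodologies",
            "greater quality of reports produced by program staff",
        ],
        "sources": ["program documents", "program managers and officers",
                    "senior management"],
        "methods": ["desk review", "survey", "semi-structured interviews",
                    "focus groups"],
    },
]

# Before fieldwork, every question should have at least one indicator,
# one data source and one collection method.
for row in evaluation_matrix:
    assert row["indicators"] and row["sources"] and row["methods"], row["question"]
print(f"{len(evaluation_matrix)} question(s) ready for data collection")
```

Keeping the matrix as data rather than prose makes gaps (a question with no indicator, an indicator with no source) visible at a glance during the inception phase.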

177. Finally, it is noteworthy that not all questions that evaluation commissioners and program managers ask are suitable evaluation questions. Some are too complex or long-term, or require data that is not available. Others do not require evaluation at all and can be addressed through existing monitoring systems, by consulting managers, or by referring to audit or other control systems.

178. Therefore, once the evaluative questions have been identified, their evaluability has to be considered. A prior assessment has to be made of whether the evaluative questions are likely to be answerable given the available data. Will the evaluation team, with the available time and resources and using appropriate evaluation tools, be able to provide credible answers to the questions asked? This requires an evaluability study to be carried out.


179. For each evaluative question one needs to check, even very briefly:

- whether the concepts are stable;
- whether explanatory hypotheses can be formulated;
- whether available data can be used to answer the question, without any further investigation;
- whether access to the field will pose major problems.
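The four quick checks above amount to a per-question screening that can be expressed as a tiny function. The sketch below is illustrative only; the flag names are invented, and in practice each check is a judgment call rather than a boolean.

```python
# The four evaluability checks of paragraph 179, as illustrative boolean flags.
CHECKS = ("concepts_stable", "hypotheses_formulable",
          "data_available", "field_accessible")

def is_evaluable(question_flags):
    """Return (ok, failed_checks) for one candidate evaluation question.

    question_flags maps each check name to True/False; a missing
    flag is treated as a failed check."""
    failed = [c for c in CHECKS if not question_flags.get(c, False)]
    return (len(failed) == 0, failed)

# Hypothetical question that fails only the data check: it needs baseline
# reconstruction (or rewording) before it can go into the ToRs.
q = {"concepts_stable": True, "hypotheses_formulable": True,
     "data_available": False, "field_accessible": True}
ok, failed = is_evaluable(q)
print(ok, failed)
```

A question failing any check is either reworked, dropped, or flagged in the ToRs together with the constraint-management measures of Boxes 14 to 17.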

180. Figure 19 provides an overview of the steps in selecting priority evaluation questions; important considerations at the evaluability stage are the probabilities that evaluation results will be obtained and used.

Figure 19: Selection of priority questions. Source: Evalsed Guide (adapted by Flores J., 2010)

181. Once the required information is identified, it is easier to determine where and when it will be collected. It is important to identify what data is already available, and then identify the new information that needs to be collected from a range of different sources (this is discussed in detail in the data collection sub-section).

3.4. Defining the evaluation methodology and identifying the tools for data collection

182. “The methodology of an evaluation is usually composed of a combination of tools and techniques assembled and implemented in order to provide answers to the questions posed within the framework of an evaluation, with due regard for the time, budgetary and data constraints of the evaluation activity”.19 Once the data requirement and the sources of information are clearly identified, the next step is to identify and develop data collection methods and tools. The various methods and tools which could be used for collecting data for evaluation are presented in Annex 5 of the Guidelines. For the purpose of data collection most evaluations use a mix of primary and secondary data.

19. WIPO (2009). Self evaluation guidelines. IAOD Evaluation Section.


183. Primary data consists of information evaluators can observe or collect directly from WIPO stakeholders about their first-hand experience with the activity. This data is collected through surveys, meetings, focus group discussions, interviews or other methods that involve direct contact with the respondents. It can facilitate a deeper understanding of observed changes and of the factors that contribute to change.

184. The Evaluation Section ensures that the data collection methods selected are appropriate, as an evaluation may produce controversial results, in particular when the reliability and credibility of its data are questioned. It is therefore critical that this process is well managed to ensure good-quality, reliable and relevant evaluation results which meet the evaluation's purpose.

185. Another way of ensuring data reliability and credibility is to ensure that the code of conduct for the evaluation team drafted by IAOD is observed, as this can have a bearing on data quality: if interviewees feel confident and at ease with the evaluators or enumerators, they will provide better and more reliable information.

186. Secondary data, by contrast, is existing data that has been, or will be, collected by WIPO or others for different purposes. Secondary data can take many forms but usually consists of documentary evidence that has direct relevance for the purpose of the evaluation: nationally and internationally published reports, economic indicators, project or program plans, monitoring reports, previous reviews, evaluations and other records, country assistance plans and research reports.

187. Within WIPO, there is not always adequate baseline data for programs/projects; where project or activity records are available, they may not be organized in a way that can easily be tracked or used. The collection of secondary data can therefore be quite challenging, requiring time and resources. The Evaluation Section factors this into all its evaluations when planning an evaluation, in particular when preparing ToRs.

188. During this phase, the evaluation team selects specific techniques and instruments for collecting the data required to answer the evaluation questions, which are generally set out in an evaluation framework. The type of methods to be used, whether qualitative or quantitative, depends on the type of information to be collected, the resources and time available for the evaluation, the context, the availability of data and other variables. For example, for thematic evaluations covering many countries it may be useful to use case studies; semi-structured interviews could be used to collect feedback from the beneficiaries or users of a service provided by WIPO; and statistical surveys could be used for the performance assessment of a project where quantitative data is required.

189. Evaluators often start with a desk study, as this helps to develop or refine the evaluation questions set out in the ToRs, identify the main stakeholders to interview, prepare data collection instruments (e.g. questionnaire, checklist or other), and summarize relevant information from the various project reports, studies, etc. Once the existing data is identified, it is easier to identify the additional data that needs to be collected using various data collection methods and tools.

190. The various data collection methods that could be used, depending on the type of activity, are presented in Annex 5. The Evaluation Section ensures that a combination of data collection methods is used, as this strengthens data accuracy. "Triangulation" is a common best practice; it involves analyzing an issue using several tools (surveys, semi-structured interviews, direct observation, etc.) and collecting data from various sources. The aim is to cross-check the information and validate the results.


3.5. Data analysis

191. The technique of triangulation facilitates data analysis and allows evaluators to overcome the bias that comes from single information sources, single methods or single observations. Triangulation thus strengthens the accuracy, robustness and reliability of evaluation results.

192. The OECD/DAC defines triangulation as "The use of three or more theories, sources or types of information, or types of analysis to verify and substantiate an assessment."20 For the Evaluation Section the use of triangulation is a natural application due to its tripartite approach.

193. This phase overlaps with the data collection phase, as data is sometimes analyzed while collection is still ongoing. The first step in the analytical process is to prepare or organize the data according to the evaluation questions and assessment criteria. The next step is to aggregate the data and generate findings that relate, and are relevant, to the evaluation questions. The final step is to interpret the findings, or make judgments in relation to the assessment criteria, indicators, targets, benchmarks, etc., and draw conclusions, i.e. findings are compared against targets and other variables.

194. Findings need to be supported by evidence. The type of evidence ranges from observed fact (i.e. factual evidence) to reported statement (i.e. indirect evidence). The stronger the evidence, the stronger the validity of the findings. Findings should answer the evaluation criteria questions detailed in the work plan/inception report, using the data collection methods described in the methodology section of the report.

195. Conclusions and lessons are drawn from interpreting the findings; this is the stage where the evaluation team makes judgments about the achievements of a program using the evaluation questions prepared under each evaluation criterion, such as relevance, effectiveness, efficiency, sustainability, impact, coherence, coordination and coverage. Conclusions should provide clear answers to the evaluation questions that have been the basis of the evaluation.

196. Lessons learned are conclusions that have the potential to be transferred to the same or a similar activity in a different area or context; recommendations aim at improving the activity by providing more specific advice on which areas need improvement or change.

197. Recommendations should derive from the conclusions and be presented or formulated in a way that management can understand, as they will form the basis for decision-making. Recommendations should be numbered, starting with the most important.
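Triangulation as defined by the OECD/DAC can be approximated mechanically: a finding is only treated as corroborated when three or more independent sources point the same way. The sketch below is a deliberately simplified illustration, not an IAOD tool; the source labels and status values are invented for the example.

```python
def triangulate(finding, evidence):
    """Classify a finding by how many independent sources support it.

    evidence maps a source name to "supports", "contradicts" or "no data".
    """
    supports = sum(1 for v in evidence.values() if v == "supports")
    contradicts = sum(1 for v in evidence.values() if v == "contradicts")
    if contradicts:
        return "contested"     # sources disagree: investigate before reporting
    if supports >= 3:
        return "corroborated"  # three or more sources, per the OECD/DAC definition
    return "weak"              # too few sources: hedge, or collect more data

# Hypothetical finding cross-checked against three independent sources.
evidence = {"survey": "supports",
            "interviews": "supports",
            "desk review": "supports"}
print(triangulate("training improved report quality", evidence))  # corroborated
```

In a real evaluation the classification is of course a matter of judgment, not counting; the point of the sketch is that every reported finding should carry an explicit trail of the sources that support it.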

BOX 19: DISTINCTION BETWEEN CONCLUSIONS, RECOMMENDATIONS AND LESSONS LEARNED

Evaluation reports must distinguish clearly between findings, conclusions and recommendations. The evaluation presents conclusions, recommendations and lessons learned separately and with a clear logical distinction between them. Conclusions are substantiated by findings and analysis. Recommendations and lessons learned follow logically from the conclusions.

(DAC Evaluation Quality Standard)

20 OECD/DAC (2002). Glossary of key terms in evaluation and results based management.


3.6. Drafting the inception report

198. The evaluation team should prepare a detailed operational plan, i.e. the inception report. This report should include the evaluation issues and questions to be addressed; the data required to answer the evaluation questions, which by this stage have been refined and finalized; the sources of information, i.e. where the information will come from; the methodology to be used, including the identification and design of the data collection instruments and techniques; when the data collection will take place; and who the target groups will be.

199. The report should also cover the organization of the evaluation activities, e.g. budgeting, schedule and travel. Should external consultants be used for the independent evaluation, the inception report written by them should be submitted to the Evaluation Section for discussion and approval. It is critical to assess the soundness of the methodology proposed by the evaluators, as the production of reliable data that allows for valid evaluative thinking is essential for learning, decision-making and accountability.

200. During the preparatory and inception phases, it is critical to consider how the evaluation results will be disseminated. It is therefore the responsibility of the Evaluation Section to develop a clear dissemination strategy. Some thoughts on this topic are provided in the dissemination sub-section.

4. STEP FOUR: REPORT PREPARATION

201. Once the analysis is finished, the report should be drafted according to the reporting structure specified in the ToR or agreed within the evaluation team. Agreeing on the structure of the report, or drafting the table of contents, during the inception phase can be helpful for the evaluation team: it helps organize the information and focus the analysis according to an agreed structure, and it assists in distributing responsibilities among the evaluation team members.

202. The draft reports of independent evaluations carried out in the framework of the Evaluation Plan will be subject to consultation with program and project managers, and their comments will be duly reflected in the report.

Figure 20: Report Preparation


203. The evaluation team leader has the ultimate responsibility for submitting the draft report to the IAOD Evaluation Section, which will ensure that the draft report is distributed to key stakeholders within the Organization for wider consultation and comments. The Evaluation Section assesses the quality of the report and makes its comments. It also coordinates the feedback process, i.e. the comments made by the various stakeholders.

204. In some situations, depending on the budget and time available for the evaluation exercise, a workshop or meeting can be organized to invite the key users of the evaluation findings to a debriefing session by the IAOD Evaluation Section. This gives key stakeholders an opportunity to provide their comments, but also to clarify aspects which the evaluation team might not have picked up during the analysis phase, and therefore to refine the findings, conclusions and recommendations.

205. The final report should address as far as possible the feedback received from key stakeholders. It is the responsibility of the evaluation team leader to submit this report to the IAOD Evaluation Section, which will in turn submit it to the Director, IAOD for approval. An example of the requirements for independent evaluation reports is attached in Annex 6 of the Guidelines. Final evaluation reports will be submitted to the Director General by the Director, IAOD.

5. STEP FIVE: EVALUATION REPORT DISSEMINATION

Figure 21: Dissemination

206. As mentioned throughout this document, the objectives of an independent evaluation are learning, accountability and facilitating decision-making. It is therefore important that the results of an evaluation be disseminated, in particular to those who need them. This implies that the users of the evaluation are identified at the outset, which is one of the requirements of an evaluation. As mentioned earlier, different types of stakeholders may be interested in the evaluation results: primary users and audiences (refer to Chapter 2, Section 4, Table 2).

207. Overall, all evaluation reports will be made available to the general public. Evaluation reports will be published on the WIPO website after approval by the Director, IAOD.



Dissemination of the evaluation report should not only be initiated by the Evaluation Section but should also be supported by the key users of the evaluation. In addition, an Annual Evaluation Report on the implementation of the Evaluation Plan will be prepared, summarizing all evaluation activities, lessons learned and progress on the implementation of agreed evaluation recommendations. The Annual Evaluation Report will be submitted to the Director General and presented to the WIPO General Assembly.

208. The Evaluation Section will produce an evaluation summary for each evaluation. This will provide an overview of the main evaluation conclusions and recommendations, together with "Insights" containing one learning theme from the evaluation, to stimulate discussion among practitioners and other development specialists on important issues.

209. As dissemination is crucial, a clear strategy needs to be prepared, outlining which matters need to be developed further and considering the various users and audiences interested in the evaluation results. The strategy should also identify the different dissemination modes and the roles and responsibilities of the various bodies within WIPO. As previously mentioned, this should be part of the evaluation planning. The Evaluation Section designs the dissemination strategy and identifies those who should support the dissemination exercise.

210. A few factors need to be considered in deciding to whom evaluation results are disseminated or communicated, and in what format:

- If the primary users are Program Managers, the Evaluation Section needs to discuss with them whether the results of the evaluation, as presented in the evaluation report, satisfy their needs in terms of quality and clarity. Without clarity about the findings and recommendations, it would be difficult for Program Managers and Directors to prepare management responses and develop action plans to address the issues raised in the evaluation report. In principle, Managers should have taken part in different stages of the evaluation process, for instance in the preparation of the ToRs, the feedback process and the presentation of the draft report. The Evaluation Section will ensure that the recommendations upon which decision-makers will rely for improving their activities are presented clearly. This will be followed up through a "Management Response Tool"21 in order to track the implementation of the recommendations resulting from the evaluation exercises.

- If the primary users are the Senior Managers or Member States for policy setting and allocation of resources, then the users will be more diverse and dissemination of evaluation results may become more complex. What will be required in this instance is a short brief which summarizes the evaluation results.

- If the intention is to inform Member States generally on the results of an activity, the task of disseminating the evaluation results could go beyond the role of the IAOD Evaluation Section; for instance, through the preparation of annual reports, the department which deals with communication may be in charge of this, in consultation with the IAOD Evaluation Section.

- If the target audiences are people who are generally interested in the results but are not the primary users, these categories need to be defined. If they are program managers or other WIPO staff, depending on their level of need and their knowledge of the subject, the evaluation reports will be disseminated through the intranet and internet. The same applies for dissemination to external audiences. Furthermore, the Evaluation Section will develop brochures, articles and public reports summarizing the results of the evaluation report.

21 The Evaluation Section recommendations made in the reports will be included in the Database of WIPO Oversight recommendations and followed up by the Director General, the Audit Committee and the Director, IAOD as needed.
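The audience-dependent choices above can be summarized, purely as an illustrative sketch, in a small lookup structure. The audience labels and formats below are paraphrased from the text, and the fallback format is an assumption, not an official WIPO mapping:

```python
# Hypothetical sketch of the audience-to-format decisions described above.
# Labels and formats are illustrative only, not an official WIPO policy.

DISSEMINATION_FORMATS = {
    "program managers": ["full report", "debriefing session", "management response"],
    "senior managers / member states": ["short brief", "evaluation summary"],
    "member states (general information)": ["annual report"],
    "general audiences": ["intranet/internet posting", "brochures", "articles"],
}

def formats_for(audience):
    """Return the planned dissemination formats for an audience type.

    Unknown audiences fall back to the evaluation summary (an assumption).
    """
    return DISSEMINATION_FORMATS.get(audience, ["evaluation summary"])

print(formats_for("program managers"))
# ['full report', 'debriefing session', 'management response']
```

Making the mapping explicit at planning time is the point of the dissemination strategy: each audience is matched to a format before the report is finalized.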

6. STEP SIX: EVALUATION IN USE

211. According to Bamberger M. (2006), evaluation reports are underutilized due to a number of factors. For example, not enough budget or time is invested in the evaluation, which limits the number of key stakeholders that can be consulted and reduces ownership of the evaluation report. Evaluations also become underutilized when the methodology is weak or the wrong questions were defined, producing irrelevant findings. Several practical challenges further limit the utilization of evaluation results: implementers of an activity may be understaffed and lack the time to attend briefing meetings on the evaluation or even to read the report; budget may not be available to bring all key stakeholders together for briefing meetings; and so on. There are also political constraints. For instance, if stakeholders agree with the findings, there are rarely any questions about the methodology; if the findings are negative, however, stakeholders may claim that the evaluation methodology was unscientific, which can be a convenient excuse for ignoring the findings. These constraints may also affect the quality of the evaluation and consequently its credibility. It is therefore very important to encourage the active involvement of key evaluation users right from the start, to ensure that they understand and accept the strengths and limitations of the proposed methodology.

Figure 22: Evaluation in Use

212. In order to increase the utilization of evaluation reports, the Evaluation Section makes the effort, whenever possible, to involve key users of the evaluation reports in the various stages of the evaluation process. Furthermore, the Section will follow a "Real World Evaluation" approach and will take into consideration the recommendations of Bamberger M. (2006):

- Summative evaluations will be combined with formative evaluations so that evaluation users see constant feedback and benefits from evaluations.


Page 66: WIPO Independant Evaluation Guidelines · WIPO INDEPENDENT EVALUATION GUIDELINES 5 CHAPTER 1 ... HOW EVALUATION WORKS IN THE UN 15. As indicated in the “UNEG Institutional Arrangements

- Meta-evaluations are planned to assess evaluation processes and results, and to share knowledge with wider WIPO programs.

- Evaluation capacity of key stakeholders will be built, whenever possible, to establish a common understanding of independent evaluations. Evaluation capacity building is a process that may continue over many years and can become a continuous exercise, since in many organizations staff turnover rates can be very high.

- All evaluations are accompanied by a communication strategy to ensure that findings and recommendations will reach the right audiences.

- A follow-up action plan on the evaluation recommendations will be developed.

All independent evaluations will have a management response and are subject to follow-up and reporting. All queries on these procedures should be addressed to the IAOD Evaluation Section. The management response will be brief; it should comment on the clarity and usefulness of the recommendations and set out Management's position on the evaluation. The Evaluation Section recommendations made in the reports will be included in the Database of WIPO Oversight recommendations and followed up by the Director General, the Audit Committee and the Director, IAOD as needed. The Evaluation Section will specifically follow up in subsequent related evaluations and in the Evaluation Section Annual Reports as necessary.
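A minimal sketch of how such a follow-up might be represented in code is given below. The field names, statuses and example recommendations are invented for illustration; this is not the actual WIPO Oversight recommendations database:

```python
# Hypothetical sketch of a record structure for following up evaluation
# recommendations, loosely modeled on the process described above.
# All field names, statuses and sample data are invented.

from dataclasses import dataclass

@dataclass
class Recommendation:
    number: int            # recommendations are numbered, most important first
    text: str
    management_response: str = ""
    status: str = "open"   # e.g. "open", "in progress", "implemented"

def open_items(recommendations):
    """Recommendations still awaiting implementation, most important first."""
    pending = [r for r in recommendations if r.status != "implemented"]
    return sorted(pending, key=lambda r: r.number)

recs = [
    Recommendation(2, "Clarify reporting lines", status="implemented"),
    Recommendation(1, "Define baseline indicators"),
]
print([r.number for r in open_items(recs)])  # [1]
```

The design choice worth noting is that status and management response live on the recommendation record itself, so each follow-up cycle only has to filter and re-sort the same list.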

Page 67: WIPO Independant Evaluation Guidelines · WIPO INDEPENDENT EVALUATION GUIDELINES 5 CHAPTER 1 ... HOW EVALUATION WORKS IN THE UN 15. As indicated in the “UNEG Institutional Arrangements

ANNEXES 

…………………………………………………………………………….  

IAOD Evaluation Section 


ANNEX 1: REFERENCES

ALNAP 2006. Evaluating Humanitarian Action Using the OECD-DAC Criteria: An ALNAP Guide for Humanitarian Agencies. Overseas Development Institute, London, UK.
Bamberger M., Rugh J. and Mabry L. 2006. RealWorld Evaluation: Working under Budget, Time, Data, and Political Constraints. Sage Publications, Inc., USA.
Blackman R. 2003. Roots 5 – Project Cycle Management. Tearfund, Teddington, UK. ISBN 1904364217.
CARE International 1997. Design Workshop Report.
Danida 2006. Evaluation Guidelines. Evaluation Department, Ministry of Foreign Affairs.
DFID 2005. Guidance on Evaluation and Review for DFID Staff. Evaluation Department, London, UK.
DFID 2009. Practice Paper: Guidance on Using the Revised Logical Framework. London, UK.
FAO 2007. Auto-Evaluation Guidelines. Evaluation Service, Rome, Italy.
Henweg K. and Steiner K. 2002. Impact Monitoring and Assessment – Instruments for Use in Rural Development Projects with a Focus on Sustainable Land Management. Volume 1 – Procedure. Germany.
IDRC 2004. Evaluation Guidelines. Paper 7 – Identifying the Intended User(s) of an Evaluation. http://www.idrc.ca/en/ev-32492-201-1-DO_TOPIC.html
IFAD 2002. Managing for Impact in Rural Development: A Guide for Project M&E. Rome, Italy.
ILO 2008. Results Based Management in the ILO: A Guidebook. Geneva, Switzerland.
JIU 1978. Glossary of Evaluation Terms. General Assembly of the United Nations (JIU/REP/78/5).
JIU 1997. Report of the Joint Inspection Unit. General Assembly Official Records, 51st Session, Annex 1 – JIU Standards and Guidelines, A/51/34/Annex 1. ISSN 0255-1969. New York, USA.
Lusthaus C., Adrien M., Anderson C. and Carden F. 1999. Enhancing Organizational Performance: A Toolbox for Self-Assessment. International Development Research Centre (IDRC), Canada. ISBN 0-88936-870-8.
National Council for Voluntary Organisations (NCVO) 2003. Measuring Impact – A Guide to Resources. www.NCVO-vol.org.uk
Nokes S. 2007. The Definitive Guide to Project Management. 2nd Edition. Financial Times / Prentice Hall, London. ISBN 978 0 273 71097 4.
OIOS. Program Performance Assessment in Results-Based Management Training Program. http://www.un.org/depts/oios/mecd/un_pparbm/p007.htm
OECD/DAC 1991. Principles for Evaluation of Development Assistance. Paris, France.
OECD/DAC 2002. Glossary of Key Terms in Evaluation and Results Based Management. Paris, France.
Patton M. Q. 2002. Qualitative Research & Evaluation Methods. 3rd Edition. Sage Publications, Thousand Oaks, London, New Delhi.
Patton M. Q. 1997. Utilization-Focused Evaluation. 3rd Edition. Sage, Thousand Oaks, CA.
Picciotto R. 2008. Evaluation Independence at DFID. Independent Advisory Committee for Development Impact, DFID, London, UK.
Rossi P., Freeman H. and Lipsey M. 1998. Evaluation: A Systematic Approach. 6th Edition. Sage Publications, Inc., USA.
Scriven M. 1991. Evaluation Thesaurus. 4th Edition. Sage Publications, Newbury Park, London, New Delhi.
The Institute of Internal Auditors. What Is Internal Audit? http://www.iia.org.uk/en/about_us/What_is_internal_audit.cfm
UN Secretariat 2000. Regulations and Rules Governing Program Planning, the Program Aspects of the Budget, the Monitoring of Implementation and the Methods of Evaluation. Secretary-General's Bulletin, ST/SGB/2000/8.
UNDP Evaluation Office 2002. Handbook on Monitoring and Evaluating for Results. Evaluation Office, New York, USA.
UNEG 2008. UNEG Code of Conduct for Evaluation in the UN System. Foundation Document. UNEG/FN/CoC(2008).
UNEG 2008. UNEG Ethical Guidelines for Evaluation. Foundation Document. UNEG/FN/ETH(2008).
UNEG 2007. Reference Document on Oversight and Evaluation in the UN System. Prepared by the UNEG Evaluation and Oversight Working Group for presentation and discussion at the UNEG Annual General Meeting.
UNEG 2008. Issues Paper – Distinctiveness of Evaluation. Prepared by the Distinctiveness of Evaluation Function (DEFT) Task Force for presentation at the UNEG AGM, 2-4 April 2008, Geneva, Switzerland.
UNEG 2005. Standards for Evaluation in the UN System. Foundation Document. UNEG/FN/Standards(2005).
UNEG 2005. Norms for Evaluation in the UN System. Foundation Document. UNEG/FN/Norms(2005).
UNEG 2007. Reference Document: Evaluation in the UN System. UNEG/REF(2007)3.
UNEG 2007. Institutional Arrangements for Governance Oversight and Evaluation in the UN. UNEG/REF(2007)4.
UNICEF 2006. Evaluation Working Papers – Issue #5: New Trends in Development Evaluation. UNICEF Regional Office for CEE/CIS and IPEN.
UNICEF 2005. Monitoring and Evaluation, Program Policy and Procedure Manual, p. 7.
WIPO 2007. Evaluation Policy. Geneva, Switzerland.
WIPO 2009. WIPO Self-Evaluation Guidelines. IAOD Evaluation Section, Geneva, Switzerland.
WIPO 2009. 2010-2015 Evaluation Strategy. IAOD Evaluation Section, Geneva, Switzerland.
WIPO 2009. 2010-2011 Biennial Evaluation Plan. IAOD Evaluation Section, Geneva, Switzerland.


ANNEX 2: GUIDELINES GLOSSARY

213. Most of the terms appearing in the Guidelines are used in the development community and by evaluation practitioners. In this publication, all terms are defined in the context of monitoring and evaluation, although they may have additional meanings in other contexts.

214. The glossary is not intended to provide a list of all existing definitions used in relation to evaluations; it is provided only to assist users of the Guidelines with clear definitions of the terms used in them. All definitions in the Guidelines have been extracted and partially adapted from the following sources:

OECD-DAC, http://www.oecd.org/dac/htm/glossary.htm
IFAD, http://www.ifad.org/evaluation/guide/index.htm
UNDP Evaluation Office, http://www.undp.org/evaluation/documents/HandBook/ME-HandBook.pdf
OIOS, http://www.un.org/depts/oios/mecd/un_pparbm/p002.htm
JIU, http://www.unjiu.org/data/en/guidelines/guidelines_en.pdf
ALNAP, http://www.alnap.org/pool/files/eha_2006.pdf
DFID, https://www.dfid.gov.uk/Documents/publications/evaluation/guidance-evaluation.pdf
The Institute of Internal Auditors, http://www.iia.org.uk/en/about_us/What_is_internal_audit.cfm

A

Accountability
Relates to the obligations of organizations to act in accordance with clearly defined responsibilities, roles and performance expectations, and to justify expenditures, decisions and the results of the discharge of authority and official duties, including duties delegated to a subordinate unit or individual. For program and project managers, it is the responsibility and obligation to provide a true and fair view of performance and the results of operations, as well as evidence to stakeholders that a program or project is effective and conforms with planned results and legal and fiscal requirements. In organizations that promote learning, accountability may also be measured by the extent to which managers use and ensure credible monitoring and evaluation findings and reporting. For public sector managers and policy-makers, accountability is to taxpayers/citizens. For WIPO, accountability is to its Member States.

Activities
Actions taken or work performed in an activity to produce specific outputs by using inputs such as funds, technical assistance and other types of resources.

Appraisal
An overall assessment of the relevance, feasibility and potential sustainability of a development activity prior to a funding decision. In international development terms, appraisal means a critical assessment of the potential value of an activity, before a decision is made to implement it, in order to decide whether the activity represents an appropriate use of the Organization's resources.

Attribution
The causal link between observed (or expected) changes and a specific activity. It relates to the change and effect which is due to the activity, but also to difficulties in


‘boundary judgment’, i.e. deciding what effects to select for consideration,22 as effects can be numerous and varied.

Attribution gap
According to Henweg and Steiner (2002), "the impact chain (utilization, effect, benefit/drawback, impact) needs time to develop, time during which the number of factors and actors as well as their interactions increases. This makes it more and more difficult to attribute a change to a single factor or program/project. This is called the 'attribution gap'." Even with costly investigations, a program/project can only narrow, but will not close, this gap.

Audit
Internal auditing is an "independent, objective assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control, and governance processes".23 The purpose, authority, and responsibility of the internal audit activity are formally defined in the internal audit charter, consistent with the Definition of Internal Auditing, the Code of Ethics, and the Standards promulgated by the Institute of Internal Auditors (IIA).

B

Baseline data
Data that describes the situation to be addressed by an activity and that serves as the starting point for measuring the performance of that activity. A baseline study is the analysis describing the situation prior to receiving assistance. It is used to determine the results and accomplishments of an activity and serves as an important reference for evaluation.

Benchmark
Reference point or standard against which progress or achievements may be compared, e.g. what has been achieved in the past, what other comparable organizations such as development partners are achieving, what was targeted or budgeted for, or what could reasonably have been achieved under the circumstances. It also refers to an intermediate target to measure progress in a given period.

Beneficiaries
Individuals and/or institutions whose situation is supposed to improve (the target group), and others whose situation may improve. Also refers to a limited group among the stakeholders who will directly or indirectly benefit from the project.

Best practices
Planning and/or operational practices that have proven successful in particular circumstances. Best practices are used to demonstrate what works and what does not, and to accumulate and apply knowledge about how and why they work in different situations and contexts.

Bias
Refers to statistical bias. Bias is an inaccurate representation that produces systematic error in a research finding. Bias may result in overestimating or underestimating characteristics or trends. It may result from incomplete information or invalid data collection methods, and may be intentional or unintentional.

22 Ministry of Foreign Affairs of Denmark, Danida (2006). Evaluation Guidelines. Denmark.
23 Source: The Institute of Internal Auditors. http://www.iia.org.uk/en/about_us/What_is_internal_audit.cfm

C


Coherence
The need to assess and ensure that there is consistency within an organization's activities and policies, as well as between the organization's policies and national and international policies.

Conclusion
A reasoned judgment based on a synthesis of empirical findings or factual statements corresponding to a specific circumstance. Example: the research and development program of the Agricultural Science and Technology Institute is strong in its technical aspects but weak in its linkage with target groups.

Co-ordination
Coordination is "the systematic use of policy instruments to deliver support in a cohesive and effective manner. Such instruments include strategic planning, gathering data and managing information, mobilizing resources and ensuring accountability, orchestrating a functional division of labor, negotiating and maintaining a serviceable framework with host political authorities and providing leadership" (Minear et al., 1992).

Cost-effectiveness
The relation between the costs (inputs) and the results produced by an activity. An activity is more cost-effective when it achieves its results at the lowest possible cost compared with alternative projects with the same intended results.

Country level evaluations
These evaluations provide an assessment of the performance and impact of an organization's supported activities in countries with a large portfolio.

Coverage
Coverage involves determining who was supported by the activity, and why. In determining why certain groups were covered or not, coverage is linked closely to effectiveness, and often refers to the numbers or percentages of the population to be covered by the activity.
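The cost-effectiveness comparison defined above can be shown with a small worked example. All figures are invented for illustration: two alternative activities aim at the same kind of result, and the one with the lower cost per unit of result is the more cost-effective:

```python
# Hypothetical illustration of a cost-effectiveness comparison between two
# alternative activities with the same intended result (numbers invented).

def cost_per_result(total_cost, results_achieved):
    """Cost of producing one unit of the intended result."""
    return total_cost / results_achieved

option_a = cost_per_result(120_000, 400)   # 300.0 per result unit
option_b = cost_per_result(90_000, 250)    # 360.0 per result unit

# Option A achieves the same kind of result at a lower unit cost,
# so it is the more cost-effective of the two.
print(min(("A", option_a), ("B", option_b), key=lambda x: x[1]))  # ('A', 300.0)
```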

D

Data
Specific quantitative and qualitative information or facts that are collected.

Development effectiveness
The extent to which an institution or activity has brought about targeted change in a country or in the life of an individual beneficiary. Development effectiveness is influenced by various factors, beginning with the quality of the project design and ending with the relevance and sustainability of desired results.

E

Effectiveness
This is used to measure the extent to which the activity's expected results or specific intermediate objectives have been achieved or are expected to be achieved. An activity is considered effective when its outputs (services or products) produce the desired objectives and expected results. Assessing effectiveness involves an analysis of the extent to which stated activity objectives are met.

Efficiency


Efficiency measures how inputs (i.e. expertise, time, budget, etc.) are converted into results; it expresses the relationship between outputs (the services produced by an activity) and inputs (the resources put in place). An activity is considered efficient when it uses the least costly resources appropriate to achieve the desired outputs. In general, assessing efficiency requires comparing alternative approaches which can achieve the same outputs. As in the case of effectiveness, it may be easier to assess the efficiency of less complex activities than of others.

Evaluation team
Group of specialists responsible for the detailed planning and conduct of an evaluation. An evaluation team writes the evaluation report.

Evaluator
An individual involved in all stages of the evaluation process, from defining the terms of reference and collecting and analyzing data to making recommendations and taking corrective action or making improvements.

Evaluability assessment
Technical part of the pre-evaluation, which takes stock of available knowledge and assesses whether technical and institutional conditions are sufficient for reliable and credible answers to be given to the questions asked. Concretely, it consists of checking whether an evaluator using appropriate evaluation methods and techniques will be capable, in the time allowed and at a cost compatible with existing constraints, of answering the evaluative questions with a strong probability of reaching useful conclusions. In some formulations it also includes an assessment of the likelihood of evaluation outputs being used.

Ex-ante evaluation
(At the design stage of an activity.) A process that supports the identification and design of an activity, initiative, project, program, strategy, policy, unit, organization or sector. Its purpose is to gather information and carry out analyses that help to define objectives and baselines, and to ensure that these objectives can be met, that the instruments used are cost-effective and that reliable later independent evaluation will be possible.

Experimental model
Laboratory model consisting of creating two groups that are equivalent to each other. One group (the program or treatment group) gets the program and the other group (the comparison or control group) does not. In all other respects the groups are treated the same: they have similar people, live in similar contexts, have similar backgrounds, and so on. When differences are observed in the results between these two groups, the differences must be due to the only thing that differs between them, namely that one got the program/treatment and the other did not.

Ex-post evaluation
(At the end of an activity.) Undertaken at the end of an activity. Ex-post evaluations, like the other two types of evaluation, focus on the international evaluation criteria of relevance, effectiveness, efficiency and sustainability, but they also focus on impact, coherence, coordination and coverage. Impact evaluations report on the development results achieved and focus on the intended and unintended, positive and negative outcomes and impacts.

External factors
According to OIOS, external factors should be considered at both the planning and evaluation stages. The extent of their effect should be assessed when an expected accomplishment is not fully realized. External factors should not be cited as an "excuse" for non-performance.
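The logic of the experimental model described above can be shown with a minimal sketch (all outcome numbers invented): since the two groups are equivalent apart from the program, the difference in their mean outcomes is attributed to the program:

```python
# Hypothetical sketch of the experimental model: two equivalent groups,
# only one of which receives the program; the difference in mean outcomes
# is attributed to the program. All numbers are invented for illustration.

def mean(values):
    return sum(values) / len(values)

treatment_group = [14, 16, 15, 17]   # outcomes with the program
control_group = [10, 11, 9, 10]      # outcomes without the program

program_effect = mean(treatment_group) - mean(control_group)
print(program_effect)  # 5.5
```

In practice the attribution gap discussed in the glossary applies: the longer the impact chain, the harder it becomes to hold "all other respects" equal between the two groups.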

F


Feedback
As a process, feedback consists of the organization and packaging, in an appropriate form, of relevant information from monitoring and evaluation activities, the dissemination of that information to target users and, most importantly, the use of the information as a basis for decision-making and the promotion of learning in an organization. Feedback as a product refers to information that is generated through monitoring and evaluation and transmitted to parties for whom it is relevant and useful. It may include findings, conclusions, recommendations and lessons from experience.

Finding
Factual statement about the activity based on empirical evidence gathered through monitoring and evaluation activities.

Formative Evaluation
An evaluation intended to furnish information for guiding program improvement (Scriven, 1991).

G

Goal (Impact Level)
The higher-order objective to which a development activity is intended to contribute. Results of an activity are generally meant to contribute to the wider development objective or goal. The goal reflects the situation expected at the end of the activity; it is not intended to be achieved by the activity alone, but is a higher-level situation that the activity will contribute towards. Well-defined objectives have the following characteristics: they are time-limited; they describe the situation that should be observed at the end of the time period; they specify an observable end-state, i.e. they are neither vague nor a projection of activities; and their achievement can be planned. Badly defined objectives are essentially meaningless.

I

Impact
Impact measures the effects of an activity on its target groups; these effects or changes may be positive or negative, intended or unintended. Impact is a broader consequence of an activity at the social, economic, political, technical or environmental level. It examines the longer-term consequences of achieving or not achieving the activity's objectives, and the issue of wider socioeconomic change.

Independent evaluation
An evaluation carried out by persons separate from those responsible for managing, making decisions on, or implementing the activity. The credibility of an evaluation depends in part on how independently it has been carried out, i.e., on the extent of autonomy and the ability to access information, carry out investigations and report findings free of political influence or organizational pressure.

Indicators


Quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an activity, or to help assess the performance of a development actor. Indicators at the goal level are called impact indicators, those at the purpose level outcome indicators, and those at the output level output indicators.

Input
A means mobilized for the conduct of program or project activities, i.e., financial, human and physical resources.

Inputs
Generally the resources, in terms of finance, manpower and material, used to undertake the activities of a project.

Inspection
According to the UN Joint Inspection Unit (JIU) definition, "an inspection is an independent, on-site review of the operations of organizational units to determine the extent to which they are performing as expected. An inspection examines the functioning of processes or activities to verify their effectiveness and efficiency. An inspection compares processes, activities, projects, and programs to established criteria (e.g., applicable rules and regulations, internal administrative instructions, good operational practices of other units within or outside the organization concerned), and does so in view of the resources allocated to them".

Investigation
"Investigation is a legal inquiry into the conduct of, or action taken by, an individual or group of individuals, or a situation or occurrence resulting from accident or force of nature. An investigation pursues reports of alleged violations of rules, regulations and other established procedures, mismanagement, misconduct, waste of resources or abuse of authority with a view to proposing corrective management and administrative measures, as appropriate, bringing the matter to the attention of suitable legal authorities and/or internal offices of investigation. An investigation compares the subject under investigation to established criteria (e.g., rules and regulations, codes of conduct, administrative instructions and applicable law)". JIU (1978).

L

Lesson learned
Learning from experience that is applicable to a generic situation rather than to a specific circumstance.

Logical Framework Approach (LFA)
A methodology that logically relates the main elements in program and project design and helps ensure that the activity is likely to achieve measurable results. The methodology is mainly used in the design, monitoring and evaluation of activities. As part of this methodology a logical framework matrix is developed.

Logical framework matrix
A project or program should have an internal logic between the various levels of its hierarchy of objectives, i.e. a cause-and-effect relationship ensuring consistency among impact, outcomes, outputs, activities and inputs, as well as indicators, targets, milestones, baselines, assumptions and risks. The approach helps to identify the strategic elements (inputs, outputs, purposes and goal) of a program, their causal relationships, and the external factors that may influence the success or failure of the program.


M

Meta-evaluation
The term is used for evaluations designed to aggregate findings from a series of evaluations. It can also denote the evaluation of an evaluation, to judge its quality and/or assess the performance of the evaluators.

Mid-term Evaluations (During implementation of an activity)
Usually undertaken around the mid-point of the implementation of an activity, project, program, strategy or policy. They measure and report on performance to date and indicate adjustments that may need to be made to ensure successful implementation; these adjustments may include modifying the results-based framework. Mid-term evaluations help keep data collection minimal and prioritized on information that informs decision-making and learning.

Monitoring
A continuous function undertaken by program and project staff during the implementation of an activity. Monitoring aims primarily to provide managers and main stakeholders with regular feedback and early indications of progress, or lack thereof, in the achievement of intended results. Monitoring tracks the actual performance or situation against what was planned or expected according to pre-determined standards. It generally involves collecting and analyzing data on implementation processes, strategies and results, and recommending corrective measures. The UNEG in its paper on "Distinctiveness of Evaluation" has highlighted the key differences between monitoring and evaluation (see Table 1).

O

Organizational Assessments
Aimed at understanding and improving performance by looking at four key pillars: effectiveness, efficiency, financial sustainability and relevance. Organizational assessments can be used as a diagnostic tool for organizations implementing an internal change or strategic planning process, or both. Organizational assessment goes beyond measuring the results of an organization's programs, products and services. (Lusthaus C., Adrien M., Anderson C. and Carden F., 1999)

Outcome
Actual or intended change in the development conditions that the activities are seeking to support. It describes a change in development conditions between the completion of outputs and the achievement of impact.

Outputs
Tangible products (including services) of a program or project that are necessary to achieve the objectives of the program or project. Outputs relate to the completion (rather than the conduct) of activities and are the type of results over which managers have a high degree of influence. Outputs are generally the deliverables that project managers are expected to deliver and for which they are accountable.

Oversight
Oversight refers to a key activity of the governance and management of an organization, which ensures that the organization and its component units perform in compliance with legislative mandates and policy, with full accountability for its finances as well as for the efficiency, effectiveness and impact of its work, with adherence to standards of professionalism, integrity and ethics, while adequately managing and minimizing risk. As in many UN organizations, the WIPO independent evaluation function has been positioned under the Internal Audit and Oversight Division (IAOD) in order to unify the four important oversight functions of Evaluation, Audit, Inspection and Investigation.

P

Performance
The degree to which an activity operates according to specific criteria/standards/guidelines or achieves results in accordance with stated goals or plans.

Performance assessment
Independent assessment or self-assessment by a program, comprising outcome, program, project or individual monitoring, reviews, end-of-year reporting, end-of-project reporting, institutional assessments and/or special studies. The Program Performance Report (PPR) conducted by WIPO on a biennial basis is an example of a self-assessment performance exercise.

Performance indicator
A particular characteristic or dimension used to measure intended changes defined by an organizational unit's results framework. Performance indicators are used to observe progress and to measure actual results compared to expected results. They serve to answer "how" or "whether" a unit is progressing towards its objectives, rather than "why" or "why not" such progress is being made. Performance indicators are usually expressed in quantifiable terms, and should be objective and measurable (e.g., numeric values, percentages, scores, and indices).

Performance management
The generation of management demand for performance information and its use and application for continuous improvement. It includes "performance measurement".

Performance measurement
The collection, interpretation of, and reporting on data for performance indicators which measure how well programs or projects deliver outputs and contribute to the achievement of higher-level aims (purposes and goals). Performance measures are most useful when used for comparisons over time or among units performing similar work. It is a system for assessing the performance of development initiatives against stated goals; also described as the process of objectively measuring how well an agency is meeting its stated goals or objectives.

Pre-post design (before and after)
A control design in which only one or a few before-activity and after-activity measures are taken. Changes identified between before and after cannot be attributed to the program.

Primary data
Information evaluators can observe or collect directly from an organization's stakeholders about their first-hand experience with the activity. This data is collected through surveys, meetings, focus group discussions, interviews or other methods that involve direct contact with the respondents. It can facilitate deeper understanding of observed changes and the factors that contribute to change.

Program
The Program Management Institute of the UK describes a program as a group of related projects managed in a coordinated way to obtain benefits and control not available from managing them individually. Programs may include elements of related work outside the scope of the discrete projects in a program. Within WIPO, a program could be a group of activities, projects or services with a defined date that are intended to deliver specific outputs and outcomes contributing to a higher objective or strategic-level goal.
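The limitation of the pre-post design noted above can be contrasted with a controlled comparison in a minimal numeric sketch (all figures are hypothetical, for illustration only): a before/after change bundles the program effect with any background trend, which is why it cannot be attributed to the program alone.

```python
# Hypothetical toy numbers: a baseline score, a background trend that
# would have occurred anyway, and a true program effect.
baseline = 50.0
background_trend = 4.0   # change that happens with or without the program
program_effect = 6.0

# Pre-post design: one "before" and one "after" measure on participants.
after_with_program = baseline + background_trend + program_effect
pre_post_change = after_with_program - baseline
print(pre_post_change)        # 10.0 -- mixes trend and program effect

# With a control group, the background trend is subtracted out.
after_without_program = baseline + background_trend
controlled_estimate = after_with_program - after_without_program
print(controlled_estimate)    # 6.0 -- isolates the program effect
```

The pre-post change (10.0) overstates the program's contribution (6.0) by exactly the background trend, which is the attribution gap the experimental and quasi-experimental models are designed to close.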


Program evaluations
In-depth evaluations of an organization's programs. Overall, program-level evaluations assess the efficiency and effectiveness of an activity or set of activities in achieving the intended results. They also assess the relevance and sustainability of results as contributions to medium-term and longer-term results.

Program Management
The process of managing several related projects, often with the intention of improving an organization's performance. Program management also emphasizes the coordination and prioritization of resources across projects, managing links between the projects and the overall costs and risks of the program, in order to deliver the desired outcomes and impacts in the most effective and efficient way. The program manager has to manage negotiations between stakeholders while balancing all stakeholders' interests, at a level typically far wider than a project manager meets. Program managers have the responsibility of managing project managers.

Project
According to Nokes (2007), "a project is a temporary endeavor, having a defined beginning and end, undertaken to meet unique goals and objectives", usually to bring about beneficial change or added value.

Project-level evaluations
The evaluation of an activity designed to achieve specific objectives within specified resources and implementation schedules; the project could be part of a broader program. The different types of project-level evaluations share the purpose of assessing implementation achievement, impact and sustainability, thus contributing to learning and ultimately to the improvement of project impact and performance.

Project Management
The process of leading, planning, organizing, staffing and controlling activities, people and resources in order to achieve particular program objectives and outputs.

Purpose
The positive improved situation that an activity is accountable for achieving. Indicators at the purpose level should be "outcome" measures. As with the goal, indicators should only state what will be measured, i.e. they should not include elements of the baseline or target.

Proxy measure or indicator
A variable used to stand in for one that is difficult to measure directly.

Q

Quasi-Experimental Model
Similar to the experimental model, but without randomization. Comparisons are made between targets who participate in the program and non-participants who are presumed similar to participants in critical ways. These techniques are called quasi-experimental because, although they use "experimental" and "control" groups, they lack randomization.24

24 Rossi, P., Freeman, H. and Lipsey, M. (1999). Evaluation: A Systematic Approach, 6th Edition. Sage Publications, London/New Delhi.


R

Recommendation
Proposal for action to be taken in a specific circumstance, including the parties responsible for that action.

Relevance
Relevance is concerned with assessing whether the activity is in line with local needs and priorities, as well as with an organization's mandate. Relevance is a question of usefulness or pertinence to the needs of those the program is geared to. The assessment of relevance leads to decisions on whether the activity ought to continue or be terminated. Relevance can be measured at various levels: organizational, stakeholder and program relevance. Relevance is also linked to the appropriateness of the activity.

Research
A systematic examination designed to develop or contribute to knowledge.

Results
A broad term used to refer to the effects of an activity. The terms "outputs", "outcomes" and "impact" describe more precisely the different types of results at different levels of the logframe hierarchy.

Results-Based Management (RBM)
A management strategy or approach by which an organization ensures that its processes, products and services contribute to the achievement of clearly stated results. RBM provides a coherent framework for strategic planning and management. It is also a broad management strategy aimed at achieving important changes in the way agencies operate, with improving performance and achieving results as the central orientation, by defining realistic expected results, monitoring progress towards the achievement of expected results, integrating lessons learned into management decisions and reporting on performance. In its paper on "Distinctiveness of Evaluation", the UNEG has established the difference between RBM and evaluation; this is presented in these Guidelines in Chapter 2, Section 5, Table 3.

Results Chain
The causal sequence for a development activity that stipulates the necessary sequence to achieve desired objectives, beginning with inputs, moving through activities and outputs, and culminating in outcomes, impacts, and feedback. In some agencies, reach is part of the results chain.

Results framework
The program logic that explains how the development objective is to be achieved, including causal relationships and underlying assumptions.

Review
An assessment of the performance of an activity, periodically or on an ad hoc basis. "Evaluation" is used for a more comprehensive and more in-depth assessment than "review". Reviews tend to emphasize operational aspects.25

Risk analysis
An analysis or assessment of factors (called assumptions in the logframe) that affect, or are likely to affect, the successful achievement of an activity's objectives. A detailed examination of the potential unwanted and negative consequences to human life, health, property, or the environment posed by development activities; a systematic process to provide information regarding such undesirable consequences; the process of quantification of the probabilities and expected impacts of identified risks.

25 Adapted from OECD/DAC (2002). Glossary of Key Terms in Evaluation and Results Based Management. Paris, France.

S

Secondary data
Sources such as periodic progress reports, annual reports, memos, sectoral studies and baseline data. They serve as background and foundation material and resources for an evaluation.

Self-evaluation
An evaluation by those who are entrusted with the design and delivery of an activity, i.e. an evaluation that is:

- carried out by program managers and implementers themselves;
- carried out by program managers and implementers with the support of external evaluators; or
- financed by the programs themselves and undertaken solely by external experts.

Stakeholders
People affected by the impact of an activity (activity, project, program, strategy, policy, etc.), as well as people who can influence that impact.

Strategic evaluations
Strategic evaluations analyze the organization's contribution to critical areas for greater effectiveness and impact. They provide knowledge on policy issues, programmatic approaches, cooperation modalities, etc. These evaluations may also assess the organization's contribution to the achievement of the strategic results for which the organization is accountable.

Summative Evaluations
Carried out after implementation to assess the effects and impact of an activity, intended or unintended; they can also include cost-effectiveness and cost-benefit analysis of the activity.

Sustainability
Concerned with measuring whether an activity's benefits are likely to continue after its funding has been withdrawn. This is an assessment of whether the activity is likely to be used, and maintained, in the future.

Survey
Systematic collection of information from a defined population, usually by means of interviews or questionnaires administered to a sample of units in the population (e.g., persons, beneficiaries, adults).

T

Target
Particular level of outcome that an organization or activity aims to achieve.

Target groups
The main beneficiaries of a program or project that are expected to gain from its results; sectors of the population that a program or project aims to reach in order to address their needs, based on gender considerations and their socio-economic characteristics.

Terms of reference
Definition of the work and the schedule to be carried out by the evaluation team. The terms of reference (ToR) recall the background, specify the scope of the evaluation, and state the main motives for the evaluation and the questions asked. They sum up available knowledge, outline an evaluation method, and describe the distribution of work, schedule and responsibilities among the people participating in the evaluation process. They specify the qualifications required from candidate teams or individuals, as well as the criteria to be used to select an evaluation team.

Thematic evaluations
Thematic evaluations are designed to assess the effectiveness of an organization's processes and approaches and to contribute to increasing the organization's knowledge on selected issues and subjects. They involve the "evaluation of a selection of activities, all of which address a specific development priority that cuts across countries, regions, and sectors".

Triangulation
The use of three or more theories, sources or types of information, or types of analysis, to verify and substantiate an assessment.

W

Work plan
Annual or multi-year summary of tasks, timeframes and responsibilities. It is used as a monitoring tool to ensure the production of outputs and progress towards outcomes.


ANNEX 3: EVALUATION SECTION CODE OF CONDUCT FOR EVALUATION CONSULTANTS

Adapted by the Evaluation Section from the UNEG Code of Conduct and Ethical Guidelines for Evaluation. To promote trust and confidence in evaluation within WIPO, evaluation consultants working for the Evaluation Section are required to commit themselves in writing to the Code of Conduct for Evaluation, specifically to the following obligations:

Independence
Evaluation in the United Nations system should be demonstrably free of bias. To this end, evaluation consultants managed by the Evaluation Section are recruited for their ability to exercise independent judgment. Evaluation consultants working for the Section shall ensure that they are not unduly influenced by the views or statements of any party, that independence of judgment is maintained, and that evaluation findings and recommendations are consistent, verified and independently presented.

Impartiality
Evaluation consultants working for the Evaluation Section shall operate in an impartial and unbiased manner at all stages of the evaluation and give a comprehensive and balanced presentation of the strengths and weaknesses of the activity or organizational unit being evaluated, taking due account of the views of a diverse cross-section of stakeholders. Evaluators shall guard against distortion in their reporting caused by their personal views and feelings.

Credibility
Evaluation consultants working for the Evaluation Section should prepare their reports based on reliable data and observations and ensure that reports show evidence of consistency and dependability in data, findings, judgments and lessons learned, appropriately reflecting the quality of the methodology, procedures and analysis used to collect and interpret data. They shall endeavor to ensure that each evaluation is accurate, relevant and timely, and provides a clear, concise and balanced presentation of the evidence, findings, issues, conclusions and recommendations.

Conflicts of Interest
Evaluation consultants working for the Evaluation Section shall avoid, as far as possible, conflicts of interest, so that the credibility of the evaluation process and product is not undermined. Conflicts of interest may arise at the level of the IAOD Evaluation Section, or at that of individual staff members or consultants. They should be disclosed and dealt with openly and honestly. Evaluation consultants working for the Evaluation Section are required to disclose in writing any past experience, of themselves, their immediate family, close friends or associates, which may give rise to a potential conflict of interest, and to deal honestly in resolving any conflict of interest which may arise. Evaluation consultants working for the Evaluation Section shall not have had any responsibility for the design, implementation or supervision of any of the activities that they are evaluating.

Honesty and Integrity
Evaluation consultants working for the Evaluation Section shall:

a. Accurately represent their level of skills and knowledge and work only within the limits of their professional training and abilities in evaluation, declining assignments for which they do not have the skills and experience to successfully complete.

b. Negotiate honestly the costs, tasks to be undertaken, limitations of methodology, scope of results likely to be obtained, and uses of data resulting from the evaluation.

c. Accurately present their procedures, data and findings, including ensuring that the evaluation findings are not biased to make it more likely that the evaluator receives further commissions from the Client.

Accountability
Evaluation consultants working for the Evaluation Section are accountable for the completion of the evaluation as agreed with the Client in the ToRs.

Obligations to participants
Evaluation consultants working for the Evaluation Section shall:

a. Respect people's right to provide information in confidence and make participants aware of the scope and limits of confidentiality. Evaluators must ensure that sensitive information cannot be traced to its source, so that the relevant individuals are protected from reprisals.

b. Respect differences in culture, local customs, religious beliefs and practices, personal interaction, gender roles, disability, age and ethnicity, and be mindful of the potential implications of these differences when planning, carrying out and reporting on evaluations, while using evaluation instruments appropriate to the cultural setting.

c. Keep disruption to a minimum while needed information is obtained, providing maximum notice to individuals or institutions they wish to engage in the evaluation, optimizing demands on their time, and respecting people's right to privacy.

Rights
In including individuals or groups in the evaluation, evaluation consultants working for the Evaluation Section shall ensure:

a. Right to Self-Determination. Prospective participants should be treated as autonomous agents and must be given the time and information to decide whether or not they wish to participate, and be able to make an independent decision without any pressure or fear of penalty for not participating.

b. Fair Representation. Evaluators shall select participants fairly in relation to the aims of the evaluation, not simply because of their availability, or because it is relatively easy to secure their participation. Care shall be taken to ensure that relatively powerless, 'hidden', or otherwise excluded groups are represented.

c. Compliance with codes for vulnerable groups. Where the evaluation involves the participation of members of vulnerable groups, evaluators must be aware of and comply with legal codes (whether international or national) governing, for example, interviewing children and young people.

d. Redress. Stakeholders should receive sufficient information to know a) how to seek redress for any perceived disadvantage suffered from the evaluation or any projects it covers, and b) how to register a complaint concerning the conduct of an Implementing or Executing Agency.


Confidentiality
Evaluation consultants working for the Evaluation Section shall respect people's right to provide information in confidence and make participants aware of the scope and limits of confidentiality. They must ensure that sensitive information cannot be traced to its source, so that the relevant individuals are protected from reprisals.

Avoidance of Harm
Evaluation consultants working for the Evaluation Section shall seek to minimize risks to, and burdens on, those participating in the evaluation, and to maximize the benefits and reduce any unnecessary harm that might occur from a negative or critical evaluation, without compromising the integrity of the evaluation.

Accuracy, Completeness and Reliability
Evaluation consultants working for the Evaluation Section have an obligation to ensure that evaluation reports and presentations are accurate, complete and reliable. In the evaluation process and in the production of evaluation products, evaluation consultants working for the Evaluation Section shall:

a. Carry out thorough inquiries, systematically employing appropriate methods and techniques to the highest technical standards, validating information using multiple measures and sources to guard against bias, and ensuring errors are corrected.

b. Describe the purposes and content of object of the evaluation (program, activity,

strategy) clearly and accurately.

c. Present openly the values, assumptions, theories, methods, results, and analyses that significantly affect the evaluation, from its initial conceptualization to the eventual use of findings.

d. Examine the context in enough detail so its likely influences can be identified (for

example geographic location, timing, political and social climate, economic conditions).

e. Describe the methodology, procedures and information sources of the evaluation in enough detail so they can be identified and assessed.

f. Make a complete and fair assessment of the object of the evaluation, recording of

strengths and weaknesses so that strengths can be built upon and problem areas addressed.

g. Provide an estimate of the reliability of information gathered and the replicability of

results (i.e. how likely is it that the evaluation repeated in the same way would yield the same result?).

h. Explicitly justify judgments, findings and conclusions, and show their underlying rationale, so that stakeholders can assess them.

i. Ensure that all recommendations are based solely on the evaluation findings, not on the evaluators’ own or other parties’ biases.

Transparency

Evaluation consultants working for the Evaluation Section shall:

a. Clearly communicate to stakeholders the purpose of the evaluation, the criteria applied and the intended use of findings.

b. Ensure that stakeholders have a say in shaping the evaluation.


c. Ensure that all documents are readily available to, and understood by, stakeholders.


ANNEX 4: EVALUATION SECTION CODE OF CONDUCT - EVALUATION CONSULTANTS AGREEMENT FORM

 

World Intellectual Property Organization

Internal Memorandum

Organisation Mondiale de la Propriété Intellectuelle

Mémorandum Interne

To be signed by all consultants as individuals (not by or on behalf of a consultancy company) before a contract can be issued.

Agreement to abide by the IAOD Code of Conduct for Evaluation in the UN System

Name of Consultant:

Name of Consultancy Organization (where relevant):

I confirm that I have received, understood, and will abide by the IAOD Code of Conduct for Evaluation in the UN System.

Signed at (place) on (date)

Signature:


ANNEX 5: EXAMPLE OF PRIMARY AND SECONDARY DATA COLLECTION METHODS

TYPES OF METHODS

DESCRIPTION

PRIMARY DATA COLLECTION METHODS

QUALITATIVE – SEMI-STRUCTURED

Direct observation This involves observation of sites, practices and physical infrastructure. An observation checklist should be used to ensure that observations are made systematically and that observations from different sites are comparable.

Key informant individual interviews

This is generally used to obtain specialist information. A key informant is anyone with special knowledge of a particular topic. It is best to use a checklist of questions on key topics. The questions should be open-ended in order to stimulate discussion. The checklist serves as a guide, not as a questionnaire.

Focus groups

This involves interviewing a small group of people who are either knowledgeable about, or interested in, a specific topic. As with key informant interviews, it is useful to prepare a checklist of questions to guide the discussion, so that it does not digress too far from the original topic.

Case studies

This entails an in-depth assessment of a very limited number of observations. The criteria for selecting the cases depend on variables set by the evaluation team according to the objectives and the needs of the organization. Case studies could be used within WIPO programs by selecting a small number of countries in each geographical area.

Memory recall

This is used to reconstruct the situation of the target groups before a project/program activity. It is particularly useful in the absence of baseline information, which would otherwise serve as a reference point for comparison with current information during an evaluation. It involves interviewing the target groups/beneficiaries either individually or in groups. As many programs in WIPO may not have baseline information, this method may be very useful for reconstructing data.

Photos/images These could be ground, aerial or satellite pictures showing infrastructure (e.g. a building constructed) or an event delivered (e.g. a conference).

Historical narration/ most significant change technique

This entails collecting stories of change from junior staff, who generally work directly with the beneficiaries, after which a designated panel, generally of senior staff, selects the stories that best demonstrate the most significant change. The approach involves greater participation from the bottom, with regular feedback from the top. Changes are captured from the bottom up, and unintended effects of the project/program are often identified using this technique. It is highly participative and empowering, and helps to identify complementary information which cannot be captured by standardized data collection methods.

QUANTITATIVE – STRUCTURED

Surveys

This involves interviewing individuals using a pre-coded or structured questionnaire, administered either through the Web or by enumerators. Surveys may be small or large, depending on the data needs of the evaluation. Generally, large samples allow for more refined analysis and can be more representative, but they can also be costly. Sampling methods (i.e. simple random, stratified random and cluster sampling) are used to ensure that the various categories of people to be interviewed are represented. Trained specialists are required for the design, planning, data processing and analysis.

SECONDARY DATA COLLECTION METHODS (EXISTING DATA)

Official records and surveys

Records and surveys undertaken by government agencies, multilateral institutions, research institutes or other aid agencies.

Project documents and record reviews

This kind of information should be carefully examined, as it often contains valuable information already collected at the start of the project and during the monitoring process.

Literature reviews

These can often provide a useful starting point for understanding the relationships between the different variables to be analyzed in a monitoring or evaluation process.
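The stratified random sampling mentioned under "Surveys" above can be sketched in a few lines of code. The snippet below is a minimal illustration only, not part of the Guidelines: the respondent list and the "region" strata are hypothetical, and it uses Python's standard library to draw the same sampling fraction, at simple random, from each stratum.

```python
import random

def stratified_sample(population, stratum_of, fraction, seed=42):
    """Group respondents by stratum, then draw the same sampling
    fraction (simple random, without replacement) within each group."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(stratum_of(person), []).append(person)
    sample = []
    for group in strata.values():
        size = max(1, round(len(group) * fraction))
        sample.extend(rng.sample(group, size))
    return sample

# Hypothetical respondent list: 20 respondents in each of three regions.
respondents = [{"id": i, "region": region}
               for i, region in enumerate(["Africa", "Asia", "Europe"] * 20)]

# A 25% fraction of each 20-person stratum gives 5 per region, 15 in total.
picked = stratified_sample(respondents, lambda p: p["region"], 0.25)
```

Sampling within each stratum, rather than from the whole population at once, guarantees that every category of respondent appears in the sample in proportion to its size, which is the representativeness property the table above refers to.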


ANNEX 6: EXAMPLE OF FINAL INDEPENDENT EVALUATION REPORT STRUCTURE

Title Page

The Evaluation Section title page should include the date of publication, the names of the authors responsible for the report and a clear reference to the commissioning Organization. A reference to where the document can be found once published should also be added.

Preface This should be prepared by the IAOD Evaluation Section

Contents page Indicating the sections and annexes for easy reference

Acronyms and abbreviations

These are usually explained in full on the first occasion they are used in the report in order to assist the reader

Acknowledgements Thanking those that have contributed to the exercise

Executive summary Should be as brief as possible and should include a summarized version of the key findings, conclusions and recommendations. The executive summary should not exceed two pages.

Main report body

The main report should be presented in the most user-friendly form, allowing easy reading and interpretation of the data. The evaluation team should, wherever possible, make use of presentation techniques such as diagrams and statistics, and avoid the use of jargon.

Introduction

Should include the following:
- The purpose, scope and focus of the evaluation
- Any limitations of the evaluation design identified in retrospect
- The policy context of the activity
- The activity’s size in terms of budget, in relation to WIPO’s overall budget
- A brief description of the activity, its logic and assumptions
- An explanation of the structure of the report
- An introduction to the team

Methodology (how the evaluation has been undertaken)

- Phases in data collection (desk study, field visits)
- Reasons for the selection of the activity, or of the countries or case studies chosen
- How information was collected (primary data collection, secondary data collection)
- Challenges encountered during the exercise, for instance key stakeholders not being available to participate, or documentation not being reliable or available

Findings

The evaluation team will need to report, based on the evidence found through primary and secondary data, on the following:
- What happened, and why?
- What results were achieved in relation to those intended?
- What was the positive or negative, intended or unintended impact?
- What have been the effects on end beneficiaries?
- What are the responses to the evaluation criteria selected as part of the evaluation exercise?

Conclusions (justified and arising logically from the findings)

- Summary of achievements against the initial activity logic model
- Summary of problems encountered and the reasons for them
- Overall effect on end beneficiaries and cross-cutting issues
- Why things happened as they did, questioning the assumptions, design, implementation, management, etc.

Lessons and knowledge sharing

Lessons that may have implications for future work

Recommendations A small number of recommendations, succinctly expressed, and addressed to those with the means and responsibility for implementing them

Appendixes and annexes

- Terms of reference
- Schedule
- List of people consulted
- List of documents consulted
- Data collection tools applied
- Reports of country visits and case studies which formed part of the independent evaluation and have been drawn upon to produce the main report
- Details of the members of the evaluation team

Source: DFID 2005 (adapted by Flores J. 2010)

