
Project / programme evaluations
Guidelines for the evaluation team

  PREPARATORY PHASE

The external evaluation team is engaged via the applicable tendering/contracting procedure. The candidate contractor prepares a proposal in response to the Terms of Reference (ToR) issued by the commissioning service.

Basic assumptions

On the basis of the ToR and his/her own expertise, the author of the proposal formulates basic assumptions on:

  • Areas requiring specific expertise.
  • Possibility of mobilising consultants with the right profile in the country or countries involved.
  • Number, nature and probable difficulty of the evaluation questions.
  • Existence, quality and accessibility of management and monitoring data.
  • Existence of previous evaluations which may be reused.

Tasks, expertise and budget

Assumptions are made about the evaluation design, including analysis strategy and tools to be applied. The tasks are provisionally divided among:

  • Consultants from partner country or countries and international consultants.
  • Senior, medium, junior consultants.
  • Experts in the sector(s) of the project/programme and professional evaluation consultants.

The core evaluation team members are identified and the absence of conflict of interest is verified.

Both the budget and the time schedule are specified within the framework of constraints set by the ToR.

Hiring local consultants

Local consultants may be entrusted with all or part of the evaluation tasks.

Benefits:
  • Possibility of involving a local perspective in data analysis as well as in data collection.
  • Mastery of local language(s).
  • Easy use of participatory approaches involving beneficiaries and targeted people.
  • Flexibility of work plan and reduction of travel costs.
  • Contribution to building an appropriate evaluation capacity in the partner country.
Risks:
  • Conflict of interest.
  • Difficulty of being independent from the Government in some countries.

Proposal

The contents of the technical and financial proposal are as follows:

  • Understanding of the context, purpose, and intended users of the evaluation.
  • Understanding of the themes or questions to be covered.
  • Indicative methodological design.
  • Core evaluation team members, their field of expertise and their role.
  • Time schedule.
  • Detailed price.
  • CVs in the standard format and declarations of absence of conflict of interest.
  • CV of an expert from outside the evaluation team who will be in charge of quality control.

  INCEPTION STAGE

The inception stage starts as soon as the evaluation team is engaged, and its duration is limited to a few weeks.

Collecting basic documents

One of the team members collects the set of basic official documents such as:

  • Programming documents (e.g. project fiche), and subsequent modifications if there are any.
  • Ex ante evaluation.
  • EC documents setting the policy framework in which the project/programme takes place (EC development and external relations policy, EU foreign policy, country strategy paper).
  • Government strategy (e.g. the Poverty Reduction Strategy Paper, PRSP).

Logic of the project/ programme

The evaluation team reviews the logical framework as set up at the beginning of the project/programme cycle. In the absence of such a document, the project/programme manager has to construct one retrospectively. Where necessary, the evaluation team identifies the points which need clarification and/or updating. Any clarification, updating or reconstruction is reported in a transparent way.

The analysis of the project/programme logic covers:

  • Context in which the project/programme has been launched, opportunities and constraints.
  • Needs to be met, problems to be solved and challenges to be addressed.
  • Justification of why the needs, problems or challenges could not be addressed more effectively within another framework.
  • Objectives.
  • Nature of inputs and activities.

Of particular importance are the various levels of objectives and their translation into various levels of intended effects:

  • Operational objectives expressed in terms of short-term results for direct beneficiaries and/or outputs (tangible products or services).
  • Specific objective (project purpose) expressed in terms of sustainable benefit for the target group.
  • Overall objectives expressed in terms of wider effects.
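
Purely as an illustration of how these levels relate to one another, the hierarchy can be sketched as a small data structure. The Python sketch below is hypothetical: the class name, field names and example objectives are invented and do not come from any EC template.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class ObjectiveLevel:
      """One level in the hierarchy of objectives (hypothetical structure)."""
      level: str             # "operational", "specific" or "overall"
      statement: str         # the objective as worded in the logical framework
      intended_effects: List[str] = field(default_factory=list)

  # Illustrative entries only; not taken from a real project/programme.
  logic = [
      ObjectiveLevel("operational", "Train 200 extension workers",
                     ["outputs: training sessions delivered",
                      "short-term results for direct beneficiaries"]),
      ObjectiveLevel("specific", "Farmers adopt improved practices",
                     ["sustainable benefit for the target group"]),
      ObjectiveLevel("overall", "Rural incomes increase",
                     ["wider economic and social effects"]),
  ]

  for obj in logic:
      print(f"{obj.level:>11}: {obj.statement} -> {'; '.join(obj.intended_effects)}")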

Once the analysis has been performed on the basis of official documents, the evaluation team starts interacting with key informants in the project/programme management and EC services. Comments on the project/programme logic are collected.

Delineating the scope

The scope of the evaluation includes all resources mobilised and activities implemented in the framework of the project/programme (central scope).

In addition, the evaluation team delineates a larger perimeter (extended scope) including the main related actions like:

  • Other EC policies, programmes or projects, plus EU policies.
  • Partner country's strategy (PRSP), or sector policy or programme.
  • Other donors' interventions.

An action is included in the perimeter in so far as it reaches the same groups as the evaluated project/programme.

Management documents

The evaluation team consults all relevant management and monitoring documents/databases so as to acquire a comprehensive knowledge of the project/programme, covering:

  • Full identification.
  • Resources planned, committed, used.
  • Progress of outputs.
  • Names and addresses of potential informants.
  • Ratings attributed through the "result-oriented monitoring" system.
  • Availability of progress reports and evaluation reports.

Evaluation questions

The evaluation team establishes the list of questions on the following bases:

  • Themes to be studied, as stated in the ToR.
  • Logical framework.
  • Reasoned coverage of the seven evaluation criteria.

Evaluation criteria

The following evaluation criteria correspond to the traditional practice of evaluating development aid, formalised by the OECD-DAC (the first five criteria), and to the specific EC requirements (the last two criteria).


Relevance:
  • The extent to which the objectives of a development intervention are consistent with beneficiaries' requirements, country needs, global priorities and partners' and donors' policies.
Effectiveness:
  • The extent to which the development intervention's objectives were achieved, or are expected to be achieved, taking into account their relative importance.
Efficiency:
  • A measure of how economically resources/inputs (funds, expertise, time, etc.) are converted to results.
Sustainability:
  • The continuation of benefits from a development intervention after major development assistance has been completed. The probability of continued long-term benefits. The resilience to risk of the net benefit flows over time.
Impact:
  • Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.
Coherence/complementarity:
  • This criterion may have several dimensions:
    1) Coherence within the Commission's development programme
    2) Coherence/complementarity with the partner country's policies and with other donors' interventions
    3) Coherence/complementarity with the other Community policies
Community value added:
  • The extent to which the project/programme adds benefits to what would have resulted from Member States' interventions in the same context.

Each question is commented on in line with the following points:

  • Origin of the question and potential utility of the answer.
  • Clarification of the terms used.
  • Indicative methodological design (updated), foreseeable difficulties and feasibility problems if any.

This site presents a list of typical questions associated with evaluation criteria.

Menu of evaluation questions 

Inception meeting

The evaluation team leader presents the work already accomplished in a reference group meeting. The presentation is supported by a series of slides and by a commented list of evaluation questions.

Specific guidance in the case of:

Inception report

The evaluation team prepares an inception report which recalls and formalises all the steps already taken, including an updated list of questions in line with the comments received.

Each question is further developed into:

  • Indicators to be used for answering the question, and corresponding sources of information
  • Strategy of analysis
  • Sub-questions.

Indicators

The logical framework preferably includes "Objectively Verifiable Indicators (OVIs)" and "Sources of Information" which are useful for structuring the evaluators' work. In so far as OVIs have been properly monitored, including baseline data, they become a major part of the factual basis of the evaluation.
Indicators may also be available through a performance assessment framework, if the project/programme is linked with such a framework.
Indicators may also be developed in the framework of the evaluation as part of a questionnaire survey, an analysis of a management database, or an analysis of statistical series.
Indicators may be quantitative or qualitative.

Analysis strategy

Indicators and other types of data need to be analysed in order to answer evaluation questions.

Four strategies of analysis can be considered:

  • Change analysis, which compares indicators over time and/or against targets
  • Meta-analysis, which extrapolates upon findings of other evaluations and studies, after having carefully checked their validity and transferability
  • Attribution analysis, which compares the observed changes with a "without intervention" scenario, also called counterfactual
  • Contribution analysis, which confirms or invalidates cause-and-effect assumptions on the basis of a chain of reasoning.

The first strategy is the lightest and may fit virtually all types of question. The last three strategies are better suited to answering cause-and-effect questions.
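
As a minimal illustration of the first strategy (change analysis), the Python sketch below compares a single monitored indicator against its baseline and target; the indicator, figures and field names are hypothetical.

  # Hypothetical monitored indicator with baseline, target and latest value.
  indicator = {
      "name": "Share of households with access to safe water (%)",
      "baseline": 42.0,   # value at the start of the project
      "target": 70.0,     # value expected at completion
      "latest": 58.0,     # most recent monitored value
  }

  change = indicator["latest"] - indicator["baseline"]
  # Share of the planned improvement achieved so far.
  progress_to_target = change / (indicator["target"] - indicator["baseline"])

  print(indicator["name"])
  print(f"  change since baseline: {change:+.1f} points")
  print(f"  progress towards target: {progress_to_target:.0%}")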

Indicators, sources of information and sub-questions remain provisional at this stage of the process. However, the inception report includes a detailed work plan for the next step. The report needs to be formally approved in order to move to the next step.



  FINALISATION OF FIRST PHASE (DESK)

This stage may vary in length, depending on the number of documents to be analysed.

Documentary analysis

The evaluation team gathers and analyses all available documents (secondary data) that are directly related to the evaluation questions:

  • Management documents, reviews, audits.
  • Studies, research works or evaluations applying to similar projects/programmes in similar contexts.
  • Statistics
  • Any relevant and reliable document available through the Internet.

This is by no means a review of all available documents. On the contrary, the evaluation team only looks for what helps it to answer the evaluation questions.

Interviewing managers

Members of the evaluation team undertake interviews with people who are or have been involved in the design, management and supervision of the project/programme. Interviews cover project/programme management, EC services, and possibly key partners in the country or countries concerned.

At this stage the evaluation team synthesises its provisional findings into a series of first partial answers to the evaluation questions. Limitations are clearly specified as well as issues still to be covered and assumptions still to be tested during the field phase.

Designing the method

The methodological design envisaged in the inception report is finalised. The evaluation team refines its approach to each question in a design table.

Design tables per question

The first lines of the table recall the text of the question, plus a comment on why the question was asked, and a clarification of the terms used, if necessary. The table then specifies the indicators and the analysis strategy.

The following lines develop the chain of reasoning through which the evaluation team plans to answer the question. The chain is described through a series of sub-questions which are to be answered by the evaluation team, for instance in order:

  • to inform on change in relation to the selected indicators
  • to assess causes and effects
  • to assist in the formulation of value judgements.

Sub-questions are associated with information sources and evaluation tools.
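
To make this structure concrete, a design table for one question can be held as a simple nested record, as in the hypothetical Python sketch below; the question, indicators, sources and tool names are invented, and the layout is only one possible way of recording the table.

  # Hypothetical design table for a single evaluation question.
  design_table = {
      "question": "To what extent has the programme improved access to basic services?",
      "origin_and_utility": "Requested in the ToR; will inform the next programming cycle.",
      "indicators": ["Share of the target population using the services",
                     "Average distance to the nearest service point"],
      "analysis_strategy": "Change analysis against baseline and targets",
      "sub_questions": [
          {"text": "How have the indicators changed since the baseline?",
           "sources": ["monitoring database", "national statistics"],
           "tools": ["database extract", "documentary analysis"]},
          {"text": "Can the observed change be plausibly attributed to the programme?",
           "sources": ["beneficiary interviews", "field visits"],
           "tools": ["interviews", "focus group"]},
          {"text": "Do beneficiaries judge the change to be an improvement?",
           "sources": ["beneficiary survey"],
           "tools": ["questionnaire survey"]},
      ],
  }

  for sq in design_table["sub_questions"]:
      print(f"- {sq['text']} (tools: {', '.join(sq['tools'])})")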

There are usually several versions of the design table:

  • Preliminary version appended to the inception report
  • Successive draft versions prepared during the desk phase, as the methodological design is progressively refined
  • Final version attached to the desk report.



 

Developing tools

The tools to be used in the field phase are developed. Tools range from simple, commonly used ones such as database extracts, documentary analyses, interviews or field visits, to more technical ones such as focus groups, modelling, or cost-benefit analysis. This site describes a series of tools that are frequently used.

The evaluation toolbox

When designing its work plan, the evaluation team may usefully consult the section of these guidelines devoted to the evaluation toolbox. This guide includes specific explanations, recommendations and examples on how to select and implement evaluation tools. It also proposes a quality assessment grid specific to each tool.

However, it must be stressed that this guide has been prepared for evaluations at higher levels (country, region, global) and might need some adaptation when used in the context of project/programme evaluation.

The evaluation team relies upon an appropriate mix of tools in order to:

  • Cross-check information sources.
  • Make tools reinforce one another.
  • Match the time and cost constraints.

Each tool is developed through a preparatory stage which covers all or part of the following items:

  • List of sub-questions to be addressed with the tool.
  • Technical specifications for implementing the tool.
  • Foreseeable risks which may compromise or weaken the implementation of the tool and how to deal with them.
  • Mode of reporting within the evaluation team and in the final report.
  • Responsibilities in implementing the tool.
  • Quality criteria and quality control process.
  • Time schedule.
  • Resources allocated.
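
One light way of keeping track of this preparatory work is a per-tool record, as in the Python sketch below; the fields mirror the items above, while the tool and the values are invented for illustration.

  # Hypothetical preparation record for one field-phase tool.
  tool_preparation = {
      "tool": "focus group with beneficiaries",
      "sub_questions": ["Do beneficiaries judge the services to be an improvement?"],
      "technical_specifications": "8-10 participants per group, two groups per district",
      "risks_and_mitigation": {"low attendance": "schedule sessions around market days"},
      "reporting": "structured minutes shared within the team within three days",
      "responsible": "local consultant",
      "quality_criteria": ["balanced composition", "neutral facilitation"],
      "time_schedule": "weeks 2-3 of the field phase",
      "resources": "two person-days per group",
  }

  missing = [item for item, value in tool_preparation.items() if not value]
  print("Preparation complete." if not missing else f"Still to be specified: {missing}")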

From evaluation questions to interviews

The evaluation questions and sub-questions should not be copied and pasted into interview guides or questionnaires.

Evaluation questions are to be answered by the evaluation team, not by stakeholders.

The evaluation team may build upon stakeholders' statements, but only through a careful cross-checking and analysis.

First phase report (desk)

The team writes a draft version of the first phase report (desk) which recalls and formalises all the steps already taken. The report includes at least three chapters:

  • A question-by-question chapter including the information already gathered and limitations if there are any, a first partial answer, the issues still to be covered and the assumptions still to be tested, and the final version of the design table.
  • An indicative approach to the overall assessment of the project/programme.
  • The list of tools to be applied in the field phase, together with all preparatory steps already taken.

If required, the evaluation team presents the work already accomplished in a reference group meeting. The presentation is supported by a series of slides.



  FIELD PHASE

The duration of this phase is typically a matter of weeks when the work is carried out by international experts. The time-frame can be extended if local consultants are in charge, with consequent benefits in terms of in-depth investigation and reduced pressure on stakeholders.

Preparation

The evaluation team leader prepares a work plan specifying all the tasks to be implemented, together with responsibilities, time schedule, mode of reporting, and quality requirements.

The work plan is kept flexible enough to accommodate last-minute difficulties in the field.

The evaluation team provides key stakeholders in the partner country with an indicative list of people to be interviewed and surveys to be undertaken, together with dates of visits, the itinerary, and the names of the responsible team members.

Interviewing and surveying outsiders

A key methodological issue is how far the project/programme objectives were achieved in terms of the benefits for the targeted group and wider impact. Achievement of objectives is therefore to be judged from the side of the beneficiaries' perceptions of benefit received, rather than from the managers' perspective of outputs delivered or results. Consequently, interviews and surveys should focus on outsiders (beneficiaries and other affected groups beyond beneficiaries) as well as insiders (managers, partners, field level operators). The work plan should clearly state the planned proportion of insiders and outsiders concerned by interviews and surveys.
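
As a worked illustration of stating that proportion (all figures invented), a simple tally of the planned interviews might look as follows.

  # Hypothetical interview plan split between insiders and outsiders.
  planned_interviews = {
      "insiders": {"programme managers": 4, "EC services": 3, "field-level operators": 5},
      "outsiders": {"beneficiaries": 18, "other affected groups": 6},
  }

  totals = {group: sum(counts.values()) for group, counts in planned_interviews.items()}
  overall = sum(totals.values())

  for group, number in totals.items():
      print(f"{group}: {number} interviews ({number / overall:.0%} of the plan)")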

Surveying outsiders may require that language and/or cultural gaps be bridged.

Specific guidelines in the case of:

Initial meeting

Where relevant, the evaluation team proposes an information meeting in the country/area within the first days of the field work. The following points are covered:

  • Presentation and discussion of the work plan.
  • How to access data and key informants.
  • How to deal with and solve potential problems.

Data collection and analysis

The evaluation team implements its field data collection plan. Any difficulties which arise are immediately discussed in the team. Wherever necessary, solutions are discussed with the evaluation manager.

Ethical behaviour in collecting data

The evaluation team has a responsibility not only towards the commissioning body, but also towards groups and individuals involved with or affected by the evaluation, which means that the following issues should be taken into careful consideration:

  • Interviewers should ensure that they are familiar with and respectful of interviewees' beliefs, manners and customs.
  • Interviewers must respect people's right to provide information in confidence, and ensure that sensitive data cannot be traced to its source.
  • Local members of the evaluation team should be left free to either endorse the report or not. In the latter case, their restricted role is clearly described in the report.
  • The evaluation team should minimise demands on interviewees' time.
  • While evaluation team members are expected to respect other cultures, they must also be aware of the EU's values, especially as regards minorities and particular groups, such as women. In such matters, the United Nations Universal Declaration of Human Rights (1948) is the operative guide.
  • Evaluation team members have a responsibility to bring to light issues and findings which relate indirectly to the Terms of Reference.
  • Evaluations sometimes uncover evidence of wrongdoing. What should be reported, how and to whom are issues that should be carefully discussed with the evaluation manager.

It must be clear to all evaluation team members that the evaluation is neither an opinion poll nor an opportunity to express one's preconceptions. Field work is meant to collect evidence that is as strong as possible, i.e.:

  • Direct observation of facts including track records, photographs, etc. (strongest).
  • Statements by informants who have been personally involved.
  • Proxies, i.e. observation of facts from which a fact at issue can be inferred.
  • Indirect reporting on facts by informants who have not been personally involved (weakest).

Preventing and correcting biases

The evaluation team members are constantly aware of potential biases like:

  • Confirmation bias, i.e. tendency to seek out evidence that is consistent with the expected effects, instead of seeking out evidence that could disprove them.
  • Empathy bias, i.e. tendency to create a friendly (empathetic) atmosphere, at least for the sake of achieving a high rate of answers and a fast completion of interviews, with the consequence that interviewees make over-optimistic statements about the project/programme.
  • Self-censorship, i.e. reluctance of interviewees to freely express themselves and to depart from the views of their institution or hierarchy, simply because they feel at risk.
  • Strategy of interviewees, i.e. purposely distorted statements with a view to attracting evaluation conclusions closer to their opinions.
  • Question-induced answers, i.e. answers are distorted by the way questions are asked or the interviewer's reaction to answers.

The evaluation team improves the reliability of data by:

  • Asking open questions, which prevents confirmation bias
  • Mixing positive and negative questions, which prevents empathy bias and question bias
  • Constantly focusing on facts, which allows for subsequent cross-checking of data and limits strategic distortion by interviewees
  • Promising anonymity (and keeping that promise), which prevents interviewees' self-censorship.

Quality control

The evaluation team leader checks the quality of data and analyses against the quality criteria set for each tool, and against general principles such as:

  • Clear presentation of the method actually implemented
  • Compliance with work plan and/or justification for adjustments
  • Compliance with anonymity rules
  • Self assessment of the reliability of data and validity of analyses.
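
These checks can be applied systematically through a short per-tool checklist, as in the hypothetical Python sketch below.

  # Hypothetical quality-control checklist applied to one tool's output.
  checks = {
      "method actually implemented is clearly presented": True,
      "work plan followed, or adjustments justified": True,
      "anonymity rules complied with": True,
      "reliability of data and validity of analyses self-assessed": False,
  }

  failed = [item for item, passed in checks.items() if not passed]
  print("All quality checks passed." if not failed
        else "Follow-up needed on: " + "; ".join(failed))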

Debriefing

The evaluation team meets at a debriefing meeting at the end of the field phase. It reviews its data and analyses, cross-checks sources of information, assesses the strength of the factual base, and identifies the most significant findings.

Another debriefing meeting is held with the reference group in order to discuss reliability and coverage of data collection, plus significant findings.

The evaluation team presents a series of slides related to the coverage and reliability of collected data, and to its first analyses and findings. The meeting is an opportunity to strengthen the evidence base of the evaluation. No report is submitted in advance and no minutes are provided afterwards.

Specific guidance in the case of:


  SYNTHESIS PHASE

Findings

The evaluation team formalises its findings, which all derive from facts, data, interpretations and analyses. Findings may include cause-and-effect statements (e.g. "partnerships, as they were managed, generated lasting effects"). Unlike conclusions, findings do not involve value judgments.

The evaluation team proceeds with a systematic review of its findings with a view to confirming them. At this stage, its attitude is one of systematic self-criticism, e.g.:

  • If statistical analyses are used, do they pass validity tests?
  • If findings arise from a case study, do other case studies contradict them?
  • If findings arise from a survey, could they be affected by a bias in the survey?
  • If findings arise from an information source, does cross-checking show contradictions with other sources?
  • Could findings be explained by external factors independent from the project / programme under evaluation?
  • Do findings contradict lessons learnt elsewhere and if so, is there a plausible explanation for that?

Conclusions

The evaluation team answers the evaluation questions through a series of conclusions which derive from facts and findings. In addition, some conclusions may relate to other issues which have emerged during the evaluation process.

Conclusions involve value judgements, also called reasoned assessments (e.g. "partnerships were managed in a way that improved sustainability in comparison to the previous approach"). Conclusions are justified in a transparent manner by making the following points explicit:

  • Which aspect of the project/programme is assessed?
  • Which evaluation criterion is used?
  • How is the evaluation criterion actually applied in this precise instance?

The evaluation team strives to formulate conclusions in a limited number so as to secure their quality. It either clarifies or deletes any value judgement which is not fully grounded in facts and entirely transparent.

The evaluation team takes care to use the evaluation criteria in a balanced way, and pays special attention to efficiency and sustainability, two evaluation criteria which tend to be overlooked in many instances.

The evaluation team synthesises its conclusions into an overall assessment of the project/programme, and writes a summary of all conclusions, which are prioritised and referred to findings and evidence. Methodological limitations are mentioned, as well as dissenting views if there are any.

The evaluation team leader verifies that the conclusions are not systematically biased towards positive or negative views. He/she also checks that criticisms can lead to constructive recommendations.

Recommendations and lessons

The evaluation team maintains a clear distinction between conclusions which do not entail action (e.g. "partnerships were managed in a way that improved sustainability in comparison to the previous approach") and other statements which derive from conclusions and which are action-oriented, i.e.

  • Lessons learnt (e.g. "the successful way of managing partnerships could be usefully considered in other countries with similar contextual conditions")
  • Recommendations (e.g. "the successful way of managing partnerships should be reinforced in the next programming cycle").

Recommendations may be presented in the form of alternative options with pros and cons.

As far as possible, recommendations are:

  • Tested in terms of utility, feasibility and conditions of success
  • Detailed in terms of time frame and audience
  • Clustered and prioritised.
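
As an illustration of clustering and prioritising (the recommendations, clusters and audiences below are invented), recommendations can be held in a simple structured list.

  # Hypothetical record for presenting clustered, prioritised recommendations.
  recommendations = [
      {"cluster": "partnership management", "priority": 1,
       "text": "Reinforce the current way of managing partnerships in the next cycle.",
       "time_frame": "next programming cycle", "audience": "EC Delegation",
       "conditions_of_success": ["continued co-funding", "stable local counterparts"]},
      {"cluster": "monitoring", "priority": 2,
       "text": "Complete baseline data for the remaining indicators.",
       "time_frame": "within six months", "audience": "project management",
       "conditions_of_success": ["access to national statistics"]},
  ]

  for rec in sorted(recommendations, key=lambda r: r["priority"]):
      print(f"[{rec['priority']}] ({rec['cluster']}) {rec['text']} "
            f"-> {rec['audience']}, {rec['time_frame']}")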

The evaluation team acknowledges clearly where changes in the desired direction are already taking place, in order to avoid misleading readers and causing unnecessary offence.

Draft report

The evaluation team writes the first version of the report which has the same size, format and contents as the final version. Depending on the intended audience, the report is written:

  • With or without technical terminology.
  • With either a summarised or a detailed presentation of the project/programme and its context.

In general, the report includes a 2 to 5-page executive summary, a 40 to 60-page main text, plus annexes.

Structure of the report

Executive Summary

  • The executive summary is a dense, self-standing document which presents the project/programme under evaluation, the purpose of the evaluation, the main information sources and methodological options, and the key conclusions, lessons learned and recommendations.

Tables of contents, figures, acronyms

Introduction

  • Description of the project/programme and the evaluation. The reader is provided with sufficient methodological explanations to gauge the credibility of the conclusions and to acknowledge limitations or weaknesses if there are any.

Answered questions

  • A chapter presents the evaluation questions, together with evidence, reasoning and value judgements pertaining to them. Each question is given a clear and short answer.

Overall assessment

  • A chapter synthesises all answers to evaluation questions in an overall assessment of the project/programme. The evaluation team should not just follow the evaluation questions, the logical framework, or the evaluation criteria. On the contrary, it should articulate all the findings, conclusions and lessons in a way that reflects their importance and facilitates the reading.

Conclusions, lessons and recommendations

  • Conclusions and lessons are listed, clustered and prioritised in a few pages, as are recommendations.

Annexes

The evaluation team leader checks that the report meets the quality criteria. The report is submitted to the person in charge of the quality control before it is handed over to the evaluation manager.

Quality certificate

The evaluation team leader attaches a quality certificate to the draft final report, indicating the extent to which:

  • Evaluation questions are answered.
  • Reliability and validity limitations are specified.
  • Conclusions apply evaluation criteria in an explicit and transparent manner.
  • The present guidelines have been used.
  • Tools and analyses have been implemented according to standards.
  • The language, layout, illustrations, etc. are according to standards.
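
The certificate can be recorded as a simple set of ratings, one per criterion, as in the hypothetical Python sketch below (the rating scale and entries are invented).

  # Hypothetical quality certificate: extent to which each criterion is met.
  RATINGS = ("fully", "largely", "partly", "not")

  certificate = {
      "evaluation questions answered": "fully",
      "reliability and validity limitations specified": "largely",
      "conclusions apply evaluation criteria explicitly and transparently": "fully",
      "present guidelines used": "fully",
      "tools and analyses implemented according to standards": "largely",
      "language, layout and illustrations according to standards": "fully",
  }

  assert all(rating in RATINGS for rating in certificate.values())
  for criterion, rating in certificate.items():
      print(f"{criterion}: {rating}")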

The evaluation team leader and the evaluation manager discuss the quality of the report. Improvements are made if requested.

Specific guidance in the case of a multi-country programme

Discussion of draft report

The evaluation team presents the report in a reference group meeting. The presentation is supported by a series of slides which cover:

  • Answered questions and methodological limitations
  • Overall assessment, conclusions and lessons learnt
  • Recommendations.

Comments are collected in order to:

  • Further check the factual basis of findings and conclusions
  • Check the transparency and impartiality
  • Check the utility and feasibility of the recommendations

Discussion seminar

At this stage, the evaluation manager may decide to convene a discussion seminar with a wide range of stakeholders. The purpose would be to discuss the content of the conclusions and the utility of the recommendations in the presence of the evaluation team. Attendance may include the delegation staff, national authorities, civil society, project management, other donors and/or experts.

Participants are provided with an updated draft report.

Finalising the report

The evaluation team finalises the report by taking into account all comments received. Annexes are also finalised, in one or both of the following forms:

  • Printed attachments to the report.
  • Annexes on CD-ROM.

Annexes

Terms of reference.
List of activities specifically assessed.
Logical framework and comments.
Detailed evaluation method, including:

  • Options taken, difficulties encountered and limitations.
  • Detail of tools and analyses.
  • List of interviews.
  • List of documents used.

Any other text or table which contains facts used in the evaluation.

The report is printed out according to the instructions stated in the terms of reference.

The evaluation team leader receives a final quality assessment from the manager. If necessary, he/she writes a note setting forth the reasons why certain requests for quality improvement have not been accepted. This response will remain attached to both the quality assessment and the report.

