
Methodological bases
Evaluation process (how?)

Methodological design

 



What is the purpose?

To set up the method that will allow the external evaluation team to answer the questions and to come to an overall assessment. In addition to the selected questions, judgement criteria, indicators and targets, the evaluation method includes the following elements (a schematic sketch follows the list):

  • a strategy for collecting and analysing data
  • selected investigation areas
  • a series of specifically designed tools
  • a work plan.
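As a purely illustrative sketch, not prescribed by this guidance, the components listed above can be pictured as a simple structured record; every name and value below is hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class EvaluationMethod:
        """Hypothetical container for the components of an evaluation method."""
        analysis_strategy: str                                          # how data will be collected and analysed
        investigation_areas: list[str] = field(default_factory=list)   # countries, projects, territories
        tools: list[str] = field(default_factory=list)                  # interviews, surveys, case studies, ...
        work_plan: dict[str, str] = field(default_factory=dict)         # tool -> planned schedule

    # Fictitious example
    method = EvaluationMethod(
        analysis_strategy="change analysis complemented by contribution analysis",
        investigation_areas=["Country A", "Country B"],
        tools=["documentary analysis", "interview guide", "beneficiary survey"],
        work_plan={"beneficiary survey": "field phase, weeks 3-5"},
    )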


When does it take place?

The evaluation team starts designing a provisional method as early as the drafting of its proposal, in order to draw up cost estimates. A key assumption at this stage is the extent to which the evaluation will rely on secondary data or will involve specific data collection work in the field.
The main frame of the method is then established during the inception stage, in line with the evaluation questions, judgement criteria, indicators, data collection tools and analysis strategy.
The method is refined and finalised before the field phase and fully described in the first phase report (desk).
The final report includes a short, sharp presentation of the evaluation method, together with its limitations, if any. The method is fully described in an annex, covering the initial design, the problems encountered, the solutions found, the method actually implemented, and its limitations.
The evaluation method is designed through an iterative process at three levels:

  • A question-by-question approach, allowing the evaluation team to prepare the design tables so as to answer each question adequately.
  • An overall approach which cuts across the questions, and which allows the evaluation team to optimise the evaluation method as a whole, whilst matching time and resource constraints.
  • A series of specifically developed tools.


Design tables per question

A design table is developed for each question, with the following rows (a schematic sketch follows the list):

  • Text of the question
  • Reasons why the question has been asked
  • Scope of the question
  • Judgement criterion (or criteria)
  • Indicators
  • Targets
  • Sub-questions, describing the chain of reasoning through which the evaluation team plans to answer the question, for instance:
    • How have indicators changed over the evaluated period?
    • To what extent can the change be qualified as a success?
    • How far have EC activities contributed to explaining the observed change? Through which mechanisms? Is there evidence that such mechanisms have been working as assumed?
    • How far does evidence support alternative explanations? Is there evidence that the observed change is due to other development partners, to other EC policies, to the Government, or to other external factors?
  • Analysis strategy, describing which kinds of data are to be collected and processed in order to answer the question, for instance:
    • Change analysis, which measures and/or qualifies indicators, and compares them over time and against targets
    • Meta-analysis, which extrapolates from the findings of other evaluations and studies, after having carefully checked their validity and transferability
    • Attribution analysis, which compares the observed changes and a "without intervention" scenario, also called counterfactual
    • Contribution analysis, which confirms or disconfirms cause-and-effect assumptions on the basis of a chain of reasoning.
  • Investigation areas, where data are to be collected, for instance:
    • Countries to be visited in the case of a thematic evaluation.
    • Projects to be investigated in-depth in order to answer a sector-based question.
    • Territories (districts, villages) where samples of beneficiaries will be surveyed.
  • Main information sources to be used in order to answer the question, and associated tools.
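As an illustration only (the guidance does not prescribe any particular format), a design table can be captured as a structured record whose fields mirror the rows above; every name and value below is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DesignTable:
        """Hypothetical per-question design table mirroring the rows listed above."""
        question: str
        rationale: str                   # reasons why the question has been asked
        scope: str
        judgement_criteria: list[str]
        indicators: list[str]
        targets: list[str]
        sub_questions: list[str]
        analysis_strategy: list[str]     # e.g. change, meta-, attribution or contribution analysis
        investigation_areas: list[str]   # countries, projects, territories
        information_sources: list[str]   # main sources and associated tools

    # Fictitious example for a single question
    table = DesignTable(
        question="To what extent has EC support improved access to primary education?",
        rationale="Requested by the reference group",
        scope="Country programme 2000-2005",
        judgement_criteria=["Enrolment of girls in targeted districts has increased"],
        indicators=["Net enrolment rate, disaggregated by gender"],
        targets=["+10 percentage points over the evaluated period"],
        sub_questions=["How has the indicator changed?",
                       "How far have EC activities contributed to the change?"],
        analysis_strategy=["change analysis", "contribution analysis"],
        investigation_areas=["Districts X and Y"],
        information_sources=["Ministry statistics", "interviews with ministry officials"],
    )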


Overall design

The evaluation team draws up the list of all evaluation tools suggested in the design tables, and confirms that each envisaged tool is adequate, in the sense that it fulfils the required function, is in line with the selected analysis strategy, and fits within the identified constraints.
Each tool is also considered from the viewpoint of its capacity to help answer several questions in a cross-cutting way, and to be combined with other tools.
In the process of progressively optimising the methodological design, the evaluation team examines all the design tables in a cross-cutting manner, with a view to preparing the final synthesis, i.e. an overall assessment that draws upon the answers to all evaluation questions. Additional tools may be developed specifically for that purpose.
In the process of designing its method, the evaluation team tries to allocate resources adequately between questions and sub-questions. Some questions deserve to be addressed with costly tools, whilst others are better answered on the basis of documentary analysis and a few interviews with EC and Government officials alone.
The evaluation team then explains and justifies its main technical choices, with alternative options, pros and cons, and associated risks.
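Purely as an illustrative sketch of this cross-cutting check, and not an official procedure, one can tabulate which questions each envisaged tool helps to answer, then flag questions left uncovered and costly tools that serve only a single question; all names below are hypothetical.

    # Hypothetical mapping from envisaged tools to the questions they help answer
    tool_coverage = {
        "documentary analysis": ["Q1", "Q2", "Q3"],
        "interviews with EC and Government officials": ["Q1", "Q4"],
        "beneficiary survey": ["Q3"],
        "country case studies": ["Q2", "Q3", "Q4"],
    }
    all_questions = ["Q1", "Q2", "Q3", "Q4", "Q5"]

    # Questions not served by any tool point to a gap in the overall design
    covered = {q for qs in tool_coverage.values() for q in qs}
    uncovered = [q for q in all_questions if q not in covered]

    # Tools answering a single question are candidates for re-examination on cost grounds
    single_use = [t for t, qs in tool_coverage.items() if len(qs) == 1]

    print("Uncovered questions:", uncovered)   # ['Q5']
    print("Single-use tools:", single_use)     # ['beneficiary survey']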


Developing tools

This site presents a dozen frequently used tools, with examples of use in the context of development aid evaluation.
Each tool involves a preparatory stage, such as writing an interview guide or drafting and testing a questionnaire. Developing a tool proceeds through all or some of the following steps:

  • List of questions and sub-questions that the tool should help to answer
  • Technical specifications for implementing the tool
  • Foreseeable risks which may compromise or weaken the implementation of the tool and how to deal with them
  • Mode of reporting within the evaluation team
  • Mode of reporting in the final report
  • Responsibilities in implementing the tool
  • Quality criteria and quality control process
  • Time schedule
  • Resources allocated.

The data collection and analysis work plan encompasses all the evaluation tools. It is progressively refined in order to facilitate co-ordination within the evaluation team, as well as with the evaluation manager and key stakeholders. It nevertheless remains flexible enough to accommodate last-minute difficulties in the field.
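As a schematic sketch only, since the guidance does not prescribe any particular format, the development steps listed above can be recorded per tool and then rolled up into the data collection and analysis work plan; all fields and values below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ToolSpecification:
        """Hypothetical record of the development steps for one evaluation tool."""
        name: str
        questions_served: list[str]      # questions and sub-questions the tool helps to answer
        technical_specifications: str
        foreseeable_risks: list[str]
        reporting_internal: str          # mode of reporting within the evaluation team
        reporting_final: str             # mode of reporting in the final report
        responsible: str
        quality_criteria: list[str]
        schedule: str
        resources_days: int

    def work_plan_summary(tools: list[ToolSpecification]) -> dict[str, str]:
        """Roll individual tool specifications up into a simple work plan overview."""
        return {t.name: f"{t.schedule} ({t.resources_days} person-days, {t.responsible})"
                for t in tools}

    # Fictitious example
    survey = ToolSpecification(
        name="beneficiary survey",
        questions_served=["Q3"],
        technical_specifications="questionnaire drafted and tested before the field phase",
        foreseeable_risks=["low response rate in remote districts"],
        reporting_internal="weekly notes to the team leader",
        reporting_final="summary table in annex",
        responsible="national expert",
        quality_criteria=["sample covers at least three districts"],
        schedule="field phase, weeks 2-4",
        resources_days=25,
    )
    print(work_plan_summary([survey]))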
