Last updated: 08/02/2006

Methodological bases
Evaluation process (How?)


Evaluation questions





To focus the evaluation work

Evaluation questions focus the evaluation work on a limited number of key points, thus allowing better reflection on judgement criteria (also called reasoned assessment), more targeted data collection, more in-depth analysis and a more useful report.

Focusing an evaluation on several key points is particularly useful when the evaluated intervention is multidimensional. In that case, if one wished to study all the dimensions of aid through all evaluation criteria, the work would be extremely costly or else very superficial. Thus, choices have to be made.


To control the choices

When the evaluated intervention is multidimensional, the final report and its summary necessarily focus on a limited number of points, through a series of choices. Evaluation questions serve to discuss and control these choices.

In the absence of questions, choices are likely to be implicit and at risk of being biased. They may be made in relation to the most easily available information, the weight of the various stakeholders or the evaluation team's preconceptions.

To favour take-up

If choices have to be made, it is preferable to make them in relation to the expectations of the targeted audience of the report. This favours take-up and increases the likelihood of the work being used.



Questions inferred directly from the logic

The intervention logic is the set of all the assumptions made to explain how the intervention will meet its objectives and produce the expected effects. Such assumptions link activities to outputs and results and then to several successive levels of expected impacts, through to the most global ones such as poverty reduction.

The intervention logic is reconstructed on the basis of the objectives set out in the official documents that established the intervention. Whenever the objectives are stated in political terms lacking precision, they are translated as concretely as possible in the form of expected effects. Effects which are expected implicitly can also be taken into account.

Depending on the degree of heterogeneity of the intervention, the number of expected effects can vary from a few to over one hundred. The intervention logic presents expected effects in one of the following forms:

  • Diagram of expected effects (multidimensional intervention with a dozen or more expected effects).
  • Global diagram and sub-diagrams (highly multidimensional intervention).

Once the evaluation team has identified the expected effects and the cause-and-effect assumptions linking them, it is possible to ask many types of questions, such as:

  • To what extent has [activity A] contributed to [generating effect X]?
  • To what extent have [activities A, B, C, etc.] contributed to [generating effect X]?
  • To what extent have [activities A, B, C, etc.] contributed to [generating effects X, Y, Z, etc.]?

These questions are directly derived from the intervention logic in a standard form and belong to the effectiveness family (achieving one or more expected effects).
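The mechanical nature of this derivation can be illustrated with a short sketch. The intervention logic below is a purely hypothetical example (the effects and activities are invented for illustration), modelled as a mapping from each expected effect to the activities assumed to contribute to it:

```python
# Illustrative sketch: deriving standard effectiveness questions from a
# hypothetical intervention logic, modelled as effect -> activities links.

# Hypothetical cause-and-effect assumptions (invented for illustration).
intervention_logic = {
    "improved crop yields": ["irrigation works", "farmer training"],
    "higher rural incomes": ["irrigation works", "farmer training", "microcredit"],
}

def effectiveness_questions(logic):
    """Generate one standard effectiveness question per expected effect."""
    questions = []
    for effect, activities in logic.items():
        acts = ", ".join(activities)
        questions.append(
            f"To what extent have [{acts}] contributed to [generating {effect}]?"
        )
    return questions

for q in effectiveness_questions(intervention_logic):
    print(q)
```

The real work, of course, lies in reconstructing the intervention logic itself; once the effects and their linked activities are identified, the standard questions follow directly.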


Other questions inferred from the intervention logic (indirectly)

Each standard question can be reformulated in many ways:
... for instance by specifying the scope:

  • To what extent has the intervention [and more specifically the one implemented by means of tool A or procedure B] contributed towards generating effect X?
  • To what extent has [coordination with the other development partners] contributed towards generating effect X?

... or by specifying the effect concerned:

  • To what extent has the intervention contributed towards generating effect X [for the poorest population groups]?

... or by changing the evaluation criterion:

  • To what extent has the intervention generated effect X [with a strong probability that the effects will survive after the end of the aid]? (sustainability)
  • To what extent has the intervention contributed towards generating effect X [at a limited cost compared to the resources mobilised]? (efficiency)
  • Where the intervention targets effect X, to what extent [does this target correspond to the needs of the population concerned]? (relevance)
  • Where the intervention targets effect X, to what extent [is this target compatible with, or contrary to, the objectives of other EU policies]? (coherence/complementarity)
  • Where the intervention targets effect X, to what extent [does it add value compared to a similar intervention implemented by Member States]? (Community value added)


Other questions

The following questions do not require a preliminary examination of the intervention logic because they concern effects that are not featured in it.

Questions on unexpected impacts, for example:

  • To what extent has [activity A, tool B, procedure C] generated unexpected effects? If it has, who has benefited or lost out?

Questions on cross-cutting issues such as environment, good governance or human rights, for example:

  • To what extent has the EC integrated [cross-cutting issue A] into the design and implementation of interventions?






Due to certain technical limits it is not possible to treat numerous questions or, more precisely, to provide sound answers to a large number of questions. This guide recommends selecting a maximum of ten key questions.

Questions should be selected according to the probable usefulness of the answers the evaluation will provide, for example:

  • The answer interests those agents who have to design a new intervention for the following cycle.
  • The answer will probably provide useful lessons applicable to other sectors or countries.
  • The answer will be useful in a Commission report for accountability purposes.
  • The answer is not known in advance.
  • The answer will not be obtained too late to be used.

Questions also have to be selected for their feasibility (or evaluability). Feasibility is low if the question involves too many problems, for example:

  • The main terms of the question have not been stabilised and are not understood in the same way by everyone.
  • Relations of cause and effect are vague and the experts are unfamiliar with them.
  • Available information is very poor.
  • Access to the field and new data collection will involve major problems.

If feasibility is low and the question corresponds to a political demand, then the reasons why the question is difficult to answer have to be set out in detail. This is part of the utility of the evaluation.
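The selection step described above can be sketched as a simple screening procedure. The candidate questions and the 0-5 scores below are hypothetical examples; in practice usefulness and feasibility are judged qualitatively by the evaluation manager and reference group, not computed:

```python
# Illustrative sketch of question selection: screen candidates on
# feasibility, rank by usefulness, and keep at most ten key questions.
# Candidates and scores are hypothetical examples.

MAX_QUESTIONS = 10  # the guide recommends a maximum of ten key questions

candidates = [
    # (question, usefulness 0-5, feasibility 0-5)
    ("To what extent has activity A generated effect X?", 5, 4),
    ("To what extent are effects likely to survive after the aid ends?", 4, 2),
    ("To what extent has the EC integrated cross-cutting issue A?", 3, 5),
]

def select_questions(candidates, min_feasibility=2, limit=MAX_QUESTIONS):
    """Drop questions with very low feasibility, then keep the most
    useful remaining questions up to the limit."""
    feasible = [c for c in candidates if c[2] >= min_feasibility]
    feasible.sort(key=lambda c: c[1], reverse=True)  # most useful first
    return [question for question, _, _ in feasible[:limit]]

print(select_questions(candidates))
```

A question screened out for low feasibility but backed by a political demand would, as noted above, instead be retained in the report with a detailed explanation of why it is difficult to answer.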



When drafting a question, first ensure that it concerns evaluation and not auditing or monitoring.

Then specify:

  • The scope, that is, what is judged, for example the intervention as a whole, an activity, instrument or procedure.
  • The effect or effects concerned.
  • The family of evaluation criteria to which the question belongs: relevance, effectiveness, efficiency, sustainability, impact, Community value added, or coherence.

At this stage ensure that the question is worded clearly and concisely.

For information, comments on all or some of the following points can be added:

  • Details on the scope of the question.
  • Definition of the main terms used.
  • Details on the type of criterion.
  • Sub-questions to consider in answering the question.
  • Nature of the intended use.
  • Studies and evaluations not to duplicate.



The evaluation manager defines the main areas to be covered by the evaluation and sets the maximum number of questions. These specifications are included in the draft terms of reference.

The members of the reference group are consulted before finalising the terms of reference.

The external evaluation team analyses the intervention logic. On the basis of the terms of reference and its analysis, it proposes a draft list of questions.

The questions are presented and discussed during the reference group's inception meeting. On the basis of comments made during and after the meeting, the evaluation team finalises the questions. These are then validated by members of the group and by the evaluation manager.

The validated questions are considered contractual and are attached to the terms of reference.