Last updated: 12/12/2005

Methodological bases
Evaluation process (How?)
Evaluation questions


Focus the evaluation on key questions



What is the purpose?

Technical limitations make it impossible to answer an unlimited number of questions, or, more precisely, to provide quality answers to an excessive number of questions. This guide recommends a maximum of ten questions.

How to choose the questions

Identify questions

A first version of the evaluation questions is proposed on the basis of:

  • The analysis of the intervention logic.
  • The analysis of the intervention rationale.
  • Issues that justified the decision to launch the evaluation.
  • Issues to be studied, as stated in the terms of reference.
  • Questions raised in the ex ante evaluation, where relevant.

In a second version, the list and wording of the evaluation questions also take into account:

  • Issues raised by key informants at the start of the evaluation.
  • Expectations of the members of the reference group.

Assess the potential usefulness of answers

Assuming that a question will be properly answered, it is necessary to assess the potential usefulness of the answer, by considering the following points:

  • Who is to use the answer?
  • What is the expected use: knowledge, negotiation, decision-making, communication?
  • Will the answer arrive in time to be used?
  • Is the answer already known?
  • Is another study (an audit or a review) underway that is likely to provide the answer?

If the choice of questions has to be discussed in a meeting, it may be useful to classify them in three categories of potential utility: higher, medium, lower.

Check that nothing important has been overlooked

Experience has shown that leaving out the following types of question is particularly harmful to the quality of an evaluation:

  • Questions on efficiency and sustainability.
  • Questions concerning negative effects, especially if those effects concern underprivileged groups.
  • Questions concerning very long-term effects.

Assess the feasibility of questions

The feasibility (evaluability) of a question should be examined, but only after its usefulness has been assessed. For this purpose, the following should be consulted:

  • The service managing the intervention.
  • One or more experts in the field.
  • One or more evaluation professionals.

If the choice of questions has to be discussed in a meeting, it may be useful to classify them in three categories:

  • Strong probability of obtaining a quality answer.
  • Average probability.
  • Weak probability.
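The two classifications above (potential utility and probability of a quality answer) can be combined to shortlist questions. The following is an illustrative sketch only, not part of the EC methodology; the category names, scores, and example questions are hypothetical:

```python
# Illustrative sketch: ranking candidate evaluation questions by the two
# classifications described in the text. Usefulness is assessed before
# feasibility, so it takes precedence in the sort key.
# All question data below is hypothetical.

USEFULNESS = {"higher": 3, "medium": 2, "lower": 1}
FEASIBILITY = {"strong": 3, "average": 2, "weak": 1}

questions = [
    {"text": "To what extent did the intervention improve access to basic services?",
     "usefulness": "higher", "feasibility": "average"},
    {"text": "Were management costs proportionate to the outputs delivered?",
     "usefulness": "medium", "feasibility": "strong"},
    {"text": "What were the very long-term effects on underprivileged groups?",
     "usefulness": "higher", "feasibility": "weak"},
]

def rank(questions, maximum=10):
    """Sort by usefulness first, then by the probability of obtaining
    a quality answer, and keep at most `maximum` questions
    (the guide recommends no more than ten)."""
    ordered = sorted(
        questions,
        key=lambda q: (USEFULNESS[q["usefulness"]], FEASIBILITY[q["feasibility"]]),
        reverse=True,
    )
    return ordered[:maximum]

for q in rank(questions):
    print(f'{q["usefulness"]:>6} / {q["feasibility"]:<7} {q["text"]}')
```

In practice the classification is a discussion aid for the inception meeting, not a mechanical scoring exercise; a potentially very useful question with weak feasibility may still be kept after reformulation, as noted below.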

If a question is potentially very useful but difficult to answer, check whether a similar question would be easier to answer and equally useful. For example, if a question concerns a relatively distant or global impact, its feasibility could probably be improved by focusing on the immediately preceding impact in the intervention logic.

Discuss the choice of questions

The choice of questions is discussed at the inception meeting.

The selection is more likely to be successful if potential users have been consulted and have agreed on the selected questions, and if no legitimate point of view has been censored.


Reasons for selecting a question

Because someone raised it

Someone who proposes a question is more likely to cooperate in answering it and to accept the conclusions.

It is therefore preferable to select questions clearly requested by the actors concerned, for example:

  • Authorities or services of the Commission, especially those who participate in the reference group.
  • Key informants consulted by the evaluation manager or the evaluation team.

An actor may ask a question primarily with the intention of influencing or even obstructing the action of another actor. The potential usefulness of this type of question has to be examined carefully.

Because it is useful

A question is particularly useful if:

  • The intervention or one of its aspects is innovative and several actors expect a validation.
  • A decision is going to be taken and the conclusions may arrive in time to help in taking that decision.
  • A public debate is planned and the conclusions may be ready in time to feed into the debate.

Because the answer is not known

A question is useless if:

  • Another evaluation, an audit or a study has already answered it.
  • It has already been asked in many other evaluations and has always had the same answer.

It may nevertheless be useful to ask the question again if the answer requires verification.

Assessing the overall intervention through a limited number of questions

Focusing on questions does not prevent one from drawing conclusions on the intervention as a whole. On the contrary, it makes it possible to formulate an overall assessment which builds upon professional data collection and analysis, and avoids the risk of being superficial and impressionistic.

This can be explained with the analogy of oil exploration. One cannot discover oil by just looking at the surface of the earth. Oil seekers need to explore the underground. The same applies to an intervention being evaluated. The surface of things is visible through reporting, monitoring information, and changes in indicators, but what needs to be discovered remains invisible, e.g. the EC's contribution to changes, sustainability, etc.

Evaluation questions can be compared with the oil seekers' exploratory wells. Each evaluation question provides a narrow but in-depth view into what is usually invisible. By synthesising what has been learnt by answering the questions, it becomes possible to provide an overall assessment of the intervention. The process can be compared to that of mapping oil fields after an exploratory drilling campaign.


Examples of questions selected for the evaluation: