What does this mean?
The evaluation team may use any kind of reliable data to assess whether an intervention has been successful or not in relation to a judgement criterion and a target.
Data may be collected in a structured way by using indicators. Indicators specify precisely which data are to be collected. An indicator may be quantitative or qualitative; in the latter case, a scoring technique may be used.
Unstructured data are also collected during the evaluation, either incidentally, or because tools such as case studies are used. This kind of evidence may be sound enough to be a basis for conclusions, but it is not an indicator.
What is the purpose?
- To collect and process data in a form that can be used directly when answering questions.
- To avoid collecting an excessive amount of irrelevant data and to focus the process on the questions asked.
The main evaluation indicators are those related to the judgement criteria; they specify the data needed to make a judgement against those criteria.
An indicator can be constructed specifically for an evaluation (ad hoc indicator) and measured during a survey, for example. It may also be drawn from monitoring databases, a performance assessment framework, or statistical sources.
A qualitative indicator (or descriptor) takes the form of a statement to be verified during data collection (e.g. parents consider that their children have the possibility of attending a primary school class with a qualified and experienced teacher).
A quantitative indicator is based on a counting process (e.g. number of qualified and experienced teachers). The basic indicator directly results from the counting process. It may be used for computing more elaborate indicators (ratios, rates) such as cost per pupil or number of qualified and experienced teachers per 1,000 children of primary-school age.
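The relation between basic counting indicators and a more elaborate derived indicator can be sketched as follows (all figures are hypothetical placeholders, not real data):

```python
# Deriving a ratio indicator from basic counting indicators.
# The counts below are hypothetical, for illustration only.

qualified_teachers = 240        # basic indicator: result of a counting process
children_primary_age = 60_000   # basic indicator: children of primary-school age

# Derived indicator: qualified and experienced teachers per 1,000 children
teachers_per_1000 = qualified_teachers / children_primary_age * 1000

print(f"Qualified teachers per 1,000 children: {teachers_per_1000:.1f}")
# → Qualified teachers per 1,000 children: 4.0
```

The same pattern applies to other derived indicators such as cost per pupil: a basic count or total divided by a reference population.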
Indicators may belong to different categories: inputs, outputs, results or impacts.
Evaluation indicators and others
When an evaluation question pertains to an intended result or impact, it is worth checking whether this result or impact has been subject to performance monitoring. In such cases, the evaluation team uses the corresponding indicators and data, which is a considerable help, especially if baseline data have been recorded.
Performance monitoring may, however, be of little or no help in the instance of evaluation questions relating to cross-cutting issues, sustainability factors, unintended effects, evolving needs or problems, coherence, etc.
Quality of an indicator
- It measures or qualifies the judgement criterion or variable under observation with precision (construct validity). If necessary, several less precise indicators (proxies) may be combined to enhance validity.
- It provides straightforward information that is easy to communicate and is understood in the same way by the information supplier and the user.
- It is precise, that is, associated with a definition containing no ambiguity.
- It is sensitive, that is, it generates data that vary significantly when a change appears in what is being observed.
Performance indicators and targets are often expected to be SMART, i.e. Specific, Measurable, Attainable, Realistic and Timely. The quality of an evaluation indicator is assessed differently.