An analysis is required to convert data into findings, which in turn call for a judgement in order to be converted into conclusions. The analysis is carried out question by question, within the framework of an overall design that cuts across all questions.
Data, evidence and findings
Any piece of qualitative or quantitative information that has been collected by the evaluation team is called data, for instance:
- Document X indicates that the number of pupils has grown faster than the number of teachers in poor rural areas (data)
A piece of information is qualified as evidence as soon as the evaluation team assesses it as reliable enough, for instance:
- Document X, quoting Education Ministry data that are considered reliable, indicates that the number of pupils has grown faster than the number of teachers in poor rural areas (evidence)
Findings are facts established from evidence through analysis, for instance:
- The quality of primary education has decreased in poor rural areas (finding)
Some findings are specific in that they include cause-and-effect statements, for instance:
- The EC has not significantly contributed to preventing the quality of primary education from decreasing in poor rural areas (cause-and-effect finding)
Findings do not include value judgements, which are embedded in conclusions only, as shown below:
- The EC has successfully contributed to boosting the capacity of the educational system to enrol pupils from disadvantaged groups, although this has been at the expense of quality (conclusion).
Strategy of analysis
Four strategies can be considered:
- Change analysis, which compares measured / qualified indicators over time, and/or against targets
- Meta-analysis, which extrapolates upon findings of other evaluations and studies, after having carefully checked their validity and transferability
- Attribution analysis, which compares the observed changes with a "without intervention" scenario, also called counterfactual
- Contribution analysis, which confirms or disconfirms cause-and-effect assumptions on the basis of a chain of reasoning.
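The attribution strategy above rests on simple arithmetic: the effect attributed to the intervention is the observed change minus the change that the counterfactual scenario says would have happened anyway. A minimal sketch, with hypothetical figures chosen purely for illustration:

```python
# Minimal sketch of an attribution analysis: the intervention's effect is
# estimated as the difference between the observed change and the change
# expected in a "without intervention" (counterfactual) scenario.
# All figures below are hypothetical, for illustration only.

def attributed_effect(outcome_before: float,
                      outcome_after: float,
                      counterfactual_after: float) -> float:
    """Observed change minus the change expected without the intervention."""
    observed_change = outcome_after - outcome_before
    counterfactual_change = counterfactual_after - outcome_before
    return observed_change - counterfactual_change

# Example: an enrolment rate rose from 60% to 75%; a comparison group
# suggests it would have reached 68% without the intervention.
effect = attributed_effect(60.0, 75.0, 68.0)
print(effect)  # 7.0 percentage points attributed to the intervention
```

The whole difficulty of the strategy lies not in this subtraction but in constructing a credible counterfactual (e.g. a comparison group), which is why the choice is part of the methodological design.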
The first strategy is the lightest one and may fit virtually all types of questions, for instance:
- To what extent are the EC priorities still in line with the identified challenges?
- To what extent has the support taken into account potential interactions and conflicts with other EC policies?
- To what extent has the EC mainstreamed a given cross-cutting issue in the implementation of its interventions?
The last three strategies are better suited to answering cause-and-effect questions, for instance:
- To what extent has the EC contributed to achieving effect X?
- To what extent has the EC contributed to achieving effect X sustainably?
- To what extent has the EC contributed to achieving effect X at a reasonable cost?
The choice of the analysis strategy is part of the methodological design. It depends on the extent to which the question raises feasibility problems. It is made explicit in the design table.
Once the strategy has been selected and the data collected, the analysis proceeds through all or part of the following four stages: data processing, exploration, explanation, confirmation.
The first stage of analysis consists in processing information with a view to measuring or qualifying an indicator, or to answering a sub-question. Data are processed through operations such as cross-checking, comparison, clustering, listing, etc.
- Cross-checking is the use of several sources or types of data to establish a fact. Systematic cross-checking of at least two sources should be the rule, and triangulation (three sources) is preferable. Cross-checking should involve independent sources: a document that quotes another document is not an independent source, nor is an interviewee who has the same profile as another interviewee.
- Comparison proceeds by building tables, graphs, maps and/or rankings. Data can be compared in one or several dimensions such as time, territories, administrative categories, socio-economic categories, beneficiaries and non-beneficiaries, etc. The evaluation team typically measures change by comparing quantitative indicators over time. Comparisons may also be qualitative, e.g. ranking a population's needs as they are perceived by interviewees.
- Clustering proceeds by pooling data in accordance with predefined typologies, e.g. EC support per sector, beneficiaries per level of income.
- Listing proceeds by identifying the various dimensions of something, for instance the various needs of the targeted group as expressed in a participatory meeting, the various effects of a project as perceived by field-level stakeholders, or the various strengths and weaknesses of the EC as perceived through interviews with other donors' staff.
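The cross-checking operation above can be sketched in a few lines: a claim is retained as evidence only when it is backed by enough independent sources, where a source quoting another source does not count as independent. This is a simplified, hypothetical model (source names and the single "origin" attribute are illustrative assumptions, not a prescribed data structure):

```python
# Illustrative sketch of cross-checking: a claim is retained as evidence
# only when it is supported by at least two independent sources. A source
# quoting another source shares its origin and is not independent.
# Source names and origins below are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Source:
    name: str
    origin: str  # the original producer of the information

def is_cross_checked(sources: list[Source], minimum: int = 2) -> bool:
    """True if the claim is backed by enough independent origins."""
    independent_origins = {s.origin for s in sources}
    return len(independent_origins) >= minimum

claim_sources = [
    Source("Document X", origin="Education Ministry"),
    Source("Document Y", origin="Education Ministry"),  # quotes the same data
    Source("Field interviews", origin="School visits"),
]
print(is_cross_checked(claim_sources))             # True: two independent origins
print(is_cross_checked(claim_sources, minimum=3))  # False: triangulation not met
```

In practice the independence judgement is qualitative, not mechanical; the sketch only makes the counting rule explicit.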
Provisional findings emerge at this stage of the analysis. Further stages aim to deepen and to strengthen the findings.
The exploratory analysis aims to improve the understanding of all or part of the evaluated area, especially when knowledge is insufficient and expertise is weak, or when surprising evidence does not fit available explanations.
The exploratory analysis delves deeper and more systematically into the collected data in order to discover new plausible explanations such as:
- New categories / typologies
- Unforeseen explanatory factors
- Factors favouring / constraining sustainability
- Unintended effects
- New cause-and-effect assumptions.
The exploratory stage may not be needed for all questions. When such an analysis is carried out, brainstorming techniques are appropriate. The idea is to develop new plausible explanations, not to assert them.
The next stage, the explanatory analysis, ensures that a sufficient understanding has been reached in terms of:
- Precisely defined concepts, categories and typologies
- Plausible cause-and-effect explanations
- Identification of key external factors and alternative explanations.
Depending on the context and the question, the explanation builds upon one or several of the following bases:
- Diagram of expected effects
- Expertise of the evaluation team
- Exploratory analysis
A satisfactory explanation (also called explanatory model) is needed for finalising the analysis.
The last stage of the analysis is devoted to confirming the provisional findings through a valid and credible chain of arguments. This is the role of the confirmatory analysis.
To have a finding confirmed, the evaluation team undertakes systematic self-criticism by all possible means, e.g. statistical tests, searches for biases in data and analyses, and checks for contradictions across sources and analyses. External criticism from experts or stakeholders is also sought.
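One of the self-criticism checks mentioned above, the search for contradictions across sources, can be sketched as a simple tolerance test: if independent sources report values for the same indicator that diverge by more than an agreed margin, the provisional finding is flagged for further scrutiny. The indicator, figures, and 10% tolerance below are hypothetical assumptions for illustration:

```python
# Illustrative sketch of one confirmatory check: flag a contradiction when
# independent sources report values for the same indicator that diverge by
# more than a tolerance. Indicator names, figures, and the 10% tolerance
# are hypothetical.

def contradictions(values_by_source: dict[str, float],
                   tolerance: float = 0.10) -> bool:
    """True if the relative spread across sources exceeds the tolerance."""
    values = list(values_by_source.values())
    spread = max(values) - min(values)
    return spread / max(abs(v) for v in values) > tolerance

pupil_teacher_ratio = {
    "Ministry statistics": 48.0,
    "School survey": 51.0,
    "Donor report": 49.5,
}
print(contradictions(pupil_teacher_ratio))  # False: sources broadly agree
```

A flagged contradiction does not invalidate a finding by itself; it directs the team back to the sources to understand and resolve the discrepancy before the finding is confirmed.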