As far as necessary, the commissioning service clarifies in writing precisely what is to be evaluated, and who the main intended users of the evaluation will be.
An evaluation manager is appointed within the commissioning service. He/she works under the responsibility of the hierarchy.
Preliminary data collection
The evaluation manager reads the basic documents (fiche, logical framework, review, monitoring report, etc.), and has informal talks with a few key informants.
If no logical framework is available, then the logic of the project has to be reconstructed by the project/programme manager.
Constituting the reference group
The evaluation manager identifies the services and other interested bodies to be involved in the evaluation through a reference group.
Composition and role of the reference group
The reference group involves the project/programme management and the relevant EC services. If the evaluation is conducted in the partner country, then the group may involve development partners, experts, non-state actors, and other qualified participants. Membership should remain manageable (no more than 10). The group is chaired by the evaluation manager.
The reference group discusses and comments on all intermediary documents, generally at their draft stage: terms of reference, evaluation team's proposal, evaluation questions, work plan and debriefing of the field phase, and final report. It generally has an advisory role but may be required to approve the evaluation questions.
A reference group has substantial advantages in terms of access to information, accuracy of interpretations, and ownership of conclusions.
Meetings are moderated so as to avoid wasting time and to deal with conflicting views in a constructive manner. Timely circulation of working documents and minutes is essential.
A note is sent to the services and bodies invited to join, explaining the role played by the reference group.
Preparing the terms of reference
The main issues to be studied are identified by the evaluation manager. If a good logical framework is available and still valid, the evaluation manager may refine the issues to be studied into evaluation questions.
At some stage in the evaluation process, a series of precise questions (no more than ten) is selected with a view to satisfying the needs of the evaluation users and to ensuring the feasibility of the evaluation. A set of typical questions is presented in these guidelines.
By focusing the evaluation on key issues, the questions allow the evaluation team to collect accurate data, to deepen its analyses, to make its assessments in a fully transparent manner, and finally to produce a useful report.
Questions are written in a simple and precise way. As far as possible, they do not cover areas where other studies are available or in progress.
The set of questions is composed in such a way that the synthesis of all answers will enable the evaluation team to formulate an overall assessment of the project/programme. For this purpose, the set of questions covers the various levels of the logical framework and the seven evaluation criteria in a balanced way.
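The balance requirement above can be sketched as a simple check. This is an illustrative sketch, not part of the guidelines: the criterion labels are an assumption (the five DAC criteria plus coherence and EC value added are commonly cited as the seven EC evaluation criteria), and the ten-question ceiling and balance threshold are taken from the surrounding text.

```python
# Hypothetical helper: flag question sets that exceed the recommended maximum,
# leave criteria uncovered, or concentrate too heavily on one criterion.
from collections import Counter

# Assumed criterion labels (five DAC criteria plus coherence and EC value added).
CRITERIA = {
    "relevance", "effectiveness", "efficiency", "impact",
    "sustainability", "coherence", "ec value added",
}

def check_question_set(questions):
    """questions: list of (text, criterion) pairs; returns a list of warnings."""
    warnings = []
    if len(questions) > 10:
        warnings.append("More than ten questions: consider narrowing the set.")
    counts = Counter(criterion for _, criterion in questions)
    missing = CRITERIA - counts.keys()
    if missing:
        warnings.append(f"Criteria not covered: {sorted(missing)}")
    if counts and max(counts.values()) > 3:
        warnings.append("Coverage is unbalanced: one criterion dominates.")
    return warnings
```

An empty return value simply means the set passes these mechanical checks; whether the questions actually serve the intended users remains a matter of judgement.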
The profile of the external evaluation team to be engaged is specified as regards professional competence, sector expertise, and field work capacity.
A ceiling is set for the overall evaluation budget and the availability of resources is secured.
The timetable is specified in line with institutional requirements if necessary. Alternatively, the deadline for delivering the report is fixed with a view to the needs of the intended users.
The evaluation manager writes a first version of the terms of reference (ToRs), possibly building upon the template attached to these guidelines.
The reference group members are consulted on the draft version. The evaluation manager finalises the document and proceeds to the engagement of the external evaluation team, via the applicable tendering/contracting procedure.
Specific guidance in the case of a participatory evaluation
Template for the terms of reference
The evaluation manager receives the technical and financial proposal(s) prepared by the candidates. He/she checks that the proposal(s) covers:
- Understanding of terms of reference.
- Indicative methodological design.
- Detailed price.
- Planned schedule.
- Team members' responsibilities, CVs, and signed statements of absence of conflict of interest.
The evaluation manager takes part in the analysis of proposal(s). Reference group members are consulted, especially with a view to identifying and preventing conflicts of interest and risks related to independence.
Externality and independence
External evaluations are carried out by entities and persons free of the control of those responsible for the design and implementation of the project/programme.
Independence implies freedom from political influence and organisational pressure, full access to information and full autonomy in carrying out investigations and writing conclusions.
Externality and independence are meant to achieve credibility in the eyes of outside audiences, something which is particularly relevant if the evaluation is undertaken for purposes of accountability, for learning transferable lessons or for reallocating budgetary resources. Such evaluations are called "summative", as opposed to "formative" evaluations which are conducted for the benefit of those managing the project/programme, with the focus on improving their work and preferably with their full participation. Externality and independence may be of lesser importance in formative evaluations.
The evaluation manager assesses the quality of the proposal(s) and verifies that the human and financial resources offered are suitable for the particular difficulties identified while preparing the terms of reference.
Check list for assessing the quality of a proposal
- knowledge and working experience in the field of evaluation
- (if relevant) demonstrated ability to carry out participatory approaches
- technical and sectoral knowledge and expertise
- capacity to address essential cross-cutting thematic issues (e.g. gender equality, environment)
- experience in development cooperation, and EC cooperation in particular
- experience in the partner region, similar countries and/or the partner country
- adequate language skills
- understanding of the ToR
- understanding of the context
- proposed individuals have the time to successfully complete their task as planned in the schedule
- clear sharing of responsibilities and adequate leadership skills for effective team management and successful relations with partners and stakeholders
- commitment to strengthen evaluation capacity in the partner country
The evaluation manager engages the external evaluation team in the framework of the applicable tendering/contracting procedure. This is formalised in a contractual document, the date of which marks the beginning of the evaluation team's work.
The inception stage starts as soon as the evaluation team is engaged, and its duration is limited to a few weeks.
Within a few weeks after the start of the work, and after a review of basic documents complemented by a few interviews, the evaluation team defines its overall approach.
This approach is presented in a meeting with the evaluation manager and the reference group members. Subjects to be discussed include:
- Logical framework.
- Evaluation questions, either from the ToR or proposed by the evaluation team.
- Provisional methodological design.
- Access to informants and to documents, and foreseeable difficulties.
The presentation is supported by a series of slides and by a commented list of evaluation questions. Where relevant, the meeting may be complemented by an email consultation.
The evaluation manager receives an inception report which finalises the questions and describes the main lines of the methodological design, including the indicators to be used, the analysis strategy, and a detailed work plan for the next step.
The report is formally approved by an official letter authorising the continuation of the work.
If the set of evaluation questions is drawn up at this stage, it becomes part of the ToR.
FINALISATION OF FIRST PHASE (DESK)
The evaluation manager facilitates the retrieval of any relevant documents and access to key informants in the EC and partner Government(s).
He/she receives the first phase report (desk) which recalls the steps already taken and adds the following elements:
- Progress of the documentary analysis and limitations if there are any.
- Definition of any unclear term.
- Partial answers to the evaluation questions on the basis of documents.
- Issues still to be studied and assumptions to be tested during the field phase.
- Final methodological design including evaluation tools ready to be applied.
- Work plan for the field phase.
The evaluation manager submits the draft report to the reference group members for consultation. If appropriate, he/she convenes and chairs a meeting where the report is presented and discussed. Comments are taken into account by the evaluation team in a final version of the report. The evaluation manager approves the report and authorises the launching of the field phase.
Approval of reports
The members of the reference group comment on the draft version of the report. All comments are collected by the evaluation manager and forwarded to the evaluation team. The team prepares a new version of the report, taking the comments into account in two distinct ways:
- Requests for improving methodological quality are satisfied, unless there is a demonstrated impossibility, in which case full justification is provided by the evaluation team.
- Comments on the content of the report are either accepted or rejected. In the latter instance, dissenting views are mentioned in the report.
The manager verifies that all comments have been handled properly and then approves the report.
The evaluation manager checks that:
- Public authorities in the partner country/countries are informed of the future field work through the appropriate channel.
- Project/programme management are provided with an indicative list of people to be interviewed, dates of visits, itinerary, and names of team members.
- Logistics are agreed upon in advance.
The work plan is kept flexible enough to accommodate circumstances in the field.
The evaluation manager facilitates interviews and surveys by any appropriate means, such as mandate letters or informal contacts within the Government.
The manager is prepared to intervene swiftly at the evaluation team's request if a problem encountered in the field cannot be solved with the help of the project/programme manager.
Specific guidance in the case of a multi-country programme
One or several debriefing meetings are held in order to assess the reliability and coverage of data collection, and to discuss significant findings. At least one of these meetings is organised with the reference group.
The evaluation team presents a series of slides related to the coverage and reliability of collected data, and to its first analyses and findings. The meeting(s) provide an opportunity to strengthen the evidence base of the evaluation. No report is submitted in advance and no minutes are provided afterwards.
The evaluation manager receives the first version of the final report. The document should have the same format, contents and quality as the final version.
The evaluation manager assesses the quality of the report on the basis of an eight-criteria assessment grid. The assessment is double-checked by a second person.
Quality assessment criteria
The following eight criteria are derived from international evaluation standards and are compatible with them:
1. Meeting needs
- Does the report describe precisely what is evaluated, including the intervention logic and its evolution? Does it cover the appropriate period of time, target groups and areas? Does it respond to all ToR requests?
2. Appropriate design
- Is the evaluation design described in enough detail? Is it adapted to the project/programme? Are there well-defined and appropriate indicators? Does the report point out the limitations, risks and potential biases associated with the evaluation method?
3. Reliable data
- Is the data collection approach clearly explained and consistent with the overall evaluation design? Are the sources of information clearly identified in the report and cross-checked? Are the data collection tools (samples, focus groups, etc.) applied in accordance with standards? Have data collection limitations and biases been explained and discussed?
4. Sound analysis
- Is the analysis based on the collected data and focused on the most relevant cause/effect assumptions? Is the context adequately taken into account? Have stakeholders' inputs been used in a balanced way? Are the limitations identified, discussed and presented in the report?
5. Credible findings
- Are the findings derived from the data and analyses? Are interpretations and extrapolations justified and supported by sound arguments? Is the generalisability of findings discussed?
6. Valid conclusions
- Are the conclusions coherent, logically linked to the findings, and free of personal or partisan considerations? Do they cover the five DAC criteria?
7. Useful recommendations
- Are recommendations consistent with the conclusions? Are they operational, realistic and sufficiently explicit to provide guidance for taking action? Are they clustered, prioritised and devised for the different stakeholders?
8. Clear report
- Is there a relevant and concise executive summary? Is the report well structured, adapted to its various audiences, and not more technical than necessary? Is there a list of acronyms?
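The grid above can be thought of as a small data structure: one score and one qualitative comment per criterion, plus an overall score. The sketch below is an assumption for illustration only; the guidelines do not prescribe a numeric scale (1 to 5 is hypothetical) or an aggregation rule (a simple average is used here, whereas in practice the overall score is a judgement by the evaluation manager).

```python
# Hypothetical representation of the eight-criteria quality assessment grid.
CRITERIA = [
    "meeting needs", "appropriate design", "reliable data", "sound analysis",
    "credible findings", "valid conclusions", "useful recommendations",
    "clear report",
]

def assess(scores, comments):
    """scores/comments: dicts keyed by criterion name.

    Enforces that every criterion receives both a score and a qualitative
    comment, as the guidelines require, and derives an overall score.
    """
    missing = [c for c in CRITERIA if c not in scores or c not in comments]
    if missing:
        raise ValueError(f"Every criterion needs a score and a comment: {missing}")
    overall = round(sum(scores[c] for c in CRITERIA) / len(CRITERIA), 1)
    return {"scores": scores, "comments": comments, "overall": overall}
```

The point of the structure is the completeness check: a grid with a score but no comment (or vice versa) for any criterion is rejected, mirroring the requirement that qualitative comments be written for all criteria.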
The quality assessment should enhance the credibility of the evaluation without undermining its independence. It therefore focuses on the way conclusions are substantiated and explained and not on their content. The quality assessment must not be handled by those who are involved in the evaluated project/programme.
The evaluation manager and the evaluation team leader discuss the quality assessment. Improvements are requested if necessary.
The evaluation manager submits the draft report to the reference group members for consultation. If appropriate, he/she convenes and chairs a meeting where the report is presented and discussed. Special attention is paid to the utility of conclusions and the feasibility of recommendations.
At this stage, the evaluation manager may also convene a seminar with a view to discussing conclusions and recommendations in a wider arena. Attendance may include the Delegation staff, national authorities, other development partners, non-state actors, project management, and/or experts.
Comments are taken into account by the evaluation team in a new version of the report. The evaluation manager also receives an electronic version of the slides presented by the evaluation team.
He/she checks that the comments received have been taken into account in an appropriate way, and that the report is ready for dissemination, including the full set of annexes.
He/she runs a final quality assessment against the eight criteria of the assessment grid, writes qualitative comments for all criteria, and decides upon the overall quality score.
The evaluation manager sends the final version of the report and quality assessment to the reference group members, and thanks them for their contribution.
DISSEMINATION AND FOLLOW UP
Informing the hierarchy
The evaluation manager sends the report to the hierarchy, together with a short summary (1 to 2 pages maximum) highlighting the most relevant conclusions, lessons and recommendations, and a covering note indicating when and how the dissemination will proceed.
Dissemination of report
Two weeks later, or later if requested by the hierarchy, the manager publishes the report, the executive summary and the quality assessment grid on at least one publicly accessible website. Links are posted on other relevant websites.
The evaluation manager circulates the full length report to the relevant Commission services and other evaluation users.
Decision-makers and designers use the evaluation to reform or renew the project/programme, to confirm or to change strategic orientations, or to (re)allocate resources (financial, human and other). They are interested in clear, simple, and operational recommendations that are credibly grounded on evidence.
Managers and operators use the evaluation findings to adjust management, coordination and/or their interactions with beneficiaries and the targeted groups. They expect detailed information and are ready to interpret complex technical messages.
The institutions that funded the project/programme expect to receive accounts, i.e. a conclusive overall assessment of the project/programme.
Public authorities conducting related or similar projects/programmes may use the evaluation through a transfer of lessons learned. The same applies to the expert networks in the concerned sector.
Finally, the evaluation may be used by civil society actors, especially those representing the interests of the targeted groups.
The evaluation manager may organise one or several presentations targeted at audiences like: expert networks in the country or region, media, government-donor coordination bodies, non-state actors.
He/she may ask the evaluation team to participate in the presentation.
He/she may write an article to facilitate the dissemination of the main conclusions and recommendations.
As soon as the evaluation report is circulated, the evaluation manager draws up a list of recommendations, each associated with the respective addressees.
He/she invites the addressees to comment on the recommendations in writing. A similar invitation is sent out one year after the end of the evaluation. The evaluation manager collects all comments received; they may simply be filed or, preferably, published on the same website as the report.
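The follow-up step amounts to keeping a small register: each recommendation, its addressees, and the written comments collected at circulation and again a year later. The sketch below is a hypothetical helper, not part of the guidelines; class and method names are invented for illustration.

```python
# Hypothetical register for tracking follow-up on evaluation recommendations.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    text: str
    addressees: list
    comments: dict = field(default_factory=dict)  # addressee -> latest comment

    def record_comment(self, addressee, comment):
        # Only designated addressees are expected to respond.
        if addressee not in self.addressees:
            raise ValueError(f"{addressee} is not an addressee of this recommendation")
        self.comments[addressee] = comment

    def pending(self):
        """Addressees who have not yet commented in writing."""
        return [a for a in self.addressees if a not in self.comments]
```

The `pending()` list is what the evaluation manager would consult before sending the one-year reminder.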
This stage concludes the evaluation.