Synthesis of comments by Expert Panel on the evaluation manual of EuropeAid




1. Background

These notes were made after the draft manual for evaluation was presented to us, the expert panel, in early September 2005. They refer to the material sent to panel members in August 2005 and to the draft manual only; they do not take into account other formulations of evaluation policy or presentations of the practices of European development cooperation. The panel met on 16 September 2005, and before that each of us had spent some days reviewing the draft manual. Our meeting on the 16th lasted two hours, and we have since exchanged e-mails over these notes.



2. Overall impression of the manual

The draft manual is a very comprehensive set of documents and it sets out the evaluation process in great detail. It reflects a committed effort to clearly present and structure the evaluation process and to produce guidance that responds to the needs of people in the organisation.

The manual could also be very useful for other stakeholders who in various ways are affected by the evaluation processes of EuropeAid, for example consultants or representatives from authorities in partner countries.

The manual is well balanced. Its major component parts are of equal weight, and no aspect of the evaluation process receives more attention than its place in the process merits. Most of the sections that constitute the manual are of a similar length, of between one and two pages. The manual often treats complex concepts and processes with admirable brevity. At the bottom end of the hierarchical structure the texts are sometimes longer, even up to a dozen pages¹. However, this is to be expected, as the longer texts come at the end of a line of enquiry, and it poses no problem.

The manual addresses different readers and can be approached differently depending on who you are and what your interests are. This is useful and makes the manual as a whole more flexible and adaptive.

Most of the content of the manual reflects an emerging consensus in the evaluation community on how evaluation should be undertaken. It is up to date as of 2005 and should serve EuropeAid for many years to come.

Our comments should be seen in the light of an overall impressive product, one which reflects a comprehensive grasp of evaluation as a phenomenon and substantive skill in encoding this knowledge, in communicating it to others through the manual, and in designing a web-based source of organisational learning.



3. Issues of policy

We have some comments that relate to the overall evaluation policy as it is reflected in what is written, or not written, in the manual. Whether the manual ought to be amended to take account of our observations depends on the position that EuropeAid takes on the substance of these issues.



3.1. Practical guidance and advice on joint evaluation

It is often recommended that funding agencies should join forces in evaluation. There are many reasons for that, not least that the evaluation process becomes less of a burden for developing countries, whose project personnel and other officials otherwise have to spend too much time with external evaluation missions. Another reason is that projects and programmes are frequently financed and/or delivered jointly, and therefore it makes sense to evaluate them jointly. It has been strongly recommended by the OECD/DAC Working Group on Evaluation that coordination between member states should be increased.

Development cooperation is changing: although budget support, sector assistance and programme approaches have supplemented traditional project assistance for the past two to three decades, these so-called newer forms are becoming ever more prevalent. It is also these broader approaches to cooperation that are most suitable for joint work between funding agencies, and where such cooperation would have the greatest effect in partner countries.

However, cooperation between funding agencies in evaluation would require some compromises. As all the agencies concerned have their own ways of codifying and regulating the evaluation process, joint evaluation would often need a significant element of mutual adjustment. The manual does not contain any advice or guidance on how evaluation managers should approach joint evaluation. In future, evaluation managers are likely to find increasingly that they manage evaluations together with colleagues from other agencies, and hence they must adapt. This is a difficult task, and the question is whether some aspects of the process are less “negotiable” than others and, if so, how evaluation managers could best collaborate around them.

In our opinion, the evaluation manual would benefit from an additional section on joint evaluation. Furthermore, we would suggest that joint evaluation be described not as a problem and a further complication, but as an opportunity to strengthen the evaluation process, to achieve economies of scale and to make collaboration more effective for the host country.



3.2. Approach to participatory evaluation

Yet another significant development in the field of evaluation research lies in participatory evaluation. The concept arose through research on the utility and use of evaluation findings. In cases where the primary function of evaluation is formative, it has been found particularly useful to involve various stakeholders/participants in the process of defining questions, developing methods, collecting data and drawing conclusions. When there is a stronger sense of ownership of the intellectual outcome of the process, it is more likely that stakeholders will act on the recommendations. Stakeholders in this sense refers to organisations that are engaged in development cooperation, such as NGOs, consulting firms, and public authorities at central and decentralised levels.

When other actors are engaged in the process, it would again often be necessary to adjust it. The reference group defined in the manual may, for example, take on other roles and functions, and there might be supplementary bodies. The evaluation team may also have other roles and may need other competencies, acting perhaps more as facilitators of the process than as operators. As with joint evaluation, we think the manual could contain words of advice on participatory evaluation strategies, encouraging evaluation managers to experiment with participatory approaches and advising them how to adjust and adapt the process to accommodate other actors.



3.3. Ethical standards

The evaluation community has devoted much attention to the question of what constitutes evaluation quality. The Program Evaluation Standards of 1994² is one of the path-breaking and most influential publications on the subject. It discusses quality along four dimensions: utility, feasibility, accuracy and propriety. The manual that we have reviewed deals thoroughly with the aspects of the evaluation process that contribute to feasible, accurate and useful evaluation, but it does not contain much material on evaluation ethics.

There are good reasons to believe that ethical questions will, in the long run, prove particularly difficult in the evaluation of development cooperation. Asymmetrical power relations, the prevalence of donor-recipient modes of thinking and acting, the often perverse incentive systems around aid, and cross-cultural differences all combine to make aid evaluation difficult and subject to intricate ethical choices. The manual could be supplemented with a section on what ethical standards mean and why they are so important in aid. It could also outline some of the ethical pitfalls that evaluation managers and evaluation teams are likely to face, and fall into.



3.4. Integration with related systems

As far as we have understood it, EuropeAid as an organisation applies a system of Results Based Management (RBM). It is not quite clear how this system as a whole, and the decision-making within it, will relate to the evaluation system. There is ambiguity as to whether these materials are designed to work within, or outside of, a commitment to the results-based management perspective. Some passages hint at an acceptance of RBM as the conceptual framework for the material, while several others, especially the sections on indicators and targets, suggest otherwise. This needs attention, and the materials should open with an explicit statement of how they relate to the RBM perspective.

We also think there is a need for further discussion of how the materials presented here link in explicit ways to monitoring and to auditing. There is no discussion of the linkages that would be necessary to create a monitoring and evaluation (M&E) system in a country, for programmes, or even for projects. In fact, the materials go out of their way to distinguish evaluation from both monitoring and auditing, but nothing is made of the similarities and linkages that could come from greater coordination among these approaches. In the past, EU evaluations have often been constrained by the limited usefulness of monitoring data. It would be a pity if the opportunity to better coordinate these streams of activity were not taken.



4. Concluding remarks

It has been useful for us to go back and attempt an overview of the website materials after more than 2 years of effort. At the core of the task set for the expert group lies an assessment of the ‘quality’ of these materials. ‘Quality’, however, is a concept notoriously open to both broad and narrow interpretations. There could be:

  1. a narrow definition of scientific quality, and
  2. a broad definition of quality as the ultimate usefulness of the material to the range of its intended users.

With respect to 1 (scientific quality/technical adequacy), EuropeAid has done well. There are some criticisms to be made but, in general, both the comprehensiveness of these texts and their technical accuracy/awareness are sound.

With respect to 2 (usefulness to stakeholders), however, we suggest that there is still some way to go. ‘Motivation’ is almost as important as ‘credibility’ or ‘dissemination’, but we have so far seen little on this website to motivate evaluation managers. The material equips them to do a job, but does little to convince them that the job is valuable, creative, interesting and generally worth doing. As we understand it, many of the evaluation managers in the delegations will have plenty of other responsibilities, so their evaluation work is in constant danger of becoming just an additional chore to be minimized. While no website can change that by itself, it can at least show some sympathy for the problem.



5. Experts

Kim Forss, Andante, Stockholm

Kim Forss works for Andante - tools for thinking AB. His work is dedicated to research and consultancy in the field of evaluation research, policy research on international relations, and organisational development. He has published on the quality and use of evaluation results from the perspective of improved democratic management.


John Mayne, Ottawa

John Mayne is an independent advisor on public sector performance. Previously, he was with the Office of the Auditor General of Canada and the Treasury Board Secretariat of Canada. He has worked extensively on evaluation quality assessment. He has also edited several books in the area of program evaluation, including an international volume on evaluation and performance measurement.


Christopher Pollitt, Rotterdam University

Christopher Pollitt has a long record of research and teaching in evaluation and public management, first at Brunel University (UK) and currently at Rotterdam University (NL). He has taken part in many OECD research networks in these areas and has been chairman of the European Evaluation Society. He has produced a critical analysis of guidelines.


Ray Rist, World Bank - OED, Washington

Ray Rist currently works as a methodological adviser for OED (World Bank). He previously worked at the US General Accounting Office. He has extensive knowledge of evaluation practices worldwide.


Pierre Spitz, INRA, Paris

Pierre Spitz works for the Institut National de la Recherche Agronomique (INRA) in Paris. His area of expertise is rural development. For the last decade he worked for UN agencies.



1 Sections on evaluation tools can go up to 40 pages.
2 The Joint Committee on Standards for Educational Evaluation, The Program Evaluation Standards, Thousand Oaks, CA: Sage Publications, 1994.
