

Evaluation of Labour Market Policies and Programmes: methodology and practice


The Peer Review explored the practice followed by the UK Department for Work and Pensions (DWP) in evaluating their labour market and employment support policies and programmes and the use of the resulting evidence to inform policy development.

The UK Department for Work and Pensions and Jobcentre Plus hosted a Peer Review (PR) in London in September 2011 as part of the Mutual Learning Programme. The event brought together ministry officials and independent experts from twelve countries (Belgium, Cyprus, Czech Republic, Denmark, France, Germany, Iceland, Italy, Netherlands, Norway, Portugal and Turkey), as well as representatives from DG Employment, Social Affairs and Inclusion of the European Commission.

The Department is widely regarded as exemplifying good practice in its analytical capabilities and its commitment to evidence-based policy making at all levels. This is illustrated by the level of resources committed to evaluation, both in externally commissioned research and in its internal teams of analysts; the location of analysts within policy teams, ensuring that evaluation is built into policy from the outset; the strong commitment to disseminating all commissioned evaluation work and to safeguarding its independence; and the active use of research and evaluation findings to inform policy development and implementation.

The key findings from the discussions at the event are summarised under the following headings:

How can Member States encourage an evaluation culture and ensure that policy makers engage early with evaluation issues and commit sufficient resources to evaluation?

Many different factors contribute to fostering an evaluation culture. These include political commitment to evidence-based policy making (accountability); legal requirements for evaluation (e.g. impact assessments built into large-scale policy measures); large-scale policy changes (e.g. the New Deal, the Hartz IV reforms); the requirements of European funding; and the training and education of evaluators to embed evaluation throughout the policy cycle.

It was acknowledged that the main supporting factors are strong political buy-in, strong analytical capacity, a commitment to disseminating the results and ensuring that findings are delivered within the policy cycle timeframe. In short, evaluation must be seen to be useful and be used.

A number of challenges were also highlighted, such as evaluation being seen as part of a blame culture, the decentralisation of governance structures requiring a higher level of co-ordination, and the need to embed academic experts more effectively by attaching higher value to evaluation research.

Which methodological approaches for evaluation are used and what are the barriers and enablers for their use?

While some countries focus primarily on specific methods, others use a wide variety. The key is that the selected methods are fit for purpose in relation to the policy intent. Examples of approaches showcased included:

  • Evaluation of pilots (in representative environments) as a way of testing the feasibility of the policy.
  • Experimental projects at ground level (e.g. Inspiration projects in Denmark), which are evaluated using various forms of register-based or quasi-experimental methods. These projects give room for creativity to develop more experimental approaches at ground level.
  • Randomised controlled trials, where the impact of a specific programme is estimated by comparing a group of participants with a control group on the desired outcomes of the programme. These are increasingly considered the gold standard, but they also raise ethical and legal concerns and are not easily affordable in some instances.

The existence of different sources of funding for research and evaluation was also a consideration when looking at preserving independence and creativity (e.g. social research councils).

The main barriers identified are related to:

  • Availability of data and ensuring access to the research community;
  • Linking up databases from different organisations;
  • Sharing outcomes/findings gathered at regional level;
  • Ensuring that evaluation is participatory, involving frontline staff to ensure that results translate into delivery practice;
  • Public procurement rules (community benefit clause).

What is best practice in ensuring that evaluation results feed into future policy development and delivery practice?

A series of key success factors were found to underpin the identified best practice:

  • Organisational commitment to evidence-based policy development and a culture of evaluation;
  • The provision of an overall framework for evaluation (i.e. the Magenta Book, an annual planning process, training and qualification of staff);
  • Ensuring that the design process is inclusive, involving key stakeholders depending on the policy area;
  • The level of resource commitment, both financial and in terms of internal capacity and capability, and the embedding of analysts in policy teams;
  • Availability of rigorous evaluation leading to clear policy messages (quantitative, qualitative and cost-benefit analysis; independence of analysts);
  • Ensuring an effective commissioning process, with rapid mobilisation through known suppliers;
  • Strong commitment to dissemination and sharing learning, providing evidence that evaluation adds value and acting on findings;
  • Existence of a feedback loop on evaluation that highlights even those cases where findings were not taken into account;
  • Accountability and resource savings: as well as demonstrating the return on investment of public money, evaluation evidence can lead to savings where interventions are not effective.

How can the European Commission encourage an evaluation culture and promote co-operation?

The European Commission (EC) has an important role to play as a facilitator in this process. In practical terms, this could mean supplementing the Eurostat database of active labour market policies with evaluation results and sharing the progress of the development of WEESP (a web tool of evaluated employment service practice). At the political/legislative level, the EC could tighten up evaluation requirements for European funding and actively support the linking up of administrative databases. The EC can also foster dialogue between the interested parties in a number of ways: using the European Employment Research Dialogue as a permanent vehicle for wider exchange; building on the role of the ESF evaluation network to inform greater thematic concentration and to improve evaluation methodologies in the ESF; fostering further thematic and methodological follow-up to the Peer Review; bringing together national networks of evaluators; placing a more regular focus on evaluation in Peer Reviews; and enhancing European comparative research funded by the European Commission.
