What is the value of evaluation? New DFID discussion paper
“No-one will argue that using evidence is ever bad” (What Works Review of the Use of Evidence in DFID, 2014)
‘Evidence’ is often seen as axiomatically ‘a good thing’. This means that the value of evaluation and other research products can come to seem self-evident, and the justification for evaluating can go unquestioned. As with any other investment decision makers make, however, investment in evaluation requires a rationale, and the spend needs to be proportionate to the expected value generated. The difficulty lies in deciding how much to spend to generate evidence; i.e. what is the value of evidence, and is it a good buy?
DFID has been at the forefront of supporting the generation of evidence to meet the increasing demand for knowledge about what works in international development. Monitoring and evaluation have become established tools for donor agencies and other stakeholders to demonstrate accountability and to learn. At the same time, the need to demonstrate the impact and value of evaluation activities has also increased. However, there is currently no systematic approach to valuing the benefits of an evaluation, whether at the individual or at the portfolio level.
In international development, where evidence-informed policymaking remains a core paradigm, a number of organisations have grappled with these issues over the past few years. For example, the World Bank’s @CarolineHeider published several blog posts on the Value for Money of evaluations; WB DIME’s @alegovini, @vincenzodimaro and @piza_caio worked on a study analysing the effect of impact evaluations on project disbursement rates; and @3ieNews, together with @IDinsight, published a paper on how to maximise the social impact of impact evaluations. Last year, DFID organised a conference on the topic in London (see Twitter hashtag #dfidevaluation). Following up on the conference, Julian Barr, Rob Lloyd, Anna Henttinen, Danielle Dunne and I have worked on a discussion paper on the value of evaluation, which is now available online on R4D (click here).
The paper discusses two key questions for practitioners around the debate:
1. What different methods and approaches can be used to estimate the value of evaluations before commissioning decisions are taken, and what tools and approaches are available to assess the value of an already concluded evaluation?
2. How can these approaches be simplified and merged into a practical framework that can be applied and further developed by evaluation commissioners to make evidence-based decisions about whether and how to evaluate before commissioning and contracting?
To answer these questions, the paper first reviews valuation techniques from a range of academic disciplines and examines their challenges and potential usefulness for valuing evaluations. Most of these techniques are relatively time-consuming to use, require a specific set of skills, and can only be applied where high-quality data are abundant. While some can generate detailed and specific estimates of value, they often do so at the expense of wider utility. The paper finds that most ex-ante techniques may be too time-consuming for evaluation commissioners, including DFID, to use routinely. More complicated and time-consuming valuation techniques may be justified where the benefits are likely to be large, for example where the information generated by an evaluation has the potential to be scaled up massively and used across countries and/or agencies.
In contrast, some of the ex-post techniques are suitable for further adaptation and use. The paper also includes two case studies quantifying the benefits of evaluations ex-post, and outlining some of the key challenges. Drawing on this analysis, this paper presents a framework that can be further developed by evaluation commissioners into an ex-ante tool to articulate and estimate the potential benefit of evaluations that they plan to commission.
The paper argues that the value proposition of evaluations is context-specific, but that it is closely linked to the use of the evaluation and the benefits conferred on stakeholders by the use of the evidence that the evaluation provides. Although it may not always be possible to quantify and monetise this value, it should always be possible to identify and articulate it.
In the simplest terms, the cost of an evaluation should be proportionate to the value it is expected to generate. This means that it is important to be clear about the rationale, purpose and intended use of an evaluation before investing in one. To provide accountability for evaluation activity, decision makers are also interested in knowing whether an evaluation was ‘worth it’ after it has been completed.
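To illustrate the proportionality idea, the ex-ante comparison can be sketched as a simple expected-value check: the chance that an evaluation's findings will actually be used to improve a decision, times the benefit of that improved decision, set against the evaluation's cost. This is a hypothetical illustration only, not a method from the paper; the function name, the `p_use` parameter and all figures are invented for the sketch.

```python
def expected_evaluation_value(p_use, decision_benefit):
    """Expected benefit of commissioning an evaluation:
    the probability the findings are actually used to change a decision,
    times the extra benefit that better-informed decision would confer."""
    return p_use * decision_benefit

# Hypothetical figures: a 30% chance the findings change a decision
# worth an extra 2,000,000 GBP to the programme.
value = expected_evaluation_value(p_use=0.3, decision_benefit=2_000_000)

cost = 250_000  # proposed evaluation budget (hypothetical)

# Commission only if the expected value exceeds the cost.
worth_commissioning = value > cost
```

Even this crude sketch makes the paper's point concrete: the case for spending depends on articulating intended use (here, `p_use`) and expected benefit before commissioning, not only on the quality of the evidence produced.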
I would be interested to hear what others, including evaluators, evaluation commissioners and other development professionals, think about the paper and the wider debate. Please comment on/share this blog and/or get in touch.