Sounds good… but what will it cost? Making the case for rigorous costing in impact evaluation research
This post was co-authored by Liz Brown (Costing Analyst, CEGA) and Marie Gaarder (Director of the Evaluation Office and Global Director for Innovation and Country Engagement, 3ie) with contributions from CEGA Executive Director Carson Christiano. It is cross-posted on 3ie’s Evidence Matters blog.
Imagine two government programs — a job training program and a job matching program — that perform equally well in terms of boosting employment outcomes. Now think about which is more cost-effective. If your answer is “no idea,” you’re not alone! Most of the time, we don’t have the cost evidence available to discern this important difference.
Cost evidence* is essential for deciding how scarce resources can best be used to alleviate poverty. Access to combined cost and impact evidence allows decision-makers to assess whether a program or intervention is worth its cost, a calculation that is practically impossible with impact evidence alone. It also allows us to compare benefits and costs across alternative development programs and to weigh the case for scaling or replicating a successful program. Those familiar with research processes will also understand that it is far more efficient to plan for cost-effectiveness or cost-benefit analysis (CEA/CBA) early and collect cost data throughout the implementation phases of an evaluation than to return to costing after a program has closed.
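A toy calculation makes the point concrete. The sketch below, in Python with entirely invented cost and impact figures, computes a simple cost-effectiveness ratio for the two hypothetical programs from the opening example:

```python
# Hypothetical illustration: two programs with the same impact but
# different delivery costs. All figures are invented for this sketch.

def cost_effectiveness(total_cost, participants, effect_per_participant):
    """Cost per unit of impact (here, dollars per additional job created)."""
    total_effect = participants * effect_per_participant
    return total_cost / total_effect

# Suppose both programs raise employment by 5 percentage points
# (0.05 additional jobs per participant), but cost very different amounts.
training = cost_effectiveness(total_cost=500_000, participants=1_000,
                              effect_per_participant=0.05)
matching = cost_effectiveness(total_cost=150_000, participants=1_000,
                              effect_per_participant=0.05)

print(f"Job training: ${training:,.0f} per additional job")  # $10,000
print(f"Job matching: ${matching:,.0f} per additional job")  # $3,000
```

With impact evidence alone the two programs look identical; the cost side is what reveals that one delivers the same result at less than a third of the price.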
Still, researchers rarely integrate CEA/CBA into impact evaluations (IEs) of development interventions.
How bad is the current situation?
A recently published World Bank study, Integrating Value for Money and Impact Evaluations (Brown and Tanner, 2019), suggests that fewer than one in five development impact evaluations attempt to estimate the “cost per impact” or the relative worth of interventions with comparable outcomes. Of those that do, many are unreliable due to poor data quality, or to a lack of transparency in the reporting of methods and assumptions. This frequently leads to a one-sided presentation of impact evidence and decreases the policy relevance and influence of these studies.
Yet decision-makers want cost information. Experimental evidence from a survey of policymakers included in the World Bank study shows that policymakers’ willingness to pay for sound cost information is no less than their willingness to pay for causal information on benefits. Moreover, policymakers value the combination of cost and benefit information more than they value either cost or benefit information on its own.
You may find yourself wondering, as we do: if integrating value for money analysis into impact evaluations is such a great idea, why is it so infrequently and poorly done?
The World Bank study used mixed methods to investigate the barriers and challenges preventing greater integration of CEA/CBA in impact evaluations. Low production is partly due to limited researcher training and a lack of codification of the assumptions necessary for CEA/CBA (e.g., time horizons, discount rates, and economic or financial cost accounting). Demand from academic journal editors to publish CEA/CBA alongside IE results is tepid, partly because ill-defined standards of rigor undermine editors’ and reviewers’ capacity to confidently judge the quality of CEA/CBA when it is integrated with impact evaluation evidence.
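To see why codifying assumptions matters, consider how the choice of discount rate and time horizon alone can flip a conclusion. The sketch below uses invented figures (not numbers from any actual study) to show the same program passing a benefit-cost test under one defensible set of assumptions and failing under another:

```python
# Illustrative only: how analyst assumptions change a benefit-cost ratio.

def npv(flows, rate):
    """Net present value of a stream of annual flows, discounted at `rate`."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def benefit_cost_ratio(annual_benefit, upfront_cost, years, rate):
    """Discounted benefits over an upfront cost; benefits start in year 1."""
    benefits = [0] + [annual_benefit] * years
    return npv(benefits, rate) / upfront_cost

# Same hypothetical program, two plausible analyst choices:
generous = benefit_cost_ratio(annual_benefit=120, upfront_cost=1_000,
                              years=10, rate=0.03)
strict = benefit_cost_ratio(annual_benefit=120, upfront_cost=1_000,
                            years=5, rate=0.10)

print(f"10-year horizon, 3% discount rate: ratio = {generous:.2f}")
print(f" 5-year horizon, 10% discount rate: ratio = {strict:.2f}")
```

Under the first set of assumptions the ratio exceeds 1 and the program looks worthwhile; under the second it falls below 1 and looks like a loss, which is exactly why transparent reporting of these choices is essential.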
At the same time, institutional funders of IEs do not consistently demand that cost analysis be integrated into the evaluations they fund, and they face the same challenge as journal editors in judging the quality of CEA/CBA evidence. There is also a misguided perception among researchers that policymaker interest in CEA/CBA is low, a perception compounded by practical constraints related to poor data quality and by the view that policymakers are unlikely to choose projects based on efficiency analysis alone, since political factors tend to dominate project selection. These factors collectively undermine researchers’ incentives to integrate the cost side.
The Center for Effective Global Action (CEGA) and 3ie are working together to address these challenges, with the ultimate goal of increasing the number of IEs that integrate CEA/CBA methods. First, we plan to study and improve “the market” for producing and using cost evidence. Then, working closely with a number of partner organizations, we will work to establish “best practice” methods and tools for producing cost evidence and identify a model for long-term cost research sustainability. We elaborate on these activities below, and invite feedback from those involved in related or complementary efforts.
1. Study and improve “the market” for producing and using cost evidence
We don’t yet have a clear sense of how cost evidence is — or could be — used by the budget-constrained decision-makers who ask for it in conjunction with impact evidence. Many of the researchers and evaluators we have interviewed are concerned that CEA/CBA evidence generated during an impact evaluation would have little relevance to the specific decisions facing frontline decision-makers in developing countries.
CEGA and 3ie are eager to leverage existing policy connections in low- and middle-income countries — for example through CEGA’s EASST network and affiliate research being carried out in collaboration with government ministries — to better understand what type of cost evidence is most useful to decision-makers, in what format, and on what time frame. This is an important first step in reducing uncertainty about what decision-makers need and want to know and when they need to know it.
2. Establish “best practice” methods and tools
The standards of rigor for integrating CEA/CBA into academic impact evaluation studies in development economics are not well defined. The credibility of CEA/CBA is often undermined by opaque reporting of methods, data sources, prices, and discount rates, making it difficult for consumers of that evidence to establish comparability and to accurately interpret the results of cost-effectiveness analyses performed in different settings and countries.
Our current set of best practice methods and tools for conducting CEA/CBA has not yet been harmonized. As a wider set of practitioners applies existing tools in different sectors and contexts, there are more opportunities to learn from the experiences of others and identify technical gaps. For example, further attention is needed to develop useful tools and guidance for reporting uncertainty in effect and expenditure estimates that meet academic research standards and increase the relevance of IEs to decision-makers. Reproducibility of CEA/CBA also deserves further attention, to define common standards and expectations.
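As one illustration of what such guidance might cover, a percentile bootstrap is one simple way (among several) to attach an uncertainty interval to a cost-effectiveness ratio. The sketch below uses invented per-participant effect estimates and, for simplicity, treats cost as fixed, even though real expenditure estimates carry uncertainty too:

```python
import random

def bootstrap_cer_interval(effects, cost_per_participant, n_boot=2000, seed=0):
    """Percentile bootstrap 95% interval for cost per unit of effect.

    `effects` are per-participant impact estimates; `cost_per_participant`
    is assumed known and fixed in this simplified sketch.
    """
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        resample = [rng.choice(effects) for _ in effects]
        mean_effect = sum(resample) / len(resample)
        if mean_effect > 0:  # skip resamples with no measurable effect
            ratios.append(cost_per_participant / mean_effect)
    ratios.sort()
    return ratios[int(0.025 * len(ratios))], ratios[int(0.975 * len(ratios))]

# Invented effect estimates (e.g., additional jobs per participant)
effects = [0.03, 0.05, 0.07, 0.04, 0.06, 0.05, 0.02, 0.08]
low, high = bootstrap_cer_interval(effects, cost_per_participant=500)
print(f"Cost per unit of effect: roughly ${low:,.0f} to ${high:,.0f}")
```

Reporting a range rather than a single ratio makes clear to decision-makers how much confidence the underlying data can support.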
Organizations like CEGA, 3ie, J-PAL, IPA, IRC, Evidence Action, SIEF, and others that fund, produce, quality-assure, and synthesize rigorous evidence on development effectiveness are well positioned to organize the market for CEA/CBA knowledge. This needs to be done in coordination with academic researchers and journal editors to ensure that rigor is not compromised in the production and dissemination of CEA/CBA evidence.
3. Build a model for cost research sustainability
The dedicated expertise needed to adapt CEA/CBA methods to the complexities of unique academic experiments is often unaffordable, unavailable, or under-trained. To lower the cost of generating CEA/CBA while increasing its production, we need mechanisms to centralize and share resources; render workflows more transparent; transparently communicate cost parameters, assumptions, and sensitivities alongside cost analysis (e.g., by identifying the attributes of exemplar costing studies); and establish a sustainable business model for conducting CEA/CBA.
A few relatively small, time-limited investments in academic-grade public goods and training tools could go a long way toward making rigorous cost analysis less costly for research teams.
Momentum is building
The time has come to collectively understand and remove the barriers impeding cost data collection and analysis, and to align the incentives that currently discourage more widespread production of this important input to policy decision-making.
Inspired by the success of the Berkeley Initiative for Transparency in the Social Sciences (BITSS), which has helped to boost the adoption of transparent and reproducible research across the social sciences, CEGA is now incubating a new cost transparency initiative (CTI). The goal is to dramatically increase the production of rigorous, high-quality cost evidence by the development research community.
We are aware of, and applaud, more than 10 distinct efforts to collect cost data in more rigorous and systematic ways, or to produce more standardized guidance for costing in development economics. These efforts involve J-PAL, IPA, 3ie, USAID, Evidence Action, Development Innovation Ventures (DIV), IRC, the World Bank, and DFID, among others.
We are pleased to see others taking up this cause, as evidenced by a recent blog by SIEF and an earlier blog by IEG. Our SIEF and IRC colleagues recently reframed the absence of cost data collection as a “first mile problem” and jointly produced the very practical Cost Capture Guidance for collecting rigorous cost data. This important work recognizes and builds on J-PAL’s Costing Template and Guidance. But more is needed.
CEGA and 3ie are preparing to lead a collective, coordinated effort to shift norms and expectations about what constitutes rigorous evidence. Understanding the market relevance of CEA/CBA evidence in conjunction with our partners, improving the quality and transparency of CEA/CBA analyses, and lowering their production costs together have the potential to increase the number of impact evaluations that integrate the cost evidence policymakers want to see. We do this to amplify the policy relevance of evidence for those responsible for selecting the poverty alleviation programs that offer the biggest bang for the buck.
Are you with us?
* “Cost evidence” refers to cost data and the analysis methods used to assess an evaluated program’s economic efficiency. In practice, impact evaluations most often integrate cost-benefit or cost-effectiveness analysis (Brown & Tanner, 2019). Other types of cost evidence, such as program scale and replication models that assist policy decision-making, also require high-quality cost data and impact evidence to be reliable.