How Donors can Support the Generation of Cost Evidence in Impact Evaluation Research

The Center for Effective Global Action (CEGA)
Dec 9, 2020

This post was written by CEGA Staff Scientist Liz Brown, with contributions from Executive Director Carson Christiano. It summarizes insights from Cost-ober — a series of virtual panels in fall 2020 exploring the demand for high quality cost evidence among journal editors, donors, policymakers, and development engineers. CEGA acknowledges Program Manager Sam Fishman for his efforts to organize these panels.

Photo: Women attend a BRAC microfinance group meeting in Simgisi village of Arumeru district in Arusha region. (Credit: BRAC/Shehzad Noorani)

Believe it or not, policymakers can spend the same amount of money while saving more lives by using cost-effectiveness analysis to guide their program allocation decisions.

Take, for example, the recent experience of GiveWell in Uganda. Careful analysis of the expected benefits and costs of an anti-malaria program showed that prioritizing the districts where resources could go the farthest would save twice as many lives as alternative strategies. Ultimately, twice as many people were able to access the life-saving program as would have under any other method of prioritization. Put simply, more and better cost analysis means more “bang for the buck.”
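The intuition behind this kind of prioritization can be sketched with a toy allocation exercise. All figures below are hypothetical (they are not GiveWell's numbers): given a fixed budget and district-level estimates of cost per life saved, funding the cheapest-to-reach districts first saves the most lives.

```python
# Toy illustration (all figures hypothetical): allocate a fixed budget
# across districts by funding the lowest cost-per-life-saved first.

def allocate(budget, districts):
    """districts: list of (name, cost_per_life_saved, program_cost)."""
    lives_saved = 0.0
    funded = []
    # Greedy: fund districts in ascending order of cost per life saved.
    for name, cost_per_life, program_cost in sorted(districts, key=lambda d: d[1]):
        if program_cost <= budget:
            budget -= program_cost
            lives_saved += program_cost / cost_per_life
            funded.append(name)
    return funded, lives_saved

districts = [
    ("District A", 2_000, 100_000),  # $2,000 per life saved
    ("District B", 4_000, 100_000),
    ("District C", 8_000, 100_000),
]

funded, lives = allocate(200_000, districts)
print(funded, lives)  # funds A and B: 50 + 25 = 75 lives saved
```

Under these made-up numbers, spending the same $200,000 without cost evidence (say, funding Districts B and C) would save only 37.5 lives, half as many.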

Together with colleagues at 3ie and other leading research institutions, CEGA has identified an urgent need for credible, consistent, and comparable evidence of program costs in global development. This evidence, taken together with evidence of program impacts, can help researchers reliably calculate program cost-effectiveness and model program costs at scale — both critical inputs to the policy decision-making process.

Unfortunately, chronic inattention to cost evidence within the global development research community has led us to a world in which:

  • the tools and methods used in cost analysis lag behind the econometric methods used in impact analysis;
  • we have few reliable estimates of cost despite growth in the evidence on program impacts;
  • we have insufficient quality and quantity of cost evidence to support the modeling of program scale-up decisions; and
  • few researchers can say with confidence that they know how to plan and carry out a rigorous costing study.

Thankfully, donor organizations like USAID are stepping up to help drive more and better costing of development interventions. In November 2020, USAID issued new guidance stating that “all impact evaluations [funded by USAID] must include a cost analysis of the intervention or the interventions being studied” (201.3.6.4). Recently, the Center for Global Development argued that establishing USAID as a leader in evidence-based aid would require more cost-effectiveness analysis from the Agency and greater coordination.

This fall, CEGA and members of our Costing Community of Practice hosted “Cost-ober” — a series of virtual panels exploring the demand for high-quality cost evidence among journal editors, donors, policymakers, and development engineers. Panelists expressed overwhelming support for cost analysis and shared specific steps that their institutions are taking — or considering taking — to support this important work.

In this post, we provide a framework for donors (and other stakeholders) to consider when deciding how best to support costing in development research. The activities we discuss can be categorized along the following dimensions (also illustrated in Figure 1, below):

  • Targeted vs systematic: Some donors may wish to understand the costs of a specific program or policy (targeted costing activities). Others may wish to support the production of cost evidence on a larger scale, and in a more systematic way, to inform programming decisions across a broader portfolio (systematic costing activities).
  • Private vs public goods: Donors may wish to invest in the development of costing tools for internal use (private good). However, if they see the value in building a rigorous and transparent cost evidence base, they may wish to invest in the development of methods and generalizable tools for wider application in the field (public good).

Figure 1: Coordinating investments in the development of the cost evidence base

This figure illustrates a set of activities that donors can support that can help drive more and better cost evidence for the global development community. Activities in green represent those that generate cost evidence primarily for internal or private use (Strategies 1 and 2, below). Activities in orange represent those that disseminate targeted cost evidence to the broader research and policy community (Strategy 3). Activities in blue represent those that help drive the generation of rigorous cost evidence at scale (Strategy 4).

Donors may identify with one or more of the following strategies for supporting the generation of cost evidence in development research:

Strategy 1: Support targeted cost evidence for private use.

Perhaps the easiest way to invest in the generation of cost evidence is to support it at the project level, for direct use by donors and/or policymakers in decision-making. The donors who joined our second Cost-ober panel, “How Do Donors Think About Cost Evidence?”, noted that it’s particularly valuable to support cost data collection and analysis for interventions with a high potential to scale. Analysis of cost drivers, cost-per-impact estimates, and the modeling of alternative scale modalities can lead to cost-savings in implementation through targeting and procurement decisions.

If a donor wants to see rigorous cost evidence for a given program or intervention, it is best to integrate funding for this work into the research agenda at the request for proposal (RFP) stage. At the very least, an RFP can ask research teams to state which intervention activities will be included in the cost calculations and which data sources will be used to estimate them. In tandem, the research budget template should have designated line items for costing. A very rough heuristic of the cost of costing assumes the exercise will require about 10% of the total research budget. However, the complexity and scale of the intervention may increase the time and expertise needed for the costing exercise.
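The heuristic above can be written down directly. The 10% share comes from the post; the complexity adjustment is a hypothetical illustration of how a donor might scale that share upward for more demanding interventions.

```python
# Rough heuristic from the post: budget ~10% of the total research
# budget for the costing exercise. The complexity multiplier is a
# hypothetical adjustment, not a published rule.

def costing_line_item(total_research_budget, share=0.10, complexity_multiplier=1.0):
    """Return a placeholder costing budget for an RFP budget template."""
    return total_research_budget * share * complexity_multiplier

print(costing_line_item(500_000))                            # 50000.0
print(costing_line_item(500_000, complexity_multiplier=1.5)) # 75000.0
```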

Strategy 2: Support systematic cost evidence for private use.

Donors that support impact evaluation research may wonder about the value of rigorously evaluating a given intervention (or set of interventions). A robust cost evidence base can help donors decide which types of interventions to prioritize in this regard. Once a donor becomes savvy at using cost evidence for their own internal decision-making, they may wish to support the standardization of tools and approaches for costing. These tools can then be integrated into project development at the RFP stage, for example by providing research teams with a set of standards, guidelines, and templates they must use for costing.

GiveWell, represented on our donor panel by Managing Director Buddy Shah, evaluates the cost-effectiveness of every charity it considers as a matter of routine practice. GiveWell credits cost evidence with helping them decide to invest $1 million in new CEGA research to estimate the long-term impacts of a school-based deworming program in Kenya. Given how much GiveWell plans to spend on deworming activities in the coming years, the increased certainty about expected cost-per-impact was well worth the cost of this additional research.

Ultimately, donors may wish to develop their own internal capacity to analyze cost data by training staff in cost-effectiveness analysis (CEA) or cost-benefit analysis (CBA). They may also seek out ways to integrate cost data collection tools and methods into program-level workflows.
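The two metrics named above differ in one key way: CEA expresses cost per unit of impact in the impact's natural units, while CBA monetizes both sides and reports a net figure. A minimal sketch, with hypothetical numbers:

```python
# Minimal sketches of the two analyses mentioned above.
# All input figures are hypothetical.

def cost_effectiveness_ratio(total_cost, units_of_impact):
    """CEA: cost per unit of impact (e.g., $ per additional child enrolled)."""
    return total_cost / units_of_impact

def net_benefit(monetized_benefits, total_cost):
    """CBA: benefits and costs both expressed in money terms."""
    return monetized_benefits - total_cost

print(cost_effectiveness_ratio(120_000, 400))  # 300.0 ($ per unit of impact)
print(net_benefit(150_000, 120_000))           # 30000 (program is net-positive)
```

In practice, both analyses also require choices about discounting, whose costs count (donor, government, participants), and how benefits are measured — exactly the methodological questions the strategies below aim to standardize.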

Strategy 3: Support targeted activities that generate public goods.

Rigorous CEAs and CBAs require specialized expertise and methods literacy to plan a costing study, collect and analyze data, and interpret results effectively. Anecdotally, most individuals develop costing expertise on the job, with mixed results. During our first Cost-ober panel, “Publishing Cost Evidence,” Rema Hanna (Professor of South-East Asia Studies and Chair of the International Development Area at the Harvard Kennedy School) explained that PhD econometrics classes rarely address costs. Rather, students spend their time learning about and thinking about how to measure benefits.

Targeted investments in the development of training and learning materials for costing can ultimately generate public goods and help with “field-building” if they are adapted for a wider audience of PhD students, researchers, policymakers, and professionals. In Spring 2020, CEGA developed and piloted a CEA/CBA training course for fellows enrolled in our East Africa Social Science Translation (EASST) capacity building program. Fellows are now paying it forward by integrating CEA/CBA into their own research plans, and into trainings they offer to researchers and policymakers in their home countries.

Strategy 4: Support systematic tools and methods for public use.

A more ambitious investment in the development of publicly available tools and methods for rigorous costing is needed to transform the way costing is practiced in global development research, and ultimately the way cost evidence is used for decision-making by policymakers in low- and middle-income countries. On our journal panel, Dean Karlan, President and Founder of IPA and Co-editor of the Journal of Development Economics, called for more work to improve the external validity of cost estimates and for systematic work on methods to help guide researchers on costs. He further noted that published cost estimates should give more careful consideration to context and transferability, and provide a more robust discussion of underlying inputs and assumptions.

The agenda for improving costing tools and methods and promoting their adoption and use by researchers already exists; similarly, CEGA has already established a dedicated community of experienced leaders in this space (including representatives from 3ie, J-PAL, IPA, IRC, Evidence Action, and SIEF at the World Bank) who are poised to contribute to this field-building effort. Still, there remains a considerable and urgent need for committed donors who can support the development, testing, and dissemination of these tools and methods.

There is good news: the cost paragraph — the one buried in the policy implications section of the published article — is about to face a much higher standard of rigor. With recent special issues on cost coming from Development Engineering and the Journal of Development Effectiveness, more attention is being paid to the costing activities embedded in individual research projects. Meanwhile, CEGA staff are working hard to develop new standards and reporting guidelines for costing, which we hope — with support from one or more committed donors — to pilot and scale through our Community of Practice.
