A New Public Tool for Planning Cost-Effectiveness Research

Rigorous research on cost is needed to generate reliable and transparent estimates of cost-effectiveness — but a high quality cost study has to be planned prospectively.

The Center for Effective Global Action
CEGA
5 min read · Mar 28, 2024


CEGA Program Scientist Liz Brown launches a new Costing Pre-analysis Planning template to foster collaboration between cost and impact evaluation research. This work was developed by CEGA’s Cost Transparency Initiative and we welcome your feedback.

A young woman sells tomatoes in a local market using her mobile phone to process the transaction. | Credit: Confidence Nzewi via Dreamstime.com

Just one in five impact evaluations includes any type of cost evidence (Brown & Tanner, 2019), which means that decision makers frequently lack the information needed to compare cost per impact per beneficiary across studies. CEGA’s Cost Transparency Initiative has developed a new tool to help researchers generate rigorous estimates of the cost-effectiveness of an intervention, improve confidence in their findings, and provide actionable costing insights for policymakers. The publicly accessible Costing Pre-analysis Planning template aligns an evaluation’s impact estimates with its final cost estimates, producing a rigorous and transparent estimate of cost per impact per beneficiary at the close of the study.

CEGA developed and field-tested our cost pre-analysis planning tool on five cash-benchmarking impact evaluations completed in 2023. Throughout this engagement, we encountered wariness among academic research economists about the reliability and reproducibility of cost estimates, as well as uncertainty about the underlying need for cost “research.” CEGA now routinely completes Costing Pre-analysis Planning templates before conducting cost research, with costing pre-analysis plans in place on three active impact evaluations: Lishe Bora, Titukulane, and Takunda.

The Costing Pre-analysis Planning template is designed using LaTeX so that it can either efficiently integrate with the pre-analysis plan (PAP) of an impact evaluation or serve as a standalone draft of a costing report that is fit for publication. The tool includes a structured outline and prompts to motivate the cost analysis, define the primary and secondary cost research questions, and identify the cost-relevant features of the intervention design and delivery.
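To make the structure concrete, a skeleton of the kind of LaTeX outline such a template might provide could look like the following. The section names here are illustrative stand-ins based on the prompts described above, not the template’s actual headings:

```latex
% Illustrative sketch only -- section names are hypothetical,
% not the headings of CEGA's actual template.
\documentclass{article}
\begin{document}

\section{Motivation for the Cost Analysis}
\section{Primary and Secondary Cost Research Questions}
\section{Cost-Relevant Features of Intervention Design and Delivery}
\section{Cost Inclusion and Exclusion Criteria}
\section{Data Sources and Planned Empirical Analyses}

\end{document}
```

Because the skeleton is plain LaTeX, its sections can be merged into an impact evaluation’s PAP or compiled on their own as a standalone costing report draft.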

Cost pre-analysis plans document cost study designs and transparently share data, methods, and planned empirical analyses. As with PAPs for impact evaluations, costing PAPs minimize the risk of “cherry picking” specifications, such as narrowly defining cost inclusion criteria so that they do not reflect the full cost of delivery or produce only the most flattering cost-efficiency metrics. The tool also improves collaboration around the use of sensitive, highly detailed expenditure data and increases the chances of building better partnerships with decision makers around their cost research questions.

“Cost Is The Easiest Part” — A Common Misconception

It may seem straightforward to calculate the cost-effectiveness of an anti-poverty program: simply divide the total budget of the evaluated program by the number of households served, and compare it to the benefits per household. But there are several drawbacks to this approach:

  • Expenditures in accountancy do not capture all relevant costs. There can be large deviations between budgets and actual expenditures, both in the total cost and in how resources are allocated across activities and over time. Actual expenditures are generally the gold standard of cost data, and the accountancy of the primary implementing partner(s) is a key data source for this exercise. But that accountancy has all the hallmarks of administrative data, with well-known advantages and limitations. We find that careful review of the chart of accounts, interviews, and supplementary data are needed to catch mis-classified expenses, disaggregate “catch-all” cost categories, and identify incompleteness.
  • The development intervention and the development program may not be identical. Careful identification of treatment activities is needed to align the costing with the intervention that is the subject of the impact evaluation and the specific beneficiary population(s) of the evaluation. For example, our costing of a complex, large scale, bundled program of activities during a five-year intervention targets a subset of activities within the more ‘complete’ program. This well-established program has its own theory of change and targeting criteria. Our work isolates the costs of the intervention activities apart from the broader program.
  • Different research questions require different cost data. It is not always evident which costs should be included in a cost evaluation. The criteria for inclusion or exclusion will depend largely on the research question at hand. For example, disentangling the cost of bundled intervention activities requires the identification of separate inputs for each component of the intervention. In this case, using the entire program budget as a proxy for cost would bias cost per beneficiary estimates by incorporating costs that are irrelevant to the research questions of concern to decision makers.
  • Cost should match the estimand. To be valid, the estimate of cost per beneficiary should match the estimand that is the focus of the impact evaluation. Attention to the cost implications of sample attrition and non-compliance is needed to improve the accuracy of cost per beneficiary estimates.
  • Implementation fidelity and costing accuracy. The real-world challenges of implementing a randomized evaluation complicate its costing. For example, a common assumption is that control groups incur no cost; this assumption fails when there is contamination between treatment and control groups. It is important to proactively plan the costing and track the intervention’s implementation fidelity so that any data challenges that affect the accuracy of the costing can be identified quickly.
  • Variation in cost per beneficiary. A simple estimate of budget per beneficiary ignores important variation in cost per beneficiary: beneficiaries who are more remote or more disadvantaged may be costlier to reach. It also obscures how specific activities contribute to cost. A pre-analysis plan can help lay out how to identify these differences.
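The gap between the naive calculation and an estimand-matched one can be made concrete with a toy example. Every number below is hypothetical and exists only to illustrate the kinds of adjustments a costing pre-analysis plan would prompt a team to specify in advance:

```python
# Toy illustration (all figures hypothetical) of why a naive
# budget-per-beneficiary figure can diverge from a cost estimate
# matched to the impact evaluation's estimand.

total_budget = 1_000_000        # full program budget (USD)
households_targeted = 10_000    # planned beneficiary households

# Naive approach: divide the whole budget by the planned reach.
naive_cost_per_hh = total_budget / households_targeted  # 100.0

# Adjustments a costing pre-analysis plan would specify up front:
unrelated_activities = 250_000  # spending on program activities outside
                                # the evaluated intervention
control_contamination = 30_000  # delivery costs incurred in control areas
households_treated = 6_000      # households actually reached, after
                                # non-compliance and attrition

# Keep only the cost of the evaluated intervention, add costs leaking
# into the control group, and divide by households actually treated.
intervention_cost = total_budget - unrelated_activities + control_contamination
adjusted_cost_per_hh = intervention_cost / households_treated  # 130.0

print(f"Naive:    ${naive_cost_per_hh:.2f} per household")
print(f"Adjusted: ${adjusted_cost_per_hh:.2f} per household")
```

The point is not the specific arithmetic but that each adjustment — which costs count, whose delivery they cover, and who the denominator includes — is a choice best documented before the data arrive.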

A completed costing pre-analysis plan template explains what the costing involves, how it fits with the impact evaluation research, and what data is needed to make it operational. The structure of this tool facilitates coordination not only within evaluation research teams but also with implementing partners and donors.

Leaders need cost evidence to support the scale-up of effective interventions. For researchers to provide these data and insights, the field needs to develop and disseminate more open and transparent high quality tools for costing. We hope this new tool accelerates these efforts. Please join us by getting involved in the Costing Community of Practice.


CEGA is a hub for research on global development, innovating for positive social change.