Donors Can Spend More Money Better with Cost Evidence
Charitable organizations are looking for ways to spend record levels of philanthropic donations. Cost evidence can make their money go farther.
CEGA Technology Program Manager Sam Fishman describes how CEGA’s Cost Transparency Initiative (CTI), which promotes the generation of more and better cost evidence and transparency of cost data, can identify better opportunities for social impact funding.
Despite a global pandemic and the recession that followed, 2020 saw a record level of charitable giving in the US: $471 billion, a 5.1% increase over 2019. For many philanthropic organizations, growing donations have been a pleasant surprise. However, growing budgets also put pressure on donors to find attractive investment opportunities quickly.
Many of these donors are looking to cost-effectiveness and cost-benefit analyses to decide how to allocate resources (organizations like GiveWell, IRC, USAID, the Gates Foundation, and GiveDirectly, among others, actively generate cost evidence). But decision-makers need high-quality cost and impact evidence to make well-informed decisions about the relative cost-effectiveness of alternative investments. While many organizations, including CEGA and our peer research centers IPA, J-PAL, and 3ie, have been working for over a decade to build the impact evidence base, good cost evidence remains scarce in impact evaluation research.
Recognizing that every donor has their own internal constraints (such as capacity to review evidence and identify worthwhile investment opportunities), CEGA believes that by significantly bolstering the evidence base on intervention costs, donors could much more quickly and easily identify cost-effective opportunities for investing burgeoning charitable contributions.
Through our Cost Transparency Initiative, CEGA promotes the generation of more and better cost evidence, and transparency of cost data, as one route to identifying better opportunities for social impact funding. We seek to make three key improvements to the cost evidence ecosystem: 1) expand cost evidence production into new sectors, 2) produce larger and more comparable cost datasets, and 3) diversify costing methods.
Filling this gap isn't just an academic research objective: it can change how philanthropies identify and choose investment opportunities.
Support Cost Evidence in New Sectors
Many of the most cost-effective interventions we know about are in the health sector. For example, five of GiveWell's top six recommended charities deliver health interventions (bed nets, malaria treatments, deworming, immunization incentives, and vitamin A supplements) where as little as $3,000-$4,500 can save a life. Beyond the particularly high value of saving a life, the bias toward global health can be attributed partly to the fact that health interventions are more likely to have produced cost-effectiveness data in the first place. The health sector has a few decades' head start over other sectors in rigorously measuring the cost-effectiveness of interventions, in part because major health institutions have effectively integrated cost evidence production into their decision-making processes. With the notable exception of education (where improvements in evaluation methods and metrics have also emerged), cost evidence in other sectors still lags significantly behind.
Health interventions will likely maintain advantages in cost-effectiveness due to the high stakes involved. However, if there were more cost-effectiveness evidence in other sectors, there might be more investment opportunities on the menu for donors with broad sectoral interests. For example, other highly cost-effective programs might emerge in information and communication technology, governance, agriculture, and elsewhere that could give donors more options in any given year to invest in scaling exciting programs.
Build Bigger Cost Datasets
One of the challenges donors and other decision-makers face is a lack of large, comparable datasets. Most of the cost data that exists in LMIC settings (typically pulled from program expenditures, monitoring and evaluation data, and institutional knowledge) is collected by implementing organizations using idiosyncratic finance reporting systems, making it difficult to compare similar programs implemented by different organizations. Many organizations also have an incentive to distort costs to highlight efficiencies. Fixing these issues and building comparable datasets is critical: cost-effectiveness analysis (costs and outcomes) and cost-efficiency analysis (costs and outputs) are both inherently comparative exercises, so the more data points we have and the more standardized the collection process, the more relevant insights we can derive.
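As a toy illustration of why these are comparative exercises, the two metrics can be sketched as follows. All program names and figures below are invented for illustration; they come from no real dataset.

```python
# Illustrative sketch with invented numbers: cost-efficiency divides
# cost by outputs delivered, while cost-effectiveness divides cost by
# outcomes achieved.

programs = [
    # name, total cost (USD), outputs (e.g. transfers delivered),
    # outcomes (e.g. households moved above a poverty line)
    {"name": "Program A", "cost": 500_000, "outputs": 10_000, "outcomes": 1_250},
    {"name": "Program B", "cost": 750_000, "outputs": 20_000, "outcomes": 1_500},
]

results = {}
for p in programs:
    results[p["name"]] = {
        "cost_per_output": p["cost"] / p["outputs"],    # cost-efficiency
        "cost_per_outcome": p["cost"] / p["outcomes"],  # cost-effectiveness
    }

for name, r in results.items():
    print(f"{name}: ${r['cost_per_output']:.2f} per output, "
          f"${r['cost_per_outcome']:.2f} per outcome")
```

Note that the two rankings can disagree: in this made-up example, Program B delivers outputs more cheaply, while Program A converts spending into outcomes more cheaply. That is exactly why comparable data on both costs and results is needed before drawing conclusions.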
One key way to address this challenge is to increase coordination in the production of cost evidence among the producers of impact evaluations. CEGA’s Costing Community of Practice (CCoP) is in part an effort to increase coordination between key producers of cost evidence in LMICs by generating regular discussions around comparative approaches to cost collection and analysis.
The rewards to larger cost-effectiveness and cost-efficiency datasets aren't only about offering more options to choose from. More data points for comparison can provide a more nuanced understanding of how cost-effectiveness or cost-efficiency changes at different levels of scale, over time, across contexts, and across design features. For example, during our Cost-ober 2021 workshop, the International Rescue Committee (IRC) shared a cost-efficiency dataset on cash transfer programs that was large enough for IRC to see how factors like delivery method, targeting, scale, design, and geographic context contributed to the programs' cost-efficiency.
Diversify Methodologies For Comparison
There is no one-size-fits-all approach to comparing the costs and impacts of alternative interventions. Donors can expand their investment options, and invest more effectively, by leveraging the most appropriate types of methods for comparison to support the specific decisions they are making. CEGA is working to expand this toolbox, and ensure that costing methods are better tailored to organizations with different interests, philosophies, revenues, and investment timelines. Two of the approaches we’re exploring, which were highlighted at our Cost-ober event in 2021, are described below.
Cash Benchmarking
A lot of development programs claim to be cost-effective, but cost-effective compared to what? Cash benchmarking asks: "If an intervention isn't at least as cost-effective as cash (which is easily distributed and impacts a wide range of outcomes), then why even bother investing in it?"
The cash benchmarking approach puts cost evidence into an intuitive, systematic framework for decision-making. While not appropriate in all settings, organizations that would genuinely consider cash as an alternative investment vehicle are probably best suited to using cash benchmarking for evidence-informed decision-making.
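The benchmark boils down to a simple comparison of benefit-cost ratios. The sketch below is hypothetical: the function name and every figure are invented, and real analyses rely on measured costs and rigorously evaluated impacts.

```python
# Hypothetical cash-benchmarking sketch. All figures are invented for
# illustration only.

def benefit_cost_ratio(total_benefit: float, total_cost: float) -> float:
    """Estimated social benefit generated per dollar spent."""
    return total_benefit / total_cost

# Benchmark: an unconditional cash transfer with the same total cost.
cash_bcr = benefit_cost_ratio(total_benefit=260_000, total_cost=200_000)
program_bcr = benefit_cost_ratio(total_benefit=340_000, total_cost=200_000)

# The benchmarking question: does the program beat simply giving cash?
clears_benchmark = program_bcr >= cash_bcr
print(f"Cash BCR: {cash_bcr:.2f}, program BCR: {program_bcr:.2f}, "
      f"clears cash benchmark: {clears_benchmark}")
```

If the program's ratio falls below the cash benchmark, the framework suggests the donor would do more good by distributing cash directly.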
Portfolio-level Cost Analysis
USAID has experimented with using Social Rate of Return (SROR) to evaluate the Development Innovation Ventures (DIV) portfolio. They found that four innovations drove the whole portfolio's success, producing $5 in social benefits for every dollar invested, for a 77% SROR across the portfolio. CEGA is now exploring a similar portfolio-level SROR approach with another donor interested in scalable innovations.
This approach hedges against the risk of failure by diversifying investments. It also helps donors compare within the portfolio to improve decision-making around investment choices and scale-up choices.
Expanding Value-for-Money Horizons for Philanthropy
We want to help organizations spend their excess resources as cost-effectively as possible. More high-quality cost evidence would expand the potential array of attractive investments for donors. It would also lay the groundwork for exploring programs with diverse time horizons (short- and long-term) and different outcomes (not just health). However, if donors and researchers keep measuring impact without attention to costs, they won't have the evidence they need to confidently increase their yearly budgets and get money out the door cost-effectively.