Kieran Hammond
Published in beneficiary.io
Nov 24, 2017

Beneficiary.io: A New Measurement Approach for Grant-makers

Vinod Rajasekaran suggested that social missions should be as inspiring as space missions. We believe that social missions should also be executed like space missions: calculated, time-conscious and data-driven. Foundations should be like space stations: able to get live data updates from their grantees, chart progress over time and provide support from afar. This is the ‘moonshot’ idea that Nick Smith and I are working on at Entrepreneur First, a London-based technology accelerator that helps ambitious individuals build startups from scratch.

Imagine trying to keep track of the impact being made by hundreds of different projects at once, each one tackling complex, deeply interconnected issues. Now imagine trying to track that impact when the reports that you receive are of varying quality and veracity. You have imagined your way into the world of a grant manager; we believe that providing these actors with better tools could unlock a tremendous amount of impact.

There has also been some scepticism about whether the incentive structures of foundations are conducive to impact maximisation. However, there are several reasons why we think that focusing on foundations is a compelling way to make a significant impact. Foundations allocate a staggering amount of funding, so even equipping a small number with better impact measurement tools could result in a considerable amount of money being redirected to more effective interventions. Foundations might have rigid theories of change and areas of focus, but within those parameters, they do seem to want to measure and improve their impact. In a New Philanthropy Capital (NPC) report, Funding Impact, 88% of funders said that they thought impact measurement made charities and foundations more effective.

There is also a significant trickle-down effect created by starting at the funding source.

NPC’s Making an Impact report showed that charities often only start down the impact measurement road under pressure from funders — even though, when they do, they find that it helps them improve what they do and therefore help more beneficiaries.

Our Initial Ideas

After speaking to over 50 major actors in the grant-making space, we have started to identify some commonly shared problems. We’ve chosen to concentrate on building a product that could help foundations measure the impact of their grants in a more accurate, reliable and cost-effective way. We are particularly interested in making randomised controlled trials available to more foundations by reducing the cost of execution through automation.

Here are some of our observations and initial ideas:

Methodology

Methods used by foundations to evaluate impact typically aren’t very rigorous or scientific. Variables aren’t controlled, so even if an improvement is observed, it’s hard to ascribe it to a particular grant or to understand why the intervention is making a difference. Knowing these things would be valuable to foundations giving large grants (100k+) that want to know whether to fund an intervention again.

Some of the biggest and most impact-driven foundations, such as the Gates Foundation, use randomised controlled trials (RCTs) to measure their impact. RCTs are seen by many as the “gold standard” as they make it possible to isolate the effect of a programme from complicating factors, even those that are unseen. In the words of Guido Imbens, “Randomized experiments do occupy a special place in the hierarchy of evidence, namely at the very top.”

Proponents such as the John Arnold Foundation have suggested that RCTs have wide applicability and could be used an order of magnitude more often. Our hypothesis is that high costs are a barrier to wider adoption among foundations: RCTs are notoriously expensive, costing up to $1,500,000. The question Nick and I are asking is whether it would be possible to make RCTs more cost-effective and widely accessible to foundations that don’t have as many resources. We are particularly interested in how we can use technology to automate large parts of the process.

We hope to streamline the process of running an RCT, focusing specifically on its most costly and time-consuming aspects: recruitment, data collection and data analysis. For foundations inexperienced in running RCTs, we hope to offer a guided process for investigating interventions — likely with pre-populated variables for specific cause areas and with predefined statistical analyses run without the need for statistical knowledge. For more experienced experimenters, our tool will be customisable, allowing them to select different statistical methods and define more intricate intervention types, such as multi-factorial designs. In short, we hope to preserve academic rigour in the execution of our tool.
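To make this concrete, here is a minimal sketch, in Python, of the kind of predefined analysis we have in mind for a simple two-arm trial. The function name and the simulated outcome data are purely illustrative; a real implementation would also need to handle attrition, covariates and multiple outcome measures.

```python
# A minimal sketch of a predefined analysis for a two-arm trial.
# Assumes one numeric outcome per participant in each arm.
import numpy as np
from scipy import stats

def analyse_two_arm_trial(treatment_outcomes, control_outcomes, alpha=0.05):
    """Difference in means with Welch's t-test, specified before the trial runs."""
    t = np.asarray(treatment_outcomes, dtype=float)
    c = np.asarray(control_outcomes, dtype=float)

    effect = t.mean() - c.mean()
    # Welch's t-test does not assume equal variances across the two arms.
    _, p_value = stats.ttest_ind(t, c, equal_var=False)

    # Approximate confidence interval from the standard error of the difference.
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    z = stats.norm.ppf(1 - alpha / 2)
    return {
        "effect": effect,
        "confidence_interval": (effect - z * se, effect + z * se),
        "p_value": p_value,
        "significant_at_alpha": p_value < alpha,
    }

# Illustrative use: simulated scores for 200 treated and 200 control beneficiaries.
rng = np.random.default_rng(0)
print(analyse_two_arm_trial(rng.normal(62, 10, 200), rng.normal(58, 10, 200)))
```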

A further question we are exploring is whether quasi-experimental RCTs are still an idea worth pursuing if they could be implemented at scale and at a cost that would enable foundations to access them. Are there differences of a factor of 10 between interventions in any given cause area? If so, even a tool that provides an admittedly approximate impact assessment would presumably be quite valuable.

Truthful Reporting

Charities are asked by foundations to provide impact reports after they have received funding. The problem is that non-profits are incentivised to share only the most flattering subsets of data and to bury the less complimentary information. Charities naturally want to justify the faith that foundations have placed in them, and they may feel that truthful reporting of their failures would damage their chances of future funding. There is also just a natural human urge to remember what works and focus on positive impacts.

Without information from failures, foundations can’t learn about where best to allocate their funding or share that learning more widely. They can’t intervene and provide additional support where needed. It’s a problem that foundations themselves recognise: nearly all funders (92%; NPC) believe that charities should be encouraged to report negative results.

By creating a feedback loop directly from the beneficiaries, our platform will be able to retrieve reliable data on impact and make it available to foundations. Charities will also benefit, as they will be able to listen and respond to the voices of the people they are serving. Organisations will then be able to publish their data as interactive dashboards and insightful charts, which will enable them to attract donors, engage stakeholders and tell beautiful visual stories.

Measuring Impact Over Time

Over the course of our conversations, it became clear that many grantmakers are frustrated by the rigid and seemingly arbitrary timeframes by which they have to manage grants. It is not uncommon for foundations to ask for impact reports at a set interval (usually every six months), which inevitably works better for some grants than others. Sometimes it is clear very quickly that a grant needs additional assistance or that the programme needs to alter its course. In other cases, there is a slower burn, and judging the programme on its progress after just a few months does it a disservice. We believe that real-time data on spending and impact could enable grantmakers to become more responsive: able to intervene quickly and make strategic adjustments. This is what we are hoping to achieve by automating the retrieval of information from quasi-experimental RCTs.

Grant managers often have considerable experience and expertise. Rather than acting as judges, they should be empowered as collaborators. Provided with accurate, live information on the progress of programmes, grantmakers would be able to support grantees with intelligent, timely input.

Over time, this data could be aggregated and used to provide insight into how different types of intervention make progress over time. This might enable grantmakers to move away from a one-size-fits-all approach whereby every programme, regardless of its nature, gets funding for the same amount of time. The shared data could instead inform tailored recommendations on the duration of prospective grants.

Shared Knowledge

One thing that we have thought about a lot is what we could do with the data set that would be created from these impact assessments. Happily, there is promising evidence of a burgeoning open data movement in the philanthropic sector. 360 Giving is an initiative set up by the Big Lottery Fund and the Indigo Trust to encourage foundations to publish their grants data in an open, standardised way. The strongly positive response it has received suggests that a large number of foundations are open to the opportunities presented by shared data. Geoff Mulgan of Nesta has predicted that data will transform how the bigger funders work in the future.

Foundations are not competing with one another, so there is no fundamental reason why they shouldn’t share knowledge. There are many examples of foundations allocating funding in the same cause area with similar theories of change, and in these contexts sharing information could be genuinely useful. For instance, the Gatsby Foundation has aims and theories around scientific research and discovery similar to those of the Wellcome Trust. The Ford Foundation, the Ontario Trillium Foundation and Foundation Scotland all have Youth Opportunities Funds. In the words of David Bonbright: “society solves tough problems when we collectively learn how to solve them.”

Foundations have operated in the same way for 600 years, with funding decisions being made by a few key people. We believe that future grant-making will be informed by crowdsourced knowledge. Shared data would enable foundations to make predictive analyses of impact, de-risk decisions by tapping into collective wisdom, and identify opportunities for collaboration.

The foundations that we have spoken to have expressed some legitimate concerns about the reputational risk posed by sharing data: what if the information was accessed by the media? Would charities become concerned about the sharing of negative assessments? Would some foundations risk criticism from others by lifting the lid on their processes?

It has become clear to us that we need to find a way of sharing data from impact assessments in an anonymised way, so that there is no way to trace it back to the foundation doing the assessing or the charity being assessed. We have been thinking about whether it would be possible to aggregate the data and enable foundations to view the ‘wisdom of the crowds’ rather than the details of particular analyses.
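As a rough illustration of what we mean, the sketch below (in Python, with hypothetical field names) aggregates assessments by cause area and only releases an average once enough distinct foundations have contributed, so that no single result can be traced back to a foundation or a charity. The threshold of five contributors is an assumption for illustration, not a settled design decision.

```python
# A sketch of 'wisdom of the crowds' aggregation: a cause area's average
# effect is released only when enough distinct foundations contribute,
# so individual assessments stay untraceable.
from collections import defaultdict

MIN_CONTRIBUTORS = 5  # illustrative threshold; the real value is an open question

def aggregate_assessments(assessments):
    """assessments: iterable of dicts such as
    {"cause_area": "literacy", "foundation_id": "f-001", "effect_size": 0.3}
    (field names are hypothetical)."""
    effects = defaultdict(list)
    contributors = defaultdict(set)
    for a in assessments:
        effects[a["cause_area"]].append(a["effect_size"])
        contributors[a["cause_area"]].add(a["foundation_id"])

    released = {}
    for area, values in effects.items():
        # Suppress small cells rather than risk re-identification.
        if len(contributors[area]) >= MIN_CONTRIBUTORS:
            released[area] = {
                "mean_effect": sum(values) / len(values),
                "n_assessments": len(values),
            }
    return released
```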

Our Concerns

Market Forces

For commercial businesses, customers and profits provide an incentive to improve performance, a way to measure it, and a carrot and stick to guide decisions. Foundations do not have these same market forces acting on them. They could do a terrible job and still face no shortage of ‘customers’ wanting funding the following year. Is there enough pressure for them to want to improve their processes? Where does this pressure come from and how can it be leveraged?

Feedback Loop

The feedback loop for foundations is quite poor. It’s easy for grantmakers to fall into the trap of thinking that they are doing a great job, measuring impact accurately and making the best funding decisions. They will never know whether they could have made more impact if they had allocated funding differently. They will never know if the information on impact they have derived from charities is accurate. This often results in a misplaced faith in existing processes, which may make grant-makers reluctant to change them.

Reductionism

We are concerned that foundations will consider a tool for quantitatively measuring impact reductionist (cf. Lankelly Chase Foundation), especially if respondents are asked only a select few questions or are placed in the position of having to give Fermi-estimate-style responses. Can this be overcome by presenting the platform as one tool to be used among others in a more holistic measurement process?

Ethicality

The Education Writers Association provides a typical example of this critique, suggesting that RCTs in education are “unethical” because “if researchers really believe an intervention will improve learning, for example, withholding it from a control group of students could be seen as unjustified.” We don’t agree with this criticism, but we suspect it will be a view commonly and tenaciously held by foundations. Is there any way we can get around this? Perhaps by conducting the trials differently, or doing a staged roll-out, as sketched below?
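One possible staged design, sometimes called a stepped-wedge roll-out, gives the intervention to everyone eventually while randomising the order in which groups receive it. A minimal sketch in Python follows; the wave count and participant IDs are illustrative.

```python
# A sketch of a staged ('stepped-wedge') roll-out: every participant
# eventually receives the intervention, and the randomised order of the
# waves still gives a valid early-vs-late comparison along the way.
import random

def assign_waves(participant_ids, n_waves=3, seed=42):
    """Randomly split participants into sequential roll-out waves."""
    rng = random.Random(seed)  # fixed seed so the assignment is auditable
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Wave 1 starts first; later waves serve as controls until their turn.
    return [ids[i::n_waves] for i in range(n_waves)]

for i, wave in enumerate(assign_waves(range(12)), start=1):
    print(f"Wave {i}: {sorted(wave)}")
```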

Predictive Data

There could be some issues with using data from the past to make predictions about the impact that a grant could make in the future; what was impactful last year might become irrelevant quite quickly in fast-moving fields.

Would there be a chance that predictive data would stifle innovation? Innovative programs have no track record and so would be likely to come up short in any analysis.

There has also been much debate about whether RCTs are too context dependent to offer much predictive insight into whether certain interventions work. William Easterly provides a typical example of this critique; he suggests that RCTs are “limited to small questions”.

Questions for the Foundation Community

Do you think that there is any value in providing less rigorous, quasi-scientific RCTs, if it results in wider adoption among foundations?

What do you think we would lose by executing RCTs through technology? What are the most significant obstacles?

What is the most effective method for getting participants for RCTs? How can we incentivise them?

Work with Us

I hope this post will elicit several kinds of response:

  • From organisations who are doing amazing work in impact or have been using RCTs — to share your work and your learnings with us and the rest of the community. Which tools have been effective and which haven’t?
  • From academics who are familiar with this space, to advise us on some of the complex moral, economic and behavioural questions that we are trying to navigate. We are also actively seeking an official academic advisor.
  • From anyone involved in foundations — to give us the opportunity to talk to you about your experiences in the sector. We are also actively seeking organisations to do trials with, so that we can validate our thinking.
  • From innovators and practitioners, intermediaries and brokers, with suggestions as to how experiments might be done, good examples to copy or open source tools to adapt.

We would also be really interested to hear from anyone who could provide us with some thoughts on our ideas and constructive feedback. Please feel free to leave comments or send me an email at kieran.hammond20@gmail.co.uk.
