How to Remove Middlemen in Impact Delivery: Distributed Impact Verification
The weakest link in the wider impact industry (which includes aid/development as well as impact investing) is the lack of any reliable impact verification.
Aid delivery is a black box. No one knows exactly what the outputs of any activity are, whether and when impact happens, how the activities implemented correlate with impact, or how impact could be optimized.
Here are some of the direct consequences of this lack of verification:
- A whole industry has evolved to capture a significant slice of the precious resources available for impact. It trades in mostly spurious, process-based data and creates a false perception of visibility/transparency. Even worse, this industry feeds the whole space with false insights, dodgy assumptions and a terrible signal-to-noise ratio;
- The typical impact partnership is some form of centralized trust mediation. Donors prefer to fund large incumbents who reassure them that money is being spent well. These large incumbents are geared for donor management, so they team up with smaller incumbents who are more operational, and so on, until most of the resources are gone and frustratingly little value reaches the target communities;
- Setting up any funding/investment mechanism takes a long time, and only a small segment of donors/investors can influence implementation design or afford to invest in an implementation that will only be evaluated 5–6 years down the line. By the time evaluation actually happens, all resources have been spent, so any operational insights that come out of it remain theoretical. Evaluations are also aggregate efforts that cannot pin down specific events or the relevant correlations/attributions between inputs and outputs.
This problem of reliable verification must be solved, and it can be solved. But only if we forget everything we know about traditional monitoring and evaluation. To solve it, we must abandon centralized principles and commit to some form of distributed consensus. Here is how:
More signal, less noise
The first step is to stop collecting human-generated data: badly designed NGO forms filled in by overworked, underpaid people who have better things to do; government forms robo-filled the day before collection; boxes full of paper awaiting entry into some 90s-era static database. Data like this is the main source of noise in this industry, and a huge cost factor. Before we do anything else, we must stop the noise.
Then, we should start looking for event-generated data, where the signal is stronger. This is easier today than it was yesterday and you can bet it will be easier tomorrow than it is today.
The highest-quality signal comes from machines & sensors. Smart sensors in solar panels & meters. Sensors in smartphones and smart devices of all sorts. Satellite images. This sort of data is very useful and its application is obvious. But most impact events happen in environments where machine-generated data is simply not possible today. A child goes to school. Someone vaccinates a baby. Someone switches to sustainable agricultural practices. Someone protects a piece of mangrove forest. Someone recycles.
These sorts of events carry immediate impact, but they happen at high frequency in very diverse, remote locations all over the world. When such activities get funded traditionally, any assessment of whether they happened, and at what scale, is done through theoretical models built around inputs from “trusted” third parties (such as community-based organizations and/or independent evaluators). The results are never fast, and often biased.
Let’s take one example: say an investor/donor is funding a program aimed at educating girls. This program involves building schools, hiring & training teachers, and ensuring that girls actually attend school.
In the current model, an organization (trust mediator/gatekeeper) is selected to manage this project. They receive all the funding & hire suppliers to build the school, train the teachers & design the curriculum. They fund outreach into the community and ensure forms are filled in by school staff to certify attendance, etc. Every traditional development program is some version of this. No one knows whether any other approach would yield better results, and in rural areas of the global south, underused, collapsing schools built decades ago by programs just like this one are not unheard of.
What if we took the cost of these programs (total money invested by donors/investors) and, rather than treating it as the Cost of Delivery, treated it as an imperfect Valuation of Impact? I.e. the donor/investor community values the potential outcome of that project at a certain amount of EUR at that point in time.
Now, starting with this valuation, we can bet on the economic assumption that there is a correlation between education and wealth. If this assumption is true, the future valuation of this impact should be higher than the present valuation (= profit).
Now: what would happen if, instead of funding intermediaries (contractors, suppliers etc.), we passed 100% of this value directly into that community, conditional on verifying some combination of pre-agreed impact events (X girls attending school over the next 5 years)?
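To make the arithmetic concrete, here is a minimal sketch of what such a conditional release could look like. Everything in it (the escrow class, the per-event pricing rule, the amounts) is an illustrative assumption, not part of any real funding mechanism:

```python
# Hypothetical sketch: program cost treated as an impact valuation,
# released per verified impact event instead of paid to intermediaries.
from dataclasses import dataclass

@dataclass
class ImpactEscrow:
    total_valuation_eur: float   # donor/investor valuation of the outcome
    target_events: int           # e.g. girl-school-years to be verified
    released_eur: float = 0.0

    @property
    def price_per_event(self) -> float:
        # Simplest possible pricing rule: valuation split evenly per event.
        return self.total_valuation_eur / self.target_events

    def release_for(self, verified_events: int) -> float:
        """Release funds for newly verified events, capped at the valuation."""
        payout = min(verified_events * self.price_per_event,
                     self.total_valuation_eur - self.released_eur)
        self.released_eur += payout
        return payout

# Example: EUR 500,000 valued against 1,000 verified girl-school-years.
escrow = ImpactEscrow(total_valuation_eur=500_000, target_events=1_000)
payout = escrow.release_for(120)  # 120 verified events release EUR 60,000
```

The point of the sketch is the inversion: the full valuation is earmarked for the community from day one, and verification, not intermediary invoicing, is what moves the money.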
Proof of Impact
There is probably more than one way to do this, but as an example, here is one way we could verify impact on a distributed network:
Members of that community with smartphones act as verification nodes. This includes girls in the community, parents and teachers. They are prompted inside an app to take pictures of their environment at random intervals. Teachers take pictures of their class. Students take pictures of their classmates & teachers. Every picture taken has a GPS location & a timestamp. The app used is nothing more than a token wallet with a verification feature. People in the community value the token (which is essentially cash).
This gets a bit technical, but bear with me: pictures are uploaded to a global distributed network that could include millions of people, each acting as a node. All these people have a token wallet on their phone where they store & trade value, make payments etc. They have opted in to act as verification nodes. Users answer simple questions about the pictures (“Do these students look engaged?”, “How many girls do you see in this picture?”). The more powerful nodes can run AI/ML scripts that evaluate patterns in the pictures. Eventually, a consensus on impact is achieved, which unlocks payment for the community & a small verification fee for the verifiers (who act as miners on this network). It all gets written to a block. This process can be optimized along proof-of-stake principles and/or by adding machine-generated data where available.
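The consensus loop described above can be sketched as a toy simulation. The vote threshold, payment amounts and node names below are invented for illustration; a real network would also need sybil resistance, staking and dispute handling on top of this:

```python
# Toy simulation (all names and thresholds are illustrative assumptions):
# wallet-holding nodes vote on evidence, and a supermajority consensus
# unlocks a community payment plus a fee for the agreeing verifiers.
from collections import Counter

def verify_event(votes, consensus_threshold=0.66):
    """votes: {node_id: answer}. Returns (consensus answer or None, agreeing nodes)."""
    tally = Counter(votes.values())
    answer, count = tally.most_common(1)[0]
    if count / len(votes) >= consensus_threshold:
        agreeing = [n for n, a in votes.items() if a == answer]
        return answer, agreeing
    return None, []

def settle(consensus, agreeing, community_payment=100.0, fee_pool=5.0):
    """If consensus confirms the impact, pay the community and split the fee pool."""
    if consensus != "impact_confirmed":
        return {}
    payouts = {"community_wallet": community_payment}
    for node in agreeing:
        payouts[node] = fee_pool / len(agreeing)
    return payouts

votes = {"node_a": "impact_confirmed", "node_b": "impact_confirmed",
         "node_c": "impact_confirmed", "node_d": "unclear"}
consensus, agreeing = verify_event(votes)   # 3 of 4 nodes agree
payouts = settle(consensus, agreeing)
```

Here three of four nodes agree, clearing the 66% threshold, so the community payment is unlocked and the fee pool is split among the three agreeing verifiers.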
Obviously, each type of event that needs to be verified can be designed/customized differently, along the same principles: distributed verification, facilitated by transactions. Sometimes verification is easier (a signal from a sensor), other times it is a bit more complicated, but overall a lot can be verified this way. And it’s worth trying: billions in costs would be converted to value. Every year.
There is more
The verification itself becomes an ingredient for optimizing existing products and/or unlocking new categories of products. With reliable verification, it would be possible to structure insurance without an insurance agency: premiums get charged in tokens & locked in smart contracts. If the insured event happens & is verified, the smart contract activates and payment happens automatically.
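As a sketch of that idea (purely illustrative; a real version would live on-chain as an actual smart contract, and every name here is hypothetical), such a parametric policy could look like:

```python
# Hypothetical sketch of insurance without an insurer: the premium is locked
# in a contract-like object, and a verified event triggers automatic payout.
class ParametricPolicy:
    def __init__(self, insured_wallet, premium_tokens, payout_tokens):
        self.insured_wallet = insured_wallet
        self.locked = premium_tokens      # premium locked in the "contract"
        self.payout_tokens = payout_tokens
        self.settled = False

    def on_verified_event(self, event):
        """Called by the verification network; pays out once, automatically."""
        if event == "crop_failure_confirmed" and not self.settled:
            self.settled = True
            return {self.insured_wallet: self.payout_tokens}
        return {}

policy = ParametricPolicy("farmer_wallet", premium_tokens=10, payout_tokens=200)
first_payout = policy.on_verified_event("crop_failure_confirmed")
```

No claims department, no adjuster: the same distributed verification that unlocks impact payments is what triggers the payout.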
The fact that verified impact unlocks a payment instantly means that the perceived value of impact in that community increases (i.e. it becomes aspirational & desirable). Conversely, anyone, anywhere in the world can fund units of impact (basically investing in tokens) and perhaps make a return on them, eventually making impact tokens a regular part of their investment portfolio.
Boom. Impact becomes a liquid store of value.