Want to make Aid more Effective? Demand Impact Visibility over anything else

Transparency and visibility in development are a big deal. And so they should be. Roughly USD 150 billion is spent on aid and development projects globally every year. Most of this is public money, taxpayers' hard-earned cash, and much of it flows into the budgets of precious few very large incumbents.

Here's an example: 50% of the United States Agency for International Development's (USAID) awards by amount in 2015 went to 5 organizations (in fact they went to 4, as one of the 5 recipients is essentially a venture of one of the others).

Given the sheer size of these awards, those 5 recipients, as well as any others, had better be damn transparent about spending that money. And so should anyone else who spends even a single dollar of aid money, for that matter.

Transparency?

But what does that mean? What is "transparency" at the end of the day? How much of the current effort at transparency provides useful insight into what is actually going on? And what should we, the taxpayers who fund these organizations or the donors who decide to fund them rather than others, be looking for?

As things stand, transparency usually means organizations making high-level financial data public, so that the public can see spending categories: things like "Staff", "Commodities", "Operations", "Overhead". From these we go ahead and define a bunch of metrics, such as operations-to-staff cost ratios (embraced enthusiastically even by some governments, which make such ratios compulsory and deem any organization with different ratios "illegal") or overhead-to-total-budget ratios.

Unfortunately, these indicators do not tell the whole story. In fact they hardly tell any story. Not only are they easy to game by the organizations themselves, they also limit necessary long-term investments for these organizations, increasing the costs of delivering aid in the long term. They also favor tried-and-tested project structures that benefit incumbents and hamper innovation, reducing the overall effectiveness of this whole industry.

Think about it: we are judging the performance of an organization whose objective is to address poverty or a disease by arbitrary and irrelevant indicators such as the size of its overhead or the percentage of its total budget spent on (invested in?) things like administration. This is nonsense. It tells us nothing relevant about the organization in question, and it perpetuates bad accounting, bad project design and bad organizational habits.

Eventually it distracts from the only question that matters: What exactly has been achieved with that money?

This would be like investors in a private company studying only the quality and price of the furniture in its offices, without worrying about its product or, you know, whether the company is making a profit.

Aid spending and this business of delivering impact is unfathomably complex. It can't follow a prescribed recipe. It takes risks, it takes mistakes, and it desperately needs a bit of innovation.

Outcomes themselves (impact?) should be more important than inputs (overheads?). And in this industry — as in any other — well run organizations are more likely to deliver good outcomes than badly run organizations. And a well-run organization needs to make investments in staff and business tools that may look expensive initially but pay dividends over time.

And those 5 (or were they 4?) organizations mentioned above? Most of them are awarded funding in areas as diverse as health, infrastructure, agriculture, technology, education and who knows what else. They may or may not be good at delivering outcomes in these diverse areas (who can know?), but you can bet your bottom dollar they are really, really good at hitting those staff-to-operations spending ratios.

Anyway — most people in this industry understand this stuff and there have been smart people out there advocating for change for a long time. There are also more and more enlightened donors that make their funding available unrestricted, investing in outputs, promising ideas and competent teams. We need more of them.

But you know what? I also need to defend the incumbents and the traditional donors. Measuring impact has traditionally been hard and time-consuming. At the end of the day, donors need to decide which projects and organizations to fund, and these cost ratios are as good a shortcut as any for making those decisions: a superficial proxy for good governance, as it were. Everyone wants to minimize waste and cover their backs.

Enter Technology

But these days things are different. My organization, for example, measures impact in real time. We verify impact behavior as it happens within our networks, say the moment a pregnant woman attends an antenatal consultation, and track these Verified Impact Behaviours (VIBs) in real time. Technology makes this possible, and the approach gives us very high visibility into the outputs of our work (more instances of verified impact behavior, more impact).

We can also customize (or allow a donor to customize) VIBs by configuring variables such as target demographic, location, type of behavior, frequency or order of events, and so on.
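To make the idea concrete, here is a minimal sketch of what a donor-configurable VIB filter and tally could look like. All names here (`VIBConfig`, `ImpactEvent`, `count_vibs`, the example field values) are hypothetical illustrations, not the actual system described in the article, and rules such as frequency or event ordering are omitted for brevity.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class VIBConfig:
    """A donor-configurable definition of one Verified Impact Behaviour."""
    behavior_type: str   # e.g. "antenatal_consultation"
    location: str        # e.g. "district:kilifi"
    demographic: str     # e.g. "pregnant_women"

@dataclass(frozen=True)
class ImpactEvent:
    """One verified behavior reported from the field."""
    behavior_type: str
    location: str
    demographic: str
    verified_at: datetime

def matches(event: ImpactEvent, config: VIBConfig) -> bool:
    """True if this verified event counts toward the configured VIB."""
    return (event.behavior_type == config.behavior_type
            and event.location == config.location
            and event.demographic == config.demographic)

def count_vibs(events, config):
    """The running tally a real-time dashboard could display."""
    return sum(1 for e in events if matches(e, config))
```

A donor could then define a VIB once and watch the count move as verified events stream in, without ever touching an overhead ratio:

```python
config = VIBConfig("antenatal_consultation", "district:kilifi", "pregnant_women")
events = [
    ImpactEvent("antenatal_consultation", "district:kilifi",
                "pregnant_women", datetime(2016, 1, 5)),
    ImpactEvent("vaccination", "district:kilifi",
                "children_under_5", datetime(2016, 1, 6)),
]
count_vibs(events, config)  # counts only the matching event
```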

This is the sort of transparency that makes sense. Transparency of impact. Of output. Of what that invested money is actually buying. Demanding impact visibility would allow a donor to focus purely on output (and define steep targets if so inclined), then sit back and watch impact happen, or not, on a dashboard, while implementers decide how best to spend and invest their resources to maximize impact. The donor can stop worrying about (and spending valuable resources on) calculating and auditing overhead ratios, and instead simply increase or decrease their funding (investment) according to results.

This would also level the playing field: smaller, nimbler, more innovative implementers would have a chance to roll out innovation without having to play politics with the incumbents.

Sure, I get it. Not everything can be measured that easily. And you know what? Maybe for some types of projects you do need to work with incumbents and use less reliable indicators to estimate outputs. But for most projects, today's technology makes it possible, if not outright easy, to measure real outputs in real time.

It is time donors and organizations demanded, and expected, exactly this.
