Ask anyone in this business: the aid industry in general, and donors in particular, have a very low tolerance for risk. Donors will always award the majority of their money to low-risk programs implemented by solid, low-risk partners. Innovation, particularly innovation involving technology, is considered high risk.
In fact, don’t ask anyone — just look at a typical donor’s portfolio. Innovation — usually folded in with technology — is treated as a separate topic, to be funded through its own very small mechanism and to deliver against “innovation” goals. Innovation for the sake of innovation, basically:
Funding decisions, management and evaluation are all done in isolation from everything else and the whole thing is anyway a rounding error to the juggernaut of development.
I know people will challenge this. They will say this has changed and that these days innovation has been “vertically integrated” and that modern funding mechanisms actually require some sort of innovation. That’s kind of true, actually. Alas it doesn’t really change things because “vertical integration” in practice means that large implementation consortia bring on board token “innovation partners” to pass muster. That ensures that everything remains as it always was — the majority of the resources go to the same old implementers who implement the same old activities and churn out the same old reports. The “innovation” or “technology” partners are involved as far as it is required to provide a few trainings or roll out some technology or another — usually an app:
Well, I believe such arrangements are far from “low risk”. In fact, I believe this is the single highest risk our industry faces at the moment. Why? Because essentially — and this has been the case for long decades — large funding mechanisms are highly complexified platforms that purchase highly elaborate reports written around the same topics. “Innovation” in this context is nothing more than a commoditized service to be plug-and-played to standard consortia.
Around this reality the proverbial Poverty Inc. has thrived, ever eager to project safety and low risk. The low perceived risks in this arrangement are mostly because there are no surprises — spending pipelines are consistent, there are no changes of plans, reports come in on time. However, the actual risks — the ones that should really matter — are plenty:
- The risk of reinforcing (rewarding) bad or unproven habits in our industry. Trainings, technical assistance, capacity building, those sorts of things. I argued elsewhere that this is not an accident. The fact that these activities are favoured over other, potentially more impactful ones has everything to do with predictability, ease of forecasting expenses and, indeed, low perceived risk.
- The risk of low/ no impact. Trainings, communication campaigns, capacity building, technical assistance. No-one knows for sure what the impact of all these activities actually is, never mind the exact relationship between cost and results. Yet they are part of the aid orthodoxy and no-one really wants to be bothered to challenge them. No donor agency’s staff are willing to pick up this battle within their own rigid structures, and certainly no incumbent aid implementer would do it: the most successful of them are well geared towards earning and managing awards by these rules, and the risks to them are basically non-existent.
- The risk of missed opportunities. Including opportunities to learn. Every copycat project that gets funded means a potentially impactful/ insightful one that won’t be implemented.
- The risk of losing touch. In a day and age when real-time data systems are taken for granted in most other industries, in our industry there is a delay of months, even years, between an activity and its results being presented within the framework of that one project (never mind a broader context). This makes it impossible to optimize investments — by even the lowest available standards — and it gives implementers time to rationalize their failures. Don’t believe me? What is the last project you heard about that got defunded for failure to deliver results? What was the last thoughtful, intelligent analysis of a failed project you read, written by the implementers? When was the last time a capacity building project actually triggered a conversation about what works and what doesn’t?
- The risk of becoming (even more) detached from the target audiences/ clients. Even as people in the communities targeted by these projects change fundamentally and continuously — in how they consume information, but also in how they relate to the larger world around them — large projects stubbornly ignore these changes. The gap is very big as it is, and it grows daily. Here is a cliché example: social media truly took off globally between 2010 and 2012. Across the African continent, social media penetration has grown by an average of 10–17% every year since (source: http://wearesocial.com/). At the time, new projects about engaging youth were starting up that had been put together before social media was even a thing. These projects are only now slowly finishing, and many of them were completed as planned, which means their strategies were either oblivious to social media or, best case scenario, treated it as an afterthought.
- Operational risks: Traditional field operations are often plagued by inefficiencies, political meddling (in country), dodgy evaluation practices and fraud. This is simply the nature of field operations, not a failure of implementers as such. However, making implementation less vulnerable to these realities through technology and process innovation would actually be straightforward: just by tracking money/ stocks/ activities in real time and automating repeat processes, most of these risks are significantly reduced. This is also standard in other industries, so it is proven and low risk. Additional cost cutting would come in handy too, as whole back-office departments become redundant and more resources can be channelled towards actual implementation.
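Real-time tracking of this kind really is simple machinery. As a minimal sketch — the function name, the example figures and the three-sigma threshold are my own illustration, not any implementer’s actual system — flagging anomalous daily spend against the running history takes a few lines:

```python
from statistics import mean, stdev

def flag_anomalies(daily_spend, threshold=3.0):
    """Return indices of days whose spend deviates more than
    `threshold` standard deviations from the spend history so far.
    This is the kind of automated check other industries take
    for granted."""
    flagged = []
    for i in range(2, len(daily_spend)):
        history = daily_spend[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(daily_spend[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A sudden spike on day 5 stands out against a stable pipeline:
print(flag_anomalies([100, 102, 98, 101, 99, 500]))  # → [5]
```

A real system would of course watch stocks and activities too, but the point stands: the detection logic is commodity code, not a research problem.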
Sadly, there are virtually no incentives for industry incumbents to challenge these realities. The universal expectation of “low risk” approaches means that they can gear themselves towards winning and managing large awards without too much pressure to fundamentally modernize. By promoting highly predictable but hard-to-evaluate practices they insulate themselves from the volatility of a fast-changing world. Even better, the fact that innovation arrangements are designed and executed in isolation also plays to their advantage: if an “innovation” component succeeds, it is to their credit. If it fails, they simply throw the “innovation partner” under the bus and keep going without the slightest need to reflect or change. Ironically, this reinforces the fallacy that innovation is risky.
Ok. So what can be done about it?
Here at Triggerise we are playing a long game. While we occasionally act as innovation partners on carefully selected awards, we mostly try to promote a radically different approach. Our idea is to allow donors/ investors to actually “purchase” impact rather than reports. This is essentially the role of tiko — our rewards platform for positive behaviour. People in target communities earn tiko for exercising a positive behaviour (vaccinating a child, sending a girl to school, using renewable energy, etc.). They can spend them in the local market as they please — think air miles for positive behaviour. This not only makes positive behaviour aspirational, it also feeds the cash-starved local economy, creating real growth opportunities locally.
Additionally, since all data around tiko earning and spending is real-time, we can learn very quickly about what works and what doesn’t, and how exactly investments convert into behaviour in every individual case. This allows us to estimate the “cost” of such behaviour fairly accurately and make these costs available to investors/ donors.
In effect, a donor could purchase desired behaviour simply by underwriting part of this economy. Results would become visible instantly, and the donor could evaluate the impact of their investment day by day and increase or decrease it accordingly. On the ground, our people would be 100% focused on achieving results rather than manning back offices, managing donor relationships and/ or writing long reports.
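The cost-per-behaviour calculation described above can be sketched very simply. Everything here is hypothetical — the event structure, the tiko amounts and the flat 15% overhead rate are my own illustration, not Triggerise’s actual data model:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class EarnEvent:
    # One verified positive behaviour, rewarded with tiko credits.
    behaviour: str       # e.g. "child_vaccinated", "girl_in_school"
    tiko_awarded: float  # credits issued, redeemable in the local market

def cost_per_behaviour(events, overhead_rate=0.15):
    """Estimate the donor cost of each behaviour type: credits
    underwritten plus a flat overhead rate (hypothetical 15% for
    platform and verification costs)."""
    totals = defaultdict(lambda: {"count": 0, "tiko": 0.0})
    for e in events:
        totals[e.behaviour]["count"] += 1
        totals[e.behaviour]["tiko"] += e.tiko_awarded
    return {
        b: round(t["tiko"] * (1 + overhead_rate) / t["count"], 2)
        for b, t in totals.items()
    }

events = [
    EarnEvent("child_vaccinated", 2.0),
    EarnEvent("child_vaccinated", 2.0),
    EarnEvent("girl_in_school", 5.0),
]
print(cost_per_behaviour(events))
# → {'child_vaccinated': 2.3, 'girl_in_school': 5.75}
```

Because every earn event arrives in real time, a running version of this calculation is what lets a donor watch their cost per behaviour move day by day instead of waiting for an annual report.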
You must agree this approach is significantly less risky than the traditional one. Or am I wrong?
(A slightly modified version of this post also appeared on LinkedIn.)