Who Wants Digital to Do What in Development Impact Bonds?

Alexis de Brouchoven
Published in Frontier Tech Hub · Nov 24, 2022


By Tom Rintoul

FCDO, and DfID before it, have long been leaders in using payment-by-results in international development.

But as discussed previously, this requires robust monitoring and verification of the results to be paid for. For example, on an employment project, payment might require proof that a person:

  • Was eligible for the service;
  • Received it;
  • Went on to get a job within a certain date range;
  • Retained the job for a certain length of time;
  • Got paid above a threshold salary throughout that period.

The monitoring and verification processes required to deliver that gold standard of proof are often cumbersome. Some providers and investors report that impact delivered on their projects cannot be billed for, that there are unacceptable delays between an outcome being delivered and being paid for, and that the cost of all this draws funds away from service delivery.

So FCDO have been exploring how new digital technologies might help deliver the high standards of evidence which we are becoming used to, without so many of the drawbacks.

As part of that, they recently ran an open call for Development Impact Bond projects to put forward their own ideas for using digital, with one to receive a grant to pilot its idea.

Six projects applied, and here’s some of what we learned from interviewing them:

1. The Challenge of Verifying Results Can Be Huge

One project estimated that 40% of the jobs (their key paid outcome) they achieve are never paid for because they can’t collect the necessary evidence.

Two cited delays from the (job entry) outcome being achieved to getting paid, because verification takes so long.

One claimed errors by the verifier in 5–10% of cases.

That’s not to say that we know whether any of these could be addressed by technology, but the first step in solving a problem is understanding the pain points.

2. Results Verification Looks Like a Job for Simple Tech

In many cases it appeared that the systems being used to manage collection of evidence, to share it with verifiers, and to track how long verifiers took to do their jobs, were deeply analogue.

Leaving aside anything cutting edge, one hypothesis we surfaced is that a low-code platform with simple case management, file sharing, and process automation functionality could be a huge time saver, whilst giving the parties to an outcomes-based contract visibility of how well verifiers are performing. We’ll be on the lookout for an opportunity to set one up, and to share how to repeat it.

‘Verification’ generally means verifying that the correct document has been provided, and that it is authentic. A few people suggested full automation, generally referencing machine learning as a technology which might deliver this. Can we teach an algorithm to check these documents?

But despite recent improvements in the cost and reliability of this technology, initial scoping suggests that the prospects are not good in most DIBs. This is because they are typically very small programmes at the moment, each producing a small corpus of verified and rejected results, and there is often a huge variety of documentation which could theoretically be acceptable. Not a great position from which to train an algorithm to do the job.

3. Beneficiary Identification is Also Key

Providers need to verify that an outcome has only been claimed once in respect of a specific person, and that the person is a member of a group who are eligible for payment. Understandably, funders often want proof that an intervention is reaching more marginalised groups.
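As an illustrative sketch only, those two checks — one payment per person, and membership of an eligible group — might look like this (all names hypothetical; exact-match keys like this break down where identification itself is hard, as the next paragraph discusses):

```python
import hashlib

def person_key(name: str, dob: str) -> str:
    """Pseudonymous key so claims can be de-duplicated without
    circulating raw personal data (illustrative only -- a real
    system would need salting, governance, and fuzzy matching)."""
    return hashlib.sha256(f"{name.lower()}|{dob}".encode()).hexdigest()

def check_claim(name, dob, cohort, paid_keys, eligible_cohorts):
    key = person_key(name, dob)
    if key in paid_keys:
        return "rejected: already claimed"
    if cohort not in eligible_cohorts:
        return "rejected: not in eligible group"
    paid_keys.add(key)
    return "accepted"

paid = set()
eligible = {"youth", "refugee"}
print(check_claim("Amina K", "2001-07-04", "youth", paid, eligible))  # accepted
print(check_claim("Amina K", "2001-07-04", "youth", paid, eligible))  # rejected: already claimed
```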

But identification can be difficult. Like the UK, many countries eschew mandatory identity documents. Services which are free at the point of use have little reason to issue unique ID numbers. In many countries people do not have a single form of their name, whilst in others mobile phone numbers are often shared and changed.

One of the ways that Triggerise, a verification technology partner, has been tackling this issue is through face recognition and voice ID. It will be fascinating to see what benefits those bring to other programmes, and how they navigate the technical, ethical, and service design challenges of implementing them.

In other countries, governments and service providers (notably banks) were already investing in digital ID programmes which may solve the same problem.

4. Few Organisations Can Audit One Another’s Technology, Even When Relying on It for Outcomes Verification

Where technology is being used to handle monitoring and verification, verifying that the tech itself measures what it claims to measure becomes a technical task in its own right.

This changes the skillset which funders need in order to audit a project, and evidence from early adopters is that funders may not be ready.

We’d be interested to hear from people around the sector on the same issues:

  • What are the problems you experience with monitoring and verification?
  • What ideas do you have about technologies which could be useful?
  • What has been tried and failed? Why?

Please get in touch here

Image courtesy of Unsplash
