Why development organisations should heed lessons from India’s cash-for-cobras crisis

James Wilkinson
The Challenges Group
5 min read · Feb 14, 2019

Challenges has almost 20 years of good practice to draw upon, giving us a clear understanding of what works, what doesn't, and why it's so important to measure impact with a critical eye. By James Wilkinson of Challenges Zambia

The story might be apocryphal but its message is no less true. Facing a growing cobra infestation in Delhi, the then British colonial government came up with a cunning plan: to offer a bounty for killed or captured cobras. At first it worked well. But it didn't take long for local entrepreneurs to start breeding cobras and exchanging them for cash. When British officials found out, they halted the cash-for-cobras scheme, prompting the snake-breeders to release their now-worthless cobras onto the streets, leaving Delhi with more cobras than it had started with.

What had seemed like a great idea at the time had made the problem far worse than it had been originally. Good intentions had backfired.

The Cobra Effect — potentially apocryphal but very powerful

This is known as the "Cobra Effect", a term used in psychology, politics and economics that perfectly illustrates how an ill-thought-out solution can do more harm than good. This law of unintended consequences is something we've seen too often in international aid and development, and it is why careful measurement is so important to us at The Challenges Group.

Within academia and the social sciences, the Cobra Effect is a striking example of Campbell's Law, defined by the American social scientist Donald T. Campbell, who wrote:

“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

Campbell applied this law across a variety of fields, including education. He argued that if you taught children for test scores alone, you risked distorting the educational process into something less valuable. Critically, he also applied it to programme evaluation.

We know that governments, multilaterals and development agencies need to measure and evaluate the projects and interventions they design and implement, and rightly so. However, what they measure and how they evaluate can have big implications, such as:

  • Focusing on improving one metric (e.g. job creation) to the detriment of another (e.g. gender parity).
  • Encouraging us to deliver on outputs (e.g. number of training sessions delivered) rather than outcomes (e.g. purchasing power for smallholder farmers).

Through this kind of "local optimisation", these approaches can harm the overall system or market in which we are intervening. In short, by trying to benefit one part of a system, we may fail to improve the system itself and may in fact cause it to operate less effectively.

More and more organisations now talk about how they are "results-based" or "outcomes-based". This is welcome as a principle: we encourage people and businesses to be aware of their own impact. However, at Challenges we think the development sector must go further to avoid breeding its own proverbial "cobras". The outcomes targeted must be part of a holistic approach to improving a system, be that a sector of the economy (e.g. agriculture) or a local area.

We suggest monitoring "unintended consequences" as well as intended outcomes. Currently, this is simply not the norm. Research from Radboud University estimated that USAID, the world's largest development organisation, evaluates unintended consequences for only 15% of its programmes. We'd be surprised if USAID were an outlier in this regard. This presents a significant risk that development programmes may create new problems that nobody has planned for, or simply achieve the reverse of what they intended. Although some impacts and results are very hard to predict, at Challenges we believe more can be done.

Campbell himself believed that the problem of evaluation was addressable, and that the examples he cited could have been implemented better. Below are a few suggestions from Challenges on how we can measure effectively in the development sector:

  • Be less linear — long-term programmes designed with little opportunity for adjustment can tie us to bad measurements. A more iterative approach, such as Lean Startup or Lean Impact, can shift the focus to continuous improvement instead.
  • Embrace the complexity — rather than search for a non-existent "silver bullet", we should accept that an intervention may have unintended consequences in one or more areas, and discuss how to test our hypotheses as early as possible.
  • Diversify your measurements — use both qualitative and quantitative data to measure and evaluate, rather than seeking a single metric, and budget the time and effort this requires when designing the project.
  • Welcome independent analysis — engage external organisations and partners to ensure you have not reached biased conclusions.
  • Remember the context — understand the constraints on data quality in the local context and account for them in your qualitative and quantitative analysis.

Build, Measure, Learn — an approach to innovation that can also be applied in evaluation

Finally, given our shared goals, why don't we make life easier for us all and share our findings with each other, successful or otherwise? If a project failed to get off the ground, was abandoned or never scaled, wouldn't it be helpful for others to know why? If it succeeded, how was it implemented? How was this measured? This is something we are committed to sharing here on our Challenges publishing portal.

The Department for International Development (DFID) set an important precedent here with the impact evaluation of the Millennium Villages Project in northern Ghana, which gave aid to 33 villages. After this evaluation, the decision was made to stop the project so that funding could be allocated to more impactful projects. DFID said: "It is only through thorough evaluation that we can ensure aid works," adding that "work must continue to evaluate, measure and improve the value of projects we fund".

So, yes, we need to measure and evaluate. But let's be pragmatic about it, thinking carefully about the mechanics of how this works and how people are engaged to implement best practice.

In our next article we’ll outline a few examples of organisational best practice to help achieve this.

Please email us at Challenges if you want to discuss monitoring and evaluation further, and how your business could benefit. Or continue the conversation below or on our social channels: Facebook, Twitter and LinkedIn.

