DRINKING THE POISONED WATER: Monitoring and Evaluation in today’s Afghanistan

Alcis · Published in Alcis Stories · Nov 24, 2015


USIP report on effective M&E in conflict-affected environments, November 2015

Over the last few years, conducting Monitoring and Evaluation (M&E) in Afghanistan has come to feel like drinking from a poisoned chalice. This was not always the case. Now, taking on an M&E assignment requires considering whether the donor truly wants to know the impact their program is having on the population. For the evaluator it is essential to clarify beforehand whether the donor will support a methodological approach which acknowledges that ‘lessons learned’ come as much from understanding what did not work as from what did, and whether they recognise, with the advent of new technology, that there are better ways to ascertain the impact of development efforts. Critically, donors must come to accept that the ubiquitous polling that has, thus far, helped (mis)shape the state-building project in conflict-affected countries like Afghanistan has left them blind to the unfolding crisis in the very areas where they looked to have the greatest impact.

Collective Madness

Afghans with the unenviable task of collecting data in rural areas have long suspected that donors have a strong preference for positive accounts of the impact of their programs. Project implementers and M&E specialists report significant challenges in ‘speaking truth to power’ and informing donors about the downsides of their programs, especially those considered ‘flagships’.

Armed with a proverb for any situation, Afghan colleagues recount the tale of a fictional former King and his administration’s response to learning that the country’s water supply was poisoned, turning the population ‘mad’ when they drank it. Realising that the population was crazed, the King approached his closest confidantes for advice. Eager to preserve the royal court, the advisers suggested building a reservoir of clean water for the exclusive use of the royal quarters. The King followed the advice, believing that everyone’s best interest lay in his administration being spared the madness that had swept over his people.

However, the next time the ‘sane’ King addressed his subjects they no longer understood him. He believed them to be mad, but they saw the King in exactly the same way. To the population the King made no sense whatsoever. They called for his removal and for a replacement to be found: a King who was intelligible and could truly represent their interests. Under threat, the King returned to his advisers: ‘The people are mad, they say they do not understand me, they are calling for me to be killed, and for a new King to be appointed! What should I do?’ His advisers pondered for a while, or so the story goes, before deciding that their best course of action was to join the fray: ‘we too should drink the poisoned water’.

Just give us the good news!

This tale reflects the madness that many in the field succumb to because of the bias for ‘good news stories’ among donors and the institutions responsible for the delivery of aid. Pressured by a relentless media in search of bad news and evidence of waste (the ‘gotcha moments’), the donor community sometimes seems to have retreated behind the intellectual equivalent of the blast walls that surround it in Kabul, actively avoiding the collection of data on what is really happening on the ground in rural Afghanistan for fear of being publicly lambasted and losing funding.

Indeed, there have been a number of occasions where donors have openly talked of the need to contain the bad news that they believe might be uncovered by an evaluation. One such occasion involved the review of an agricultural input distribution program. On learning that this program, which was known to have a number of significant problems, was to be evaluated, an official with the principal donor expressed exasperation that such a review had been commissioned at all: ‘What were they thinking of back at headquarters?!’ Realising that the evaluation was in progress and could not be stopped, this official expressed the hope that those commissioning the review had the foresight to draw up such tight Terms of Reference that none of the problems would be exposed. He exclaimed: ‘After all, if you don’t want to know what underwear your mother wears don’t look in her chest of drawers!’

Most recently, the independent review component of a major development program was terminated early because the impact assessment data generated at the inception phase challenged the veracity of the information produced by those responsible for project implementation, including the recipient government’s own M&E unit. Rather than confront the aid recipient’s management unit about its inflated accounts of program outputs and impact, the donor simply chose to shut down the independent M&E component and commissioned an alternative review. The subsequent review employed surveyors who conducted fieldwork alongside government staff and the NGO responsible for implementing the project, collecting attitudinal data on what farmers thought of the program. It argued for the expansion of the program, in stark contrast to the previous review, which had used multi-date, high resolution satellite imagery, GPS photography and detailed research to reveal that many of the original development investments had deteriorated significantly, that some had been abandoned altogether, and that the impact on the wider economy was negligible.

A methodology that pleases

Methodology is a large part of the problem. Attitudinal surveys dominate performance measurement and M&E in Afghanistan at both the program and strategic levels, and have become the ubiquitous measure of ‘progress’ for the international state-building project.

These surveys are typically subject to significant bias. The results are skewed by large gaps in the population data and by insecurity, which together mean that the communities interviewed are more likely to be pro-government and pro-assistance. The methodological approach also relies on enumerators collecting data on what Afghans ‘think’ of the assistance rather than identifying what they do with it, or measuring how it subsequently changes their lives, for better or for worse.

Central Helmand; red dots show locations from the settlement data set most widely used for surveys and polling, and black polygons show the locations of Alcis settlements (10 or more compounds in close proximity). Yellow dashed areas show the greatest mismatch.

Furthermore, these kinds of large-scale surveys present respondents with closed questions so as to make the tabulation of data easier, and pursue lines of inquiry derived from what donors want to know, not necessarily the experiences and challenges that Afghan communities wish to talk about, the very substance that makes for more effective evaluation processes. The subjectivity bias inherent in the current M&E approach means we are at best measuring the perceptions of aid recipients in secure areas. More often than not we are simply collating what Afghans in areas with strong linkages to the state, who have been the primary recipients of development assistance, think we want to hear.
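To illustrate the scale of the problem, consider a simple, entirely hypothetical calculation of how over-sampling secure, government-linked districts inflates measured approval of a program. The population shares, sample shares and approval rates below are invented for the sketch and are not drawn from any real survey.

```python
# A hypothetical illustration of coverage bias: secure, government-linked
# districts are over-sampled relative to their share of the population.
# All figures are invented for the sketch.
population_share = {"secure": 0.35, "insecure": 0.65}   # share of the rural population
sample_share     = {"secure": 0.85, "insecure": 0.15}   # share of interviews actually conducted
approval         = {"secure": 0.75, "insecure": 0.30}   # approval of the program in each area

true_approval   = sum(population_share[k] * approval[k] for k in approval)
survey_approval = sum(sample_share[k] * approval[k] for k in approval)

print(f"Population-weighted approval: {true_approval:.0%}")    # roughly 46%
print(f"Survey-weighted approval:     {survey_approval:.0%}")  # roughly 68%
```

With these invented numbers, the survey reports approval of around 68 percent while the population-weighted figure is closer to 46 percent; the gap is produced entirely by who could be reached, not by what people actually think.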

And now that development assistance is big business, numerous commercial companies are well aware of the donors’ need for ‘good news stories’ and are willing to provide the data to support such favourable claims, which in turn keeps the money flowing. Some of these businesses are also implementing development programs themselves and are therefore subject to their own vested interests. Armed with rafts of data from these attitudinal surveys and backed by claims of ‘representative samples’ of program ‘beneficiaries’, these companies provide messages that are the development equivalent of cat food adverts asserting that ‘8 out of 10 cats prefer it’.

Unfortunately, the combination of this desire for ‘good news’ and commercial entities keen to peddle the party line leaves us partially sighted when it comes to the impact development programs are having on the populations of countries like Afghanistan; and, more importantly, blind when it comes to knowing how to improve programs in Afghanistan, and other conflict-affected states, in the future.

Improving accountability as well as knowledge

Yet, in this era of new technology and affordable high resolution imagery, there are ways to find out what has become of the development assistance provided and whether it has contributed to improving lives and livelihoods in both secure and insecure areas. As a recent report published by USIP shows, it is now possible to determine, for example, whether the saplings that have been distributed are bearing fruit or have been cut down; whether the irrigation works paid for have facilitated the cultivation of a wide range of horticultural crops or are surrounded by opium poppy; and whether the greenhouses established are being used as intended or have been abandoned altogether. Combined with well-focused fieldwork, this approach can tell us not only what happened to development monies, but also why there were successful outcomes in some areas and failures in others.
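As a purely illustrative sketch, and not the method used in the USIP report, the kind of change detection involved can be approximated by comparing two co-registered vegetation index rasters derived from high resolution imagery of the same area on different dates. The file names and the loss threshold below are assumptions made for the example.

```python
import rasterio

# Hypothetical, co-registered single-band NDVI rasters derived from
# high resolution imagery of the same area on two dates.
with rasterio.open("ndvi_2010.tif") as early, rasterio.open("ndvi_2014.tif") as late:
    ndvi_early = early.read(1).astype("float32")
    ndvi_late = late.read(1).astype("float32")

# Negative change indicates vegetation decline (e.g. orchards cut down or abandoned).
change = ndvi_late - ndvi_early
lost = change < -0.2   # hypothetical threshold for marked vegetation loss

print(f"Share of pixels with marked vegetation loss: {lost.mean():.1%}")
```

A flag raised by imagery of this kind is a prompt for targeted fieldwork, not a conclusion in itself; the value lies in directing scarce field resources to the places where the physical record and the reported results diverge.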

Example of imagery analysis showing changes in land allocated to orchards in Nangarhar, 2007 through 2010 to 2014

Imagery analysis is clearly not a panacea. High resolution imagery cannot be used to measure all development outcomes: there are complex social outcomes, such as shifts in political participation, attitudes to the state and conflict resolution, that do not produce direct physical traces. However, the integration of Geographic Information Systems (GIS) offers a valuable contribution to the portfolio of M&E strategies for conflict-affected states. When combined with field research, it certainly offers something more than polling and the kind of generic data (the image of an ‘average’ Afghan, Kandahari, Nangarhari, or Wardaki) that has so beleaguered our understanding of the complex and dynamic political topography of countries like Afghanistan.

In particular, locating data within the geographical space where it is collected generates a deeper knowledge of the areas where donors are funding programs, gives greater insight into the different population groups that reside there, and compels donors to challenge the assumptions that underlie their investments, especially when supported by imagery that shows how their monies are being used and what effect they have had on the ground.
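As a brief, hypothetical sketch of what locating data in geographical space can look like in practice, the snippet below ties GPS-located interviews to imagery-derived settlement polygons so that responses can be compared against what the imagery shows for the same place. The file names are assumptions, not real datasets.

```python
import geopandas as gpd

# Hypothetical inputs: imagery-derived settlement polygons and GPS-located interviews.
settlements = gpd.read_file("settlement_polygons.shp")
responses = gpd.read_file("survey_points.shp")

# Attach each interview to the settlement polygon that contains it.
located = gpd.sjoin(responses, settlements, how="left", predicate="within")

# Interviews that fall outside any mapped settlement may point to gaps in the
# sampling frame rather than genuine coverage of the population.
print("Interviews outside mapped settlements:", located["index_right"].isna().sum())
```

Even a simple join of this kind makes it harder to report an ‘average’ respondent, because every answer carries with it the specific place, and therefore the specific conditions, in which it was given.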

It is certainly clear that we have to move beyond current methods of M&E that look to exclude anything that does not have a positive spin. We also need those conducting M&E — many of whom proclaim independence — to have the courage to both collect and convey data that their clients may not always wish to hear. To continue to acquiesce and drink the poisoned water remains a sure route to madness as well as a waste of scarce development funds — both are outcomes that we all should wish to avoid.

Expansion of agriculture north of the Bogra canal, Helmand, 2002 to 2012

David Mansfield

Visiting Scholar, Columbia University, and author of ‘A State Built on Sand: How Opium Undermined Afghanistan’
