“Do no harm” is an inadequate moral compass — here’s a more radical alternative

Arbie Baguios
Published in Aid Re-imagined
8 min read · Jul 13, 2020
The Hippocratic Oath (Source: Health Times)

One idea that drives contemporary environmental justice movements is the right of future generations. This idea recognises that what we do for the sake of “development” today — for example, manufacture products or invent technologies — has an environmental impact and can lead to adverse outcomes in the future.

The idea’s main message is this: our actions right now — even if they have altruistic motives — could have detrimental and potentially irreversible effects on the people of the future. Therefore, such an action, even if undertaken for the benefit of the present population, may end up doing more harm than good in the final calculation.

This is something we seldom think about in development and humanitarian work — our impact on future generations.

Aid-as-we-know-it has a narrow — and at its worst, arrogant — understanding of impact. We make our decisions based on their supposed impact; but unfortunately, we assess that impact using lamentably inadequate tools. The logframe is one tool that has long been chastised for its linear view of the world: it asserts that if you do X, then Y should follow — even if, in our non-linear reality, it rarely ever does.

But there is one other tool we use to justify our actions and decisions that almost never gets scrutinised: the concept of “do no harm.”

The origin of the phrase “do no harm” (DNH) is debated — most attribute it to the Hippocratic Oath, which includes the phrase, “to abstain from doing harm”; although a detailed investigation in 2013 suggests it came from the physician Thomas Sydenham, also known as “the English Hippocrates,” to whom the phrase was attributed in a book published in 1860. DNH has since been a core principle in medicine.

DNH found its way to the humanitarian sector in 1999 via the work of Mary B. Anderson, who exposed how aid, despite its good intentions, can exacerbate conflict. Today DNH is used in both development and humanitarian contexts.

In making high-stakes decisions (for example, in the justice system or in medicine), the decision-makers are morally obligated to demonstrate procedural justice. And DNH has, in many ways, become the moral procedural framework — that is, the moral compass — used by aid workers in making decisions.

But I believe DNH as it is currently understood is flawed.

It has two faulty assumptions: firstly, it assumes that an action will (and should) be taken; and secondly, it assumes that any unintended negative consequences can reliably be mitigated.

In a one-of-a-kind evaluation of DNH, the consultancy F3E looked at the ways one aid organisation, Humanity & Inclusion (H&I), applied the concept. For H&I, DNH is reflected through their charter which states, “In carrying out our actions we are determined to do no harm.” The report also found that H&I staff — and, I suspect, most in the sector, too — understand DNH as “taking a step back…to think before they act.”

The first assumption is obvious here: that taking action is a given.

Indeed the report seems to accept a dominant norm in our sector: that to “do nothing” is not an option. According to Anderson, the originator of DNH in humanitarian aid, to do nothing is “morally unacceptable”; according to LSE professor David Keen, doing nothing “results in a great deal of harm.”

The scholar Fiona Terry seems to have a more tempered approach: in her view, “it is impossible not to ‘do harm’, so we need to accept it and focus on minimising negative effects.”

But this framing reveals the second assumption: that minimising harm can be guaranteed.

Most donors see harm this way, too: as something to be minimised. Proposal and report templates often ask, “What negative unintended consequences might arise as a result of your action? And how do you plan to mitigate them?”

Aid agencies are rarely asked, “But what if your mitigations do not work?”

Such a line of thinking (as argued by the likes of Anderson, Keen and Terry, who wrote their books based on aid work from the ’80s to the early ’00s) may have been acceptable back when humanitarianism was a relatively nascent industry — at a time of cowboy humanitarians from the Global North, many of them in their early 20s, descending into conflict zones; back when they didn’t know any better about the full impact of their actions.

But today we do know better.

Rigorous evaluations of foreign intervention over the years are now conclusively showing that development and humanitarian work is just as likely to do harm as good. Upon synthesising the available evidence, Harvard economist Nathan Nunn concludes:

“We may have our largest and most positive effects on alleviating global poverty if we focus on restraining ourselves from actively harming less-developed countries rather than focusing our efforts on fixing them.”

Here’s an even more specific example: a paper published in 2020 in a top economics journal shows how an NGO’s health intervention in poor rural communities resulted in the decline of government-provided health services, which further led to “an increase in infant mortality.”

In light of this fact, to say that “‘doing nothing’ is morally unacceptable” does not hold water. It just reeks of white saviourism.

“Do no harm,” the moral compass of aid-as-we-know-it, must be revised.

No wonder the F3E report states: “The review identified a shortage of methods and tools to help people make strategic decisions on DNH.”

To fix this, I humbly propose a new moral procedural framework: we can call it choosing to act carefully (CTAC).

Unlike DNH, “choosing to act carefully” does not assume that taking action is a given, nor that mitigating unintended consequences is guaranteed. It proposes three steps that demonstrate greater diligence in decision-making than simply “doing no harm”:

  1. Precaution
  2. Preparation
  3. Pragmatism

Precaution means applying what is known in public policy as the “precautionary principle.” There are varying degrees of strength and ways in which it can be applied (strong vs weak; from “non-preclusion” to “prohibitory”). There isn’t yet an exact prescription on how to use this within the aid sector, but perhaps we could learn from the environmental justice movement. The Earth Charter, written by an international body of environmental activists and endorsed by the UN, states:

“When knowledge is limited apply a precautionary approach…Place the burden of proof on those who argue that a proposed activity will not cause significant harm, and make the responsible parties liable for environmental harm.”

Imagine aid agencies being liable for the inadvertent harms they may cause. How could that change their (our) decision-making calculus?

Preparation is shorthand for preparing to minimise the harm and maximise care. In decision theory there is a concept called minimax, which means minimising the loss in a worst-case scenario. Preparing to minimise the harm and maximise care can be understood as an ethical minimax. It offers a more precise course of action — minimising harm especially in the worst-case scenario — than just a general directive to “mitigate unintended consequences.”

By shining a light on the potential worst (not just negative) outcomes, it forces aid agencies to answer the question, “But what if our mitigations do not work?”
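
To make the ethical minimax concrete, here is a minimal sketch in Python. Every action, scenario and harm score in it is invented purely for illustration; the only point is the decision rule itself: compare options by their worst-case harm, and pick the one whose worst case is least bad.

```python
# A minimal sketch of an "ethical minimax" decision rule.
# All actions, scenarios and harm scores below are hypothetical illustrations.

candidate_actions = {
    # action: estimated harm under each scenario (higher = worse)
    "deploy a surge team immediately": {"mitigations work": 1, "mitigations fail": 9},
    "support existing local responders": {"mitigations work": 2, "mitigations fail": 4},
    "do nothing": {"mitigations work": 5, "mitigations fail": 5},
}

def worst_case_harm(harms_by_scenario: dict) -> int:
    """Return the harm an action would cause in its worst-case scenario."""
    return max(harms_by_scenario.values())

# Minimax: choose the action whose worst-case (maximum) harm is smallest.
best_action = min(candidate_actions, key=lambda a: worst_case_harm(candidate_actions[a]))

for action, harms in candidate_actions.items():
    print(f"{action}: worst-case harm = {worst_case_harm(harms)}")
print(f"Ethical minimax choice: {best_action}")
```

Under these made-up numbers the rule prefers supporting local responders, because its worst case is less bad than either leaping into action or standing back — which is precisely what the question “but what if our mitigations do not work?” forces into view.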

Pragmatism is, of course, acknowledging that, yes, sometimes Anderson and Keen and Terry are right: doing nothing could cause harm. Pragmatism weighs the potential harms primarily against two factors: the degree/urgency of the need; and the constraints of the context. This recognises that precaution may not always be the right choice in the face of great suffering; and preparation is only as good as time, resources and other contextual constraints permit.

Crucially, pragmatism comes as the third and final step of CTAC. This emphasises the “care” before the “choosing to act.” It especially moderates the prevailing humanitarian culture that thrives on the seemingly macho attitude of leaping into action, with its “rapid responses” and “surge teams” — which feel like remnants of its cowboy past.

I would argue “choosing to act carefully” — through its three steps of precaution, preparation and pragmatism — offers a better moral compass for decision-making in aid than “do no harm.”

(CTAC is part of the Aid Re-imagined model — to find out more, you may read the full working paper here.)

How then could we operationalise CTAC?

In medicine, decisions are informed by a measure known as Quality Adjusted Life Years or QALYs.

A QALY is derived using the formula: length of life x quality of life. The quality of life variable can be determined in a participatory way by surveying how patients themselves define quality. Therefore, 1 QALY means one year in perfect health. So, for example, if chemotherapy could extend your life by 1 year, but it could only do so with 50% of good health (because you suffer severe side-effects), then that treatment delivers 0.5 QALYs. Having that information helps doctors and patients decide whether or not to go for the treatment.
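
For clarity, here is that arithmetic as a tiny sketch (illustrative only, not a clinical tool; the function name is mine):

```python
def qalys_gained(extra_years: float, quality_of_life: float) -> float:
    """QALYs gained = length of life gained x quality of life (0 = death, 1 = perfect health)."""
    return extra_years * quality_of_life

# The chemotherapy example from the text: one extra year at 50% of good health.
print(qalys_gained(extra_years=1.0, quality_of_life=0.5))  # 0.5 QALYs
```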

Health policy-makers also use QALYs to make decisions. For example, in the UK, the NHS’s decision on whether or not to fund a new drug is informed by the drug’s cost per QALY: a drug that, say, costs £50,000 to give half a year in good health costs £100,000 per QALY. The NHS will typically fund a drug only if its cost per QALY falls below a threshold, generally in the region of £20,000–£30,000.
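
And the funding decision in that example works out like this (again only a sketch; the threshold figure here is an illustrative assumption, not official policy):

```python
def cost_per_qaly(cost: float, qalys: float) -> float:
    """Cost-effectiveness ratio: total cost divided by QALYs gained."""
    return cost / qalys

# The example from the text: £50,000 buys half a year in good health (0.5 QALYs).
ratio = cost_per_qaly(cost=50_000, qalys=0.5)
print(f"£{ratio:,.0f} per QALY")  # £100,000 per QALY

# A simplified funding rule: fund only if the cost per QALY falls below a threshold.
THRESHOLD = 30_000  # illustrative figure
print("Fund" if ratio <= THRESHOLD else "Do not fund")  # Do not fund
```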

The concept of QALY recognises that a drug can have negative side-effects, and it also accepts constraints (limited resources usually mean only drugs that are cost-effective per QALY will be funded).

What if we applied the same concept in aid?

One idea is to calculate what we could call an intervention’s Quality Adjusted Future Impacts or QAFI.

If the intervention alleviates current poverty or suffering but comes at a cost to people’s and the community’s future (like the example above, where the NGO’s health intervention damaged the local health system), then that intervention would be <1 QAFI.

If the people and communities participate in the calculation, then they can determine for themselves whether or not they want such an intervention. This could also add nuance to Value-For-Money: if, for instance, an intervention is 0.3 QAFI for £3 million, then perhaps that project is not worth investing in.
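
Since QAFI does not exist yet, any calculation of it is necessarily speculative. The sketch below shows one hypothetical way the numbers above could fit together; the scoring function and every value in it are my own assumptions, not an established method:

```python
# A hypothetical sketch of Quality Adjusted Future Impacts (QAFI).
# QAFI is a proposal, not an established metric: the weighting scheme
# and all numbers here are illustrative assumptions.

def qafi(present_benefit: float, future_impact: float) -> float:
    """Scale present benefit (0-1) by a community-assessed future impact score.

    A future impact score below 1 means the intervention is expected to leave
    people and local systems worse off in the future; above 1 means it
    strengthens them.
    """
    return present_benefit * future_impact

# An intervention that fully relieves present suffering but, in the community's
# judgement, weakens the local health system over time.
score = qafi(present_benefit=1.0, future_impact=0.3)  # 0.3 QAFI, as in the text

# Value-for-money check against the £3 million figure from the text.
cost = 3_000_000
print(f"{score:.1f} QAFI for £{cost:,} works out at £{cost / score:,.0f} per QAFI")
```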

It sounds like a crazy idea. The comparison with medicine is, of course, not 1:1 and there are still a lot of details to be worked out. But if the NHS, which arguably makes more difficult matter-of-life-and-death decisions than most of the aid sector, chooses to utilise (and continuously refine) a comparable framework, then why shouldn’t we explore this option, too?

Our actions as aid workers impact not only present populations but also future generations. Our interventions today could damage local people and systems tomorrow or in ten years’ time. Our current tools like “do no harm” seem woefully inadequate to address this issue.

To fix this, we can take inspiration from the advances in environmental justice and medicine. By “choosing to act carefully” and considering our future impacts (through ideas like QAFI), we can re-imagine aid.

Aid Re-imagined’s mission is to help usher the evolution of aid towards effectiveness and justice through deep, radical, and evidence-based reflection and research — unafraid to venture beyond the realm of development and humanitarianism, using insights from philosophy, economics, politics, anthropology and sociology, as well as management. Aid Re-imagined stands for a more effective and just aid for our new, ever-changing world.

Follow us on Facebook, LinkedIn, and Twitter.


Arbie Baguios is the founding Director of Aid Re-imagined. He is currently a doctoral researcher at the London School of Economics.