Why AI Needs (Bio)ethics, Part II: A Quick & Dirty History of Bioethics

Monika Viktorova
Published in The Startup · Apr 27, 2020

Experimental medicine began developing as a discipline at the turn of the last century — but it wasn’t until after World War II that anyone started seriously considering the ethical ramifications of experimenting on humans. Patient consent was often an afterthought in the early days of ExMed (and regular ol’ doctorin’ as well — I won’t get into the annals of paternalism in medicine but if you want to horrify yourself, google the ‘husband stitch’). Of course, the result of performing procedures on human beings that were neither asked for nor explained was suffering, trauma and other unintended downstream effects.

This wasn’t immediately obvious, or particularly alarming, to doctors or the public, however. Medical science had to become sophisticated enough to be truly dangerous to people for the world to (rightfully) panic and want to do something about it. So how, and importantly, why, did things change? Read on for a quick and dirty history of experimenting on humans and the resultant rise of bioethics, told in 3 parts.

Part 1) The 1947 Nuremberg Code

When the horrific experimentation performed by Nazi physicians on prisoners in concentration camps was uncovered after WWII, it became a global scandal. The trial of the 23 doctors and administrators responsible, known as the Doctors' Trial, took place alongside the other Nuremberg trials of German war criminals and led to the creation of the Nuremberg Code in 1947. Containing 10 principles for experimentation on humans, the code emphasized consent, safety and the balance of risks and benefits. Groundbreaking for its time, the Code became a stepping stone to a growing field of philosophy on the ethics of medicine and medical research: bioethics.

Part 2) The Belmont Report

Unfortunately, the United States government took the Nuremberg Code more as a suggestion than a serious standard. While it had publicly condemned the Nazis and Germany for inflicting suffering on innocent people in the name of research, it was quietly conducting its own medical experiments on vulnerable or imprisoned populations, often without consent. One of the most damning examples was the Tuskegee study, a 40-year-long experiment conducted by the US Public Health Service on the effects of untreated syphilis in a population of impoverished black men in Alabama.

From 1932 to 1972, the men in the study, more than half of whom had syphilis, were told they would receive free medical care, meals and, ominously, burial insurance. They were not, however, told they were in a research study on syphilis, nor that they had it. Researchers actively lied to the participants and, most damning of all, did not treat their condition, watching them deteriorate and in some cases die. Even though penicillin became known as an effective treatment for syphilis in the mid-1940s, the participants remained untreated until the end of the study in 1972. The experiment only ended because a whistleblower leaked it to the press, demonstrating to the US public that the horrors of unchecked medical experimentation could in fact happen anywhere.

In response to the public outcry over the Tuskegee study, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research was convened. Between 1974 and 1978, the commission worked to create standards that would guide medical research in the United States. The resulting Belmont Report proposed three principles for research on human beings: respect for persons, beneficence and justice. These three principles became the cornerstones of bioethics, although scholarship continued to evolve the thinking on what “ethical” research really meant. I won’t go into details on what the three principles mean here — but if you’re curious, have a google or look through the interview transcripts of the original commission members.


Part 3) The Universal Declaration on Bioethics and Human Rights (UDBHR)

Much like digital technologies, advances in medicine relied on research practices that grew beyond national borders. Scientific collaboration globalized in the latter half of the 20th century, meaning that a single research study on humans could be conducted in multiple countries simultaneously. A lack of comparable research standards — or sometimes, their total absence at participating research sites — necessitated the adoption of a global standard.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) took it upon itself to create that standard. To bridge the ethical gaps in research practices worldwide, it made two important choices:

  1. It inextricably linked bioethics to human rights, and as a result,
  2. It grounded the standards in the Universal Declaration of Human Rights

This ensured that the standards would be recognized as flowing from the most basic rights that should be afforded to every human being, no matter where they live or who they are. After two years of consultations and debates, UNESCO released the Universal Declaration on Bioethics and Human Rights (UDBHR) in 2005. The ambitious document includes 15 principles for conducting ethical research on humans, building on the preceding thinking in the Nuremberg Code and the Belmont Report.

The UDBHR calls for human dignity, human rights and fundamental freedoms to be respected. It demands that research be carried out only with the prior, free and informed consent of the participants, and cements that someone needs to understand, at a basic level, the fine print of the study they’re signing up for. The Declaration suggests that scientific research (even research that is explicitly for commercial purposes) should broadly benefit society, and that benefits to participants should be maximized while potential risks minimized. It calls for the protection of participants’ privacy and for the explicit consideration of how the research might impact future generations. Through its 15 principles, the Declaration creates a broad and aspirational code of ethics for research on human beings.

Although the principles of the UDBHR might seem lofty (what does respecting human freedoms look like if your study population consists of pre-breakfast toddlers?), it achieves a few big wins:

  • It’s not just theoretical, it’s also tactical: A further 10 articles in the Declaration give some specific guidance that can be used as a tactical how-to, demystifying some of the broad language of the principles
  • It acts as a global baseline standard: even if it’s not needed in those jurisdictions that already have strong guard rails in place, it extends those guard rails to researchers who may need to rely on them to push back on or call out unethical practices
  • It carries compelling authority despite not being legally binding: it is the first international set of bioethics standards adopted by governments globally

The UDBHR was the culmination of 60 years of trial and error that saw reckless disregard for human life, dignity and happiness. With the rapid progression of medical science, the rules will continue to evolve. What remains constant is the lesson we’ve learned the hard way: you have to think about ethics at the outset.

So what does this mean for ethical AI? If we think about our interactions with AI as a constant form of experimentation (where AI is the scientist and we’re the participants), how can we use the history of bioethics to help us avoid some of the historical pitfalls we saw in medical research?

Stay tuned for Why AI Needs (Bio)ethics Part III

Disclaimer: Views here belong to me and do not represent those of my employer.

You can follow me here or find me on Twitter @mviktoro

