The Zeroth Principle of Bioethics

I was recently whacked upside the head by the difference between having all the data and knowing the facts.

I’ve spent the last two years studying Machine Ethics and the related fields of Machine Learning and Biomedical Ethics, and one thing has become painfully clear: We—meaning society as a whole—need to develop and adopt a core set of ethical principles that will allow us to integrate the products of the high tech and AI revolutions into society in a way that works for us all. In this article I will deal with one of the ethical realms with the most pressing needs: Biomedical Ethics. To do so, I will first lay out the generally accepted Four Principles of Biomedical Ethics, and then introduce what I see as the requisite new principle, and why I see it as crucial. Finally, I will offer a personal perspective that illustrates both the principle and why it is so important. In part, this article returns to a discussion that I started last year in my article “On ethics and humanizing intelligent machines”.

The Four Principles

The fields of medicine, medical research and the like are amongst those most closely tied to ethical questions, and driven by formal codes of ethics, and ethical review. As such, biomedical ethics is one of the fields explored and referred to by those interested in the ethical questions raised by artificial intelligence, machine learning, and devices that interact with us as if they were human.

The most influential system of biomedical ethics for the last three or four decades has been what is known as the “Four Principles”. The four principles, their use and implications were first set out by Tom Beauchamp and James Childress in their 1979 book, Principles of Biomedical Ethics. The book is now in its seventh edition, and while structurally the book remains much the same, the content has been seriously updated. The four principles that it defines are: respect for autonomy, non-maleficence, beneficence, and justice. Let us review them, though not quite in that order.

If the average layman knows anything about biomedical ethics, it is the line “first, do no harm,” often cited as being part of the Hippocratic Oath, although those words don’t actually appear in either ancient or modern versions of the Oath. Nonetheless, the principle that it represents—non-maleficence—has been a fundamental value throughout the history of Western medicine, from Hippocrates to the twenty-first century, and is the second of Beauchamp and Childress’s four principles.

Of course, the most obvious goal of medicine and health care is not merely to do no harm, but to accomplish actual good on the patient’s behalf. This is the next of the four principles: beneficence. One of the key things to note once we have both non-maleficence and beneficence is that there are trade-offs between them. Many, perhaps even most, medical treatments and procedures carry the potential of some sort of harm, traded off against the intended good consequences. The need for trade-offs and judgement calls between the demands of the different principles is a hallmark of principle-based ethics in general, and particularly of biomedical ethics.

The fourth principle of bioethics as defined by Beauchamp and Childress is justice, the requirement that biomedical goods and services be distributed fairly. This principle is at the heart of much of the politics around health care, and I hope I will be forgiven for giving it somewhat short shrift in the context of this article and my proposed additional principle. Ultimately, should the new principle be taken seriously, there will be real trade-offs and interactions with all of the existing principles, including justice, but for now, let us just note that justice is the fourth principle.

This brings us around to the principle that I skipped at the beginning of the list: respect for patient autonomy, Beauchamp and Childress’s first principle. It’s worth noting that by “first”, I (and they) mean first in the order of presentation, and not in terms of importance. It may be traded off against the other three, when they come into conflict. Still, it is an important principle that has been much honored in professional ethics in the last several decades.

According to this principle, individuals, whether medical patients or test subjects, must be both free to choose for themselves and well enough informed to make that choice. They should not lose control of their lives merely because the doctor says so, or “knows best”. Beauchamp and Childress are careful not to talk in terms of absolutes, but rather describe respect for autonomy in terms of “a substantial degree of understanding and freedom from constraint, not a full understanding or complete absence of influence”.

Autonomy is a complex issue. Patients want the medical professionals who treat them to be experts, to know what will cause the least harm and do the most good. Still, they have their own priorities and deserve to be allowed to make their own choices. In order to do so reasonably, they need to be adequately informed, and to understand the implications of those choices. Beyond that, there is the issue of competency. Children, and those whose capacities are diminished by infirmity, disability, the very conditions they are being treated for, or the effects of those treatments, may lack the judgement, awareness, or understanding required for an informed decision or consent. The judgement of practitioners, family members and surrogates may have to be balanced against, or complement, that of the patient.

This brings us to my new principle, the requirement that relevant embodied, physical, biological experience must be involved in the making of judgements and decisions. Just as patients must deal with the knowledge and expertise of practitioners, both patients and practitioners are increasingly faced with the information and analysis provided by artificial intelligence and other products of high tech.

The same sorts of Machine Learning (ML) techniques that allow a Go-playing system to emulate and even surpass the skills of a world champion human player can also be applied in the realm of medical diagnosis. An ML system can now be created, fed millions of case histories or observations, and come up with a diagnosis that rivals that of a human expert, who is familiar with only thousands of relevant cases. As such abilities increase, doctors and patients may feel coerced into accepting the machine’s diagnosis or recommendations. It is, however, important to realize that these systems don’t even know what illness or pain is. They have never experienced them; never been sick, in pain or afraid; never regretted a mistake, nor had to deal with the truly unexpected. They have not had, cannot have had, experience, and experience is required for sound judgement.
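To make that point concrete, here is a deliberately toy sketch of the pattern such systems follow: match a new case against recorded past cases and return the closest label. The symptom vectors, labels, and nearest-neighbour rule are all illustrative assumptions, not any real diagnostic method; the point is that the output is pattern-matching over data, with no experience of illness anywhere in the loop.

```python
# Toy sketch of data-driven "diagnosis": nearest-neighbour matching
# over hypothetical past cases. All data here is invented.
from math import dist

# Hypothetical historical cases: (symptom vector, diagnosis label).
# A real system would hold millions of such records.
CASES = [
    ((1.0, 0.0, 0.2), "flu"),
    ((0.9, 0.1, 0.3), "flu"),
    ((0.1, 1.0, 0.8), "migraine"),
    ((0.2, 0.9, 0.9), "migraine"),
]

def diagnose(symptoms):
    """Return the label of the closest recorded case.

    Nothing here knows what illness or pain is; the function only
    measures distances between numbers and echoes back a stored label.
    """
    return min(CASES, key=lambda case: dist(case[0], symptoms))[1]
```

However many cases such a system is fed, the structure is the same: statistics over records, never experience of what the records describe.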

What, then, is meant by “experience” in this context? I propose that it includes three levels: It must be embodied, physical and biological, in order to be fully commensurate with the needs of bioethics. Here’s what I mean by each:

I take the term “embodied” from Jeff Hawkins. In his talk, “What Is Intelligence, that a Machine Might Have Some?”, he defines “Embodiment” in terms of existing in some world or domain, not necessarily the physical world; of having sensors that measure that world; built-in behaviors that affect that world; emotions, drives or motivations; and an episodic, spatial memory of events and actions that occur in the world.

An AI could be “embodied” in this sense if it inhabited the “world” of high finance, monitored the stock market, business news and current events; was capable of buying and selling stocks, bonds, futures and the like; was motivated by profit and risk management; and could remember its actions and the events in that world.

Similarly, an AI might be seen to be embodied in the world of medical diagnostics if it not only had access to a huge repository of medical data and resulting diagnoses, but could also recommend diagnoses and treatments to doctors, track whether the doctors accepted those recommendations, and see how the cases turned out. Merely analyzing data on a case and offering a diagnosis, without interactive feedback on how each diagnosis was accepted and acted upon and what the results were, would not suffice, by this definition of “embodiment”.

Inhabiting the realm of medical diagnosis and treatment, however, only qualifies as the most abstract form of embodiment, and should be viewed as embodiment only in the most technical sense. The next level of embodiment, “physical embodiment” more closely matches the everyday sense of the term. An AI, a robot, equipped with visual and audio sensors, and able to move in the physical world, can be seen as physically embodied. Such a robot must have at least a rudimentary “understanding” or model of the physical world inhabited by concrete objects, some of which are immobile, some of which can be moved and some of which move themselves.

There are, of course, many degrees of sophistication of embodied AIs, from the more advanced versions of the Roomba, to driverless cars, to systems like Boston Dynamics’ Spot, BigDog and Atlas, or Honda’s ASIMO. Fully autonomous robots will not exist for many years, probably decades. Still, we are already seeing devices with rudimentary physically embodied intelligence, and so it is not beyond possibility that AIs with embodied intelligence will soon be operating in medical, healthcare, or biological research environments. Michael and Susan Leigh Anderson, for instance, have experimented with care-giving robots. These systems have only a minimal amount of either medical responsibility or physical experience, but they do indicate that the potential is on the horizon.

This brings us to the term “relevant”. Merely being carried around by a mobile physical system does not grant an AI physically embodied experience, unless the same AI is navigating the physical world and doing the biomedical analysis or formulating the biomedical advice, or performing the biomedical service. In order to fulfill this requirement, the experience of the physical world must inform the AI’s biomedical process.

Mere physicality is, I would argue, still not enough. Being able to navigate the world and distinguish self-moving or living things from the immobile and inanimate is not sufficient experience for an intelligence to understand the need for autonomy, the experience of pain and disability, or the benefits of treatment. These things, all prerequisites to understanding the Four Principles and their importance in treatment, require some form of biological experience.

For instance, in near-end-of-life decision making, balancing non-maleficence with beneficence may mean a physician must weigh the extension of life against the minimization of pain. Beauchamp and Childress write:

The use of life-sustaining treatments occasionally violates patients’ interests. For example, pain can be so severe and physical restraints so burdensome that these factors outweigh anticipated benefits, such as brief prolongation of life. Providing treatment may then be inhumane or cruel. Even for an incompetent patient, the burdens can so outweigh the benefits that treatment is wrong, not merely optional.

In order to make such a judgement, it really is imperative that the decider understand the concept of pain, an understanding firmly grounded in biological experience.

When we start talking about the possibilities of artificial intelligence with biological experience, we are pretty well into the realm of science fiction rather than emerging technology. Still, I will temper my requirement by saying that it is probably sufficient for an AI to have experience based upon a reasonable facsimile of life rather than solely DNA-based living tissues. One can envision lifelike androids that fulfil the requirement. Nonetheless, any truly responsible bioethical judgement would seem to require a basis in biological (or quasi-biological) physical embodiment.


So far, I have been approaching this issue in an abstract fashion, and largely from the perspective of biomedical ethics dealing with the advent of AI and ML technologies; but as I was working on it, the importance of physically embodied biological experience, human experience, even today, was brought home to me in a very personal way. As a result, I would like to shift gears and address this question from the perspective of human experience—my human experience. I do so cognizant that academic objectivity, rather than subjectivity, is often regarded as the norm. Still, it seems appropriate to rely on experience and judgement in arguing for the value of experience-informed judgement with regard to technical input.

Even without Artificial Intelligence, tech needs to be judged by experience. A scan of my recent hematoma brought this point home for me.

In mid-May, I developed an idiopathic acute subdural hematoma, that is to say, unexplained bleeding between the membrane that surrounds the brain and the brain itself. It was a serious, even life-threatening condition, one that caused the Emergency Room doctor at my local suburban hospital to immediately transfer me to the Emergency Department at Mass General. My case was complicated by the fact that I also suffer from atrial fibrillation and was therefore on a blood thinner for which there is no approved reversal agent. This made me a candidate for a new drug trial, so I quickly found myself the focus of a very large team of emergency, cardiology, neuroscience, neurosurgery, medicine and research doctors, and the subject of multiple tests, including CT scans and a battery of neurological and cognitive function tests.

My tests told two somewhat conflicting stories. The function tests showed no degradation in function at all, while the scans showed a very serious bleed and pressure on the brain. The contrast was brought into clear focus when a doctor carrying a number of documents entered my room, where I was having a discussion with a couple of researchers regarding my thoughts on machine and biomedical ethics. He excused himself, turned around and left, only to come back a minute or two later. He explained that he’d been certain he had entered the wrong room, as the patient shown in the scans he was carrying clearly shouldn’t be operating at the level of the discussion he’d just heard. I explained that what had triggered the whole discussion was another doctor’s comment on the distinction between treating the scans and treating the patient.

While I was in, first, the neuroscience ward of the ICU and, then, the regular neuroscience ward, every time a doctor or nurse came into my room, they put me through a battery of verbal and physical tests to determine whether I was suffering neurological or cognitive dysfunction. None was ever seen, yet the scans all showed a major bleed, with very high pressure on my brain. In fact, even after two weeks, when they finally decided that I had to have an operation to drain the blood and relieve the pressure, there were still no signs of dysfunction.

In this instance, there was no AI or ML involved, just the high tech of 3D scans of the brain, showing my skull and its contents. Even the relatively simple technology of a CT scan had a coercive power. The clear diagnostic meaning of the scan was “this man needs a craniotomy—now”. Talking with me, testing my reflexes, my memory, my abilities to feel, move and reason all said “this man is in perfect health”. Treating the scan, I would have gone under the knife. Treating the patient became more involved: observing closely, delaying action while the body dealt with the problem. My doctors and I decided to go with human judgement, based upon experience. The operation was delayed two weeks, allowing a much less extreme technique to be used, and I was watched, and watched myself, closely. When the decision was made to change from being guided by the (lack of) symptoms to the dramatic contents of the scans, it, too, was because the doctors’ experience told them the risk of inaction was now just too great.


The practice of medicine, and of medical science, has always been based upon human judgement, the judgement of practitioner and patient or subject, judgement based upon human experience. What else was there? Now, we are seeing the advent of more and more sophisticated technology, technology that can see what we cannot, that can analyze what we cannot, technology that can passively or actively advise. My CT scans were telling a story, were advising—in the smallest way—a course of action. Emerging technological advances give an increasingly powerful voice to technology’s advice. We have always respected the value of biological, physically embodied experience. Through informed consent and respect for patient autonomy we have balanced the experience of the patient and that of the practitioner. Now, however, as we create systems that analyze, diagnose and advise us with little or no embodied experience, the time has come to codify this respect for human experience, at least until our creations are capable of embodied, physical and biological experience, and of sound judgement informed by that experience.

This is what I call the “Zeroth Principle of Biomedical Ethics”, one that we have had all along, that has underlain our thinking and our practice, but to which we have never given full voice. As our machines become more powerful and skilled, we need to make it explicit. Machines, intelligences without full human experience—experience grounded in physical, biological bodies—must not be imbued with decision-making capability and authority until they are capable of judgements based upon experience that approximates our own. This is particularly true in the realm of bioethics, where our bodies are the direct subject matter, but it is also true in other realms, realms where AI is coming into play. I have previously argued that much of the attention given to the “Trolley Problem” is misplaced and even harmful. One of the reasons is that toys like the so-called “moral machine” assume that AIs can make value judgements distinguishing classes of humans—the fit, the elderly, the law abiding, the criminal—and distinguishing professions and backgrounds, all without any real experience. In fact, they cannot.

And so, the Zeroth Principle of Biomedical Ethics:

“Judgment based upon embodied, physical and biological experience must not be sacrificed to the power, efficiency or allure of technological tools.”