Published in The Startup

How Ethical Is the Future of AI Healthcare?

When informed consent, artificial intelligence and bioethics hold a complex conversation.

Photo by Jorien Stel via Pexels

If a patient is unable to say “no” to a life-saving procedure, should their life be extended even if the prospects are poor?

If an infant has no hope for survival, should their organs be donated to someone else who does?

Can artificial intelligence (AI) accurately predict disease outcomes across race, gender, and socio-economic lines?

These questions might look like the biggest of grey areas, but bioethics might just have the answers.

But what is bioethics?

In the 1960s, revolutionary developments in healthcare like organ transplants, artificial ventilators and prenatal testing introduced unprecedented ethical problems.

The machines and procedures that were helping people stay alive beyond expectations, and bringing in new life otherwise thought impossible, were also raising ethical issues in the public conscience.

The United States’ civil rights movement, the Cuban Missile Crisis, the rise of second-wave feminism, the Vietnam War: the 60s invited debates around issues that hadn’t been publicly confronted before, such as abortion, euthanasia, capital punishment, and the allocation of scarce medical resources.

Bioethics is, by nature, interdisciplinary. It combines philosophy, medicine, law, economics, religion, and public policy. It dives into questions like: what is the value of life? What is the significance of being human?

What it isn’t is a traditional focus on doctor-patient relationships and the virtues of a good doctor.

Fun fact: the first human heart transplant was performed by Christiaan Barnard in ’67. Though the patient died 18 days later, the surgery was considered a success.

As both a practical and theoretical discipline, bioethics set out to determine how to make the right decisions through these steps:

1. Fact deliberation
2. Value deliberation
3. Duty deliberation
4. Testing consistency
5. Conclusion

(Image via Author)

But this brings us back to the first ethical issue we asked: mentally competent patients can’t be treated against their will, but what about patients who are unable to say “no”?

Here’s a quick example.

In 1975, Karen Ann Quinlan of New Jersey fell into a chronic, persistent vegetative state and was put on a respirator. Her doctors refused her family’s wishes to take her off it, fearing that doing so would be considered unlawful homicide, so the case had to go to the New Jersey Supreme Court for a decision.

Not only does this force the question of whether this would be considered murder, but also whether the medical decision someone made (while mentally competent) is still valid now that they’re mentally impaired.

What about human experimentation?

History is filled with horror stories of human experimentation: from Nazi German physician Josef Mengele’s experiments to those at Holmesburg Prison.

The US, especially, produced many troubling case studies in human experimentation. Patients at the Jewish Chronic Disease Hospital in Brooklyn were injected with live cancer cells without their permission. From 1965–71, Willowbrook State School in New York inoculated mentally impaired children with the hepatitis virus.

It was concerns over situations like these that led to the creation of the first bioethical institutions, which settled on three models for understanding bioethical issues:

1. The straightforward application model (utilitarian: make the decision that is best for all)
2. The car-mechanic model
3. The biology/medicine model

(Image via Author)

The straightforward application model was too reliant on abstract and general theory, while real life bioethical controversies are messy.

The car-mechanic model doesn’t give us definitive answers. Though cars (case studies) operate according to laws of physics (ethics), you don’t actually apply those laws to fix cars.

Here’s a great example. Theresa Ann Campo Pearson (famously, Baby Theresa) was born in 1992 with anencephaly (missing parts of the brain and skull). Since she had no hope of survival, her parents requested that her organs be transplanted to other children who needed them. Though her doctors agreed, Florida law didn’t. So after nine days Baby Theresa died, and her organs had deteriorated too much to be transplanted.

The main question here is: is it ethical to donate her organs for transplant to save other children, while causing her immediate death?

Or is it unethical to kill in order to save?

These are the competing principles: “it’s unethical to kill to save” versus “save as many children as possible”. There’s no definitive answer on which rule is more correct.

The biology/medicine model is the best approach. Your family physician can treat an apparent seasonal stomach infection with a standard set of techniques, but if your stomach still hurts three weeks later, only a doctor with the relevant expertise will know what to do.

The real difference? A unique problem (or ethical issue) would mean a unique, non-rote solution.

Artificial Intelligence, Healthcare, and the Ethical Dilemma

With a strong ability to learn from and integrate large sets of clinical data, AI’s potential role in healthcare is immense. It can assist with diagnosis, support clinical decision-making, and suggest personalised medicine. Deep-learning systems have classified skin cancer as accurately as certified dermatologists.

But AI’s ethical conversations are struggling to keep up with the development of machine learning. The AMA Journal of Ethics’ major concerns are the risks to patient confidentiality and privacy, the boundaries between a machine’s and a physician’s role in patient care, and the need to reconstruct medical education to confront the inevitable changes that AI’s progress brings to healthcare.

Balancing the risks and benefits

AI can improve the efficiency and quality of healthcare delivery, but can also threaten patient confidentiality, informed consent, and autonomy.

How? Because people might feel uncomfortable with the idea of an AI robot operating on them instead of another human. Because if the robot makes an error, physicians might not be able to work out what happened and why. This is the black box problem: the complexity of AI makes it opaque to anyone beyond its basic inputs and outputs. Gaugarin Oliver’s fantastic article on “Explainable AI” (XAI) for healthcare highlights ways we can overcome the black box problem and help medical practitioners maintain better accountability and understanding when using AI technology.
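To make the contrast concrete, here is a toy sketch of what an “explainable” prediction could look like: alongside a raw risk score, the system reports how much each input pushed that score up or down. The feature names and weights here are invented purely for illustration; real XAI methods work on far more complex models than this simple weighted sum.

```python
# Toy sketch of the idea behind explainable AI (XAI): return an
# explanation alongside the prediction. All names and weights below
# are hypothetical, not from any real clinical model.

FEATURES = ["lesion_diameter_mm", "asymmetry_score", "colour_variation"]
WEIGHTS = {"lesion_diameter_mm": 0.08,
           "asymmetry_score": 0.5,
           "colour_variation": 0.3}
BIAS = -1.2

def predict_with_explanation(patient):
    # Each feature's contribution is its value times its learned weight;
    # summing the contributions (plus a bias) gives the raw risk score.
    contributions = {f: WEIGHTS[f] * patient[f] for f in FEATURES}
    score = BIAS + sum(contributions.values())
    # The explanation ranks features by how strongly they moved the score,
    # so a clinician can see *why* the model flagged this patient.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, explanation

score, explanation = predict_with_explanation(
    {"lesion_diameter_mm": 6.0, "asymmetry_score": 0.9, "colour_variation": 0.4}
)
```

A black-box system would return only `score`; the point of XAI is that `explanation` travels with it, giving physicians something concrete to review, question, and document during the informed consent process.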

Legal issues also arise: medical malpractice, product liability, and the inability of an AI system to give a logical explanation for why it recommended a particular medicine to a particular patient.

If physicians themselves don’t completely know how to explain AI’s predictions and errors to their patients, how does this impact the informed consent process in healthcare?

As of now, the best recommendation for maintaining bioethics in AI is its flexible incorporation into healthcare — AI should complement and assist, rather than replace, human roles and decision-making.

Bioethics is essential to the practice of not only healthcare, but any field involving the treatment or use of living organisms. Clinical trials and practice involve complex judgements that AI cannot replicate: contextual knowledge, the ability to read social cues, and genuine human compassion. With an immediate future in telemedicine during COVID-19, it’s these conversations around AI and ethics that we need to hold the most.
