Why medical ethics will become more important than ever during the next decade
When considering the most important aspect of delivering quality healthcare, your mind might go to cost, access, technology, or training. In reality, it's something more fundamental, yet connected to all of the above: ethics. Costs, access, technology, and training have all changed many times over the years, but the common denominator in healthcare, ethical delivery, has changed very little.
Case in point: the Hippocratic Oath. First composed in ancient Greece, the Hippocratic Oath is a fundamental part of the process of becoming a physician; its pronouncement is a rite of passage for new doctors all around the world. The oath has been altered several times to reflect changes in medicine, but its core purpose remains the same: to give voice to the basic ethical principles that underpin the practice of medicine. Now, however, those ethics appear to be on a collision course with rapidly expanding technology.
From smart wearable devices to artificial intelligence, a slew of new technologies is ushering in an era of massive disruption to healthcare, one that could rival the transformation medicine underwent at the turn of the 20th century, when modern scientific medicine first took hold. However, despite what some movies portray, robots and AI won't be replacing human doctors, nurses, or most other specialists anytime soon.
In fact, while technology is already making the job of many healthcare providers easier, it’s simultaneously making other aspects harder. New health tech presents new challenges — particularly in the ethics department.
Identifying Ethical Issues
When it comes to the inevitability of growing ethics concerns in medicine, one need look no further than artificial intelligence. AI is no longer just a “concept” when it comes to patient care — it’s here, and expanding quickly. Medical professionals are already using AI-based tools for medical record transcription, insurance fraud investigation, and even surgical assistance. Some experts predict that healthcare AI will be a $20 billion market by 2025.
Bias
That kind of rapid growth doesn't come without problems. For all the potential of AI-based healthcare, the tools themselves have noticeable weak spots, one of which is unintentional bias. For diseases that are rare or newly emerging, relevant data is often severely limited, and AI tools may therefore fail to recognize the value of certain treatment protocols. AI-powered technologies may help detect and diagnose rare pathologies, but they can still struggle to recommend effective treatment strategies in precisely those areas where data is scarce.
Furthermore, the data being fed to AI algorithms tends to reflect healthcare and outcome disparities based on gender, race, and other demographic differences. Unless AI can be engineered to overcome these pre-existing biases, those disparities are likely to become even more entrenched.
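To make that failure mode concrete, here is a minimal sketch using entirely synthetic data; the two-group split, feature weights, and sample sizes are all invented for illustration and don't correspond to any real clinical dataset. It trains an ordinary classifier on data dominated 20-to-1 by one group, then measures accuracy separately for each group:

```python
# Illustrative sketch (synthetic data): how demographic imbalance in
# training data can translate into unequal model performance across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic patient features whose relationship to the
    outcome differs slightly by group (controlled by `shift`)."""
    X = rng.normal(size=(n, 5))
    logits = X @ np.array([1.0, -0.5, 0.8, 0.0, 0.3]) + shift * X[:, 3]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# The majority group dominates the training data 20:1.
X_a, y_a = make_group(10_000, shift=0.0)  # well-represented group
X_b, y_b = make_group(500, shift=2.0)     # under-represented group

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array([0] * len(y_a) + [1] * len(y_b))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

for g, name in [(0, "majority group"), (1, "minority group")]:
    mask = g_te == g
    print(f"{name}: accuracy = {accuracy_score(y_te[mask], pred[mask]):.2f}")
# The model fits the majority pattern; the minority group, whose
# feature-outcome relationship differs, sees noticeably lower accuracy.
```

Nothing in the standard training procedure flags the gap; unless someone explicitly measures performance per group, the disparity simply goes unnoticed and is reproduced at scale.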
Limitations
In a case study presented by researchers from Columbia University, Microsoft, and New York Presbyterian Hospital, an algorithm made the surprising prediction that patients who had both pneumonia and asthma were more likely to survive than those who only had pneumonia. The researchers instantly knew this must be incorrect, but why had the AI reached that conclusion?
The answer was that the algorithm was missing a critical piece of context: patients with both asthma and pneumonia were automatically admitted to the hospital's intensive care unit, and that aggressive care greatly improved their likelihood of recovery. The model had mistaken the effect of the treatment for a lower underlying risk. In such an instance, relying on the AI rather than a caregiver's direct knowledge, say, by using the model to decide which pneumonia patients could safely be sent home, would have steered the highest-risk patients away from the very care that was protecting them.
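This failure mode is easy to reproduce. Below is a minimal sketch with fully synthetic data, where the admission policy, coefficients, and prevalence numbers are all invented for illustration; it shows how a risk factor that reliably triggers aggressive treatment can look "protective" to a model that never sees the treatment variable:

```python
# Synthetic illustration of the asthma/pneumonia confounder: a risk
# factor that triggers aggressive treatment can appear protective to a
# model trained on observed outcomes. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000

asthma = rng.binomial(1, 0.2, n)
severity = rng.normal(size=n)

# Hospital policy: asthma patients go straight to intensive care,
# which sharply reduces their risk of dying.
icu = asthma == 1
logit = -1.0 + 1.5 * severity - 2.5 * icu  # ICU care lowers mortality
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# A naive model sees only asthma and severity, not the treatment.
model = LogisticRegression().fit(np.column_stack([asthma, severity]), died)
print(f"asthma coefficient: {model.coef_[0][0]:+.2f}")
# Negative coefficient: the model concludes asthma "protects" against
# death, when in reality it triggered the care that saved patients.
```

A clinician reviewing the learned rule would spot the problem immediately, which is exactly the kind of direct knowledge that caught the error in the actual case study.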
Responsibility
If — or more likely, when — AI does make a mistake, who should bear the responsibility (and blame) for the consequences? The healthcare provider who implemented the AI-based tool, or the engineer who designed it? Or, in some cases, should it be the patient who failed to divulge critical information the AI needed?
It would be wonderful if we lived in a world where people were more accepting of mistakes, but when health, and perhaps life itself, is on the line, things change. Understandably, patients will want to know the risks that come with new technologies like AI in their care, and providers will want safeguards against legal liability.
Bottom Line
Bias, limitations, and responsibility are just the tip of the iceberg in terms of medical ethics. One of the biggest issues of all — patient privacy — hasn’t even been discussed here (mainly because it warrants more than a paragraph or two of exploration).
The reality is that while technology will continue to massively improve health and healthcare, medical professionals and policy-makers alike would be remiss to ignore the obvious ethical dilemmas this wave of disruption carries with it.
Preparing for the changes ahead goes far beyond the technical work of engineering reliable tools and systems. It requires a deep understanding of the ethical challenges involved, educating providers about those challenges, and likely building entirely new ethical frameworks within which medical professionals will work.
In other words, medical ethics will be one of the most critical, ongoing concerns in medicine over the next decade. It's not enough to innovate technologically; we need continual discussion and adjustment of the ethics of applying these innovations to real, live human beings. That will demand proactive thought leadership, empathetic decision-making, and creative solutions as varied as the technologies themselves.