The Future of AI in Medicine

TiredMedStudent
The Quantastic Journal
9 min read · Jun 23, 2024

Imagine: it is the year 2040. Jeff Bezos has just unveiled the AI-doctor, a digital physician powered by artificial intelligence. It has proven, beyond doubt, that it can conduct a full patient visit — from history taking to diagnosis and treatment — as well as any given doctor in any field. Would you use this technology?

If you wanted to see your normal doctor, you might have to wait 3–4 weeks. However, Bezos’ AI-doctor is easily available to you. You simply walk to the nearest health pod, are quickly scanned, and the machine tells you what is wrong with you and dispenses an appropriate medication.

To some, this may seem like a dystopian future. To others, it seems like the long-awaited solution to their healthcare needs. Regardless, this future may be closer than you think.

Companies like Forward are creating “self-care” pods that can detect and help manage chronic conditions autonomously. Other startups are currently attempting to automate many parts of a doctor visit, such as history taking and diagnosis. Some are creating AI programs that automatically read and diagnose pathology slides or imaging studies. Some have speculated that AI can already outperform doctors in many key tasks such as reading EKGs and answering patient questions.

As we hurtle towards this seemingly inevitable future, it is imperative to stop and think about the implications of these technologies. What happens if we are successful?

The Difference Between a Human Doctor and an AI-Doctor

In theory, an AI-doctor would be interchangeable with a human doctor when they both produce the same outputs (or decisions) given a common input (patient). For example, if both the human and the AI-doctor order the same tests, procedures, referrals, and more, it would be difficult to tell a patient that one is going to produce better health outcomes than the other. If this were the case, many people would find it difficult to justify paying the expensive fees associated with a doctor’s visit, as opposed to a quick, cheap, and supposedly equally effective visit by an AI-doctor. Despite this apparent parity, there is still something missing in our analysis: the human touch.

In medical school, many professors spoke to me at length about how physical touch has helped them heal patients. So strongly did my professors believe in physical touch that they would often tell me that if I did not lay hands on my patients, I had not truly helped them. Scientists have attempted to answer precisely what it is that happens during physical touch that patients respond to. One large meta-analysis found that physical touch and eye contact from the physician trigger a release of oxytocin, endorphins, and other neuropeptides that contribute to improved mood and reduced stress in the patient.

It’s hard to pinpoint exactly what human connection provides, yet both doctors and patients often testify that it aids the healing process. Some studies have shown that when patients view their relationship with their doctor as positive, they are more likely to experience positive outcomes. Others postulate that this effect derives from the fact that patients who are happy with their doctor are more likely to experience a placebo effect from interventions, or perhaps simply follow directions better. However, the evidence becomes more compelling as we dive deeper.

Another clear example of the healing properties of physical touch is skin-to-skin contact between mother and baby. Research indicates that this action not only contributes to the long-term bond between mother and child, but also protects the baby from ailments like respiratory diseases. It is additionally associated with improved breastfeeding, neurodevelopment, cardiopulmonary stability, and much more. The evidence behind skin-to-skin contact is so strong that most hospitals implement it as the standard of care. Although these associations are not universally replicated, it is an incredible testament to the human condition that such a simple and distinctly human action has the potential to generate so many benefits.

I think this aspect of human connection and healing inherently makes a human doctor superior to an AI-doctor of equal competence. However, I don’t think this invalidates the utility of an AI-doctor. We can demonstrate this concept by comparing safety features in cars. Although luxury vehicles like Porsches may have additional safety functions that more basic Hondas do not, we don’t force everyone to buy Porsches. Likewise, many patients may not be able to afford an expensive human doctor, and in this case, having access to a good AI-doctor would be tremendously useful.

The Dangers of an AI-Doctor

The benefits of a competent AI-doctor are abundantly clear: it expands access to healthcare for many patients. However, many people fail to see the dangers of applying AI in medicine. To understand the problems at play, we must first understand the basic concepts behind how AI works. Most of the AI models that are popular today are in fact large language models (LLMs), such as the one behind ChatGPT. LLMs don’t “think” or “reason” the way humans do. Instead, they essentially assign probabilities that a given word or token will follow another. An LLM determines these probabilities by parsing immense amounts of data. In this way, it learns contextual relationships between words and conducts huge calculations to predict the flow of text reasonably well.
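The prediction step can be sketched with a toy bigram model — a drastic simplification of a real LLM, using an invented miniature corpus — but it shows the core idea of assigning probabilities to the next word based on counts in the training data:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "immense amounts of data" an LLM is trained on.
corpus = (
    "the patient has a fever . the patient has a cough . "
    "the doctor orders a test ."
).split()

# Count how often each word follows each other word (a bigram model --
# vastly simpler than a real LLM, but the same basic principle).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Estimated probability of each word that can follow `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("patient"))  # {'has': 1.0} -- "has" always follows "patient"
print(next_word_probs("a"))        # fever, cough, and test each seen once after "a"
```

Note that the model only reflects the frequencies in its corpus: it has no notion of whether the corpus itself is representative, which is precisely the limitation discussed below.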

Similarly, LLMs will be able to replicate human doctors’ decisions because, for the most part, doctors make decisions based on large data sets. It is therefore in theory possible to feed this data to an AI model and train it to approximate the outputs of a current human doctor. Herein lies the problem: an LLM can only operate on the data that is given to it. Since it does not truly think or reason, it cannot gauge the completeness of its own data. In other words, it is not self-aware and therefore cannot determine whether its outputs are being generated from high- or low-fidelity data. The gist: if I feed an LLM bad data, it will produce bad outputs rather than a statement that its inputs were bad.

As I showed in a previous article, there are huge biases in the data that guide current medical decision making. Financial conflicts of interest, falsification of data, manipulation of statistics, and publication bias toward positive findings are only a few of them. This data is rapidly being used to train an ever-increasing number of AI models in medicine. Eventually, the data set will become thorough enough to allow an LLM to approximate a human doctor. When this happens, we will essentially be disarming ourselves of the only way in which we can correct our course.

When an AI-doctor is introduced, its outputs will be far more consistent than those of the average human doctor. This will, in effect, prevent us from noticing the hidden costs of our current decisions, because there will be no variation in output. We will be creating a gigantic self-fulfilling prophecy: we use faulty data to train a faulty AI model, which in turn applies the flawed principles perfectly and confirms that we are pursuing the best possible path for our patients. We won’t be able to tell that some negative outcomes could have been prevented, because we will have no data on alternative clinical reasoning. In other words, AI will lock us into our current medical paradigm.
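The lock-in argument can be made concrete with a small, entirely hypothetical simulation. Assume treatment B is truly better than treatment A, but the (biased) literature recommends A. Human doctors occasionally deviate from the guideline, which generates the outcome data needed to spot the error; a perfectly consistent AI never does. The treatment names, recovery rates, and deviation rate below are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical ground truth: B is actually the better treatment,
# but the biased guideline recommends A.
TRUE_RECOVERY = {"A": 0.60, "B": 0.75}

# Human doctors are imperfect: a small fraction deviate from the guideline.
human_policy = lambda: "B" if random.random() < 0.10 else "A"
# An AI-doctor trained on the biased data is perfectly consistent.
ai_policy = lambda: "A"

def observed_rates(policy, n=100_000):
    """Simulate n patients and report (recovery rate, sample size) per treatment."""
    stats = {"A": [0, 0], "B": [0, 0]}  # treatment -> [recoveries, patients]
    for _ in range(n):
        choice = policy()
        stats[choice][0] += random.random() < TRUE_RECOVERY[choice]
        stats[choice][1] += 1
    return {t: (r / max(k, 1), k) for t, (r, k) in stats.items()}

print(observed_rates(human_policy))  # B gets tried, so its advantage shows up in the data
print(observed_rates(ai_policy))     # B is never tried: no data exists to correct course
```

With human variation, treatment B accumulates enough observations for its higher recovery rate to become visible; with the uniform AI policy, B’s sample size is zero, so the error in the guideline is undetectable from outcome data alone.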

You may be tempted to think that this problem could be solved by constantly updating an AI model with the latest clinical trials and guidelines. However, this still does not address the key issue at hand: the underlying data, and its method of production, are inherently biased. As long as our medical trials exist within medical journals and receive funding from industry, the majority of data produced will be skewed, which in turn will only lull us into a false sense of security that our AI-doctors are practicing the best possible medicine. In other words, as long as we keep using skewed data, introducing AI-doctors will simply force us deeper into the bias.

Human doctors, on the other hand, have two distinct advantages that allow us to escape an incorrect medical paradigm. First, humans are imperfect. This allows for variation in outputs between doctors, and therefore lets us see how different approaches affect patient outcomes. This variation introduces the possibility that, by sheer chance, we will notice that a given approach to an illness is suboptimal (or perhaps outright damaging). Second, humans are self-aware. We are able to introspect and understand why we make the decisions we do. In other words, humans have the potential to tell whether the reasoning behind their decisions is faulty or biased. Both of these key qualities allow human doctors to correct their course, while their absence guarantees that AI-doctors will lock us in place.

What The Future May Hold

Many people state that the purpose of AI in medicine is not to replace doctors, but rather to work with doctors to improve healthcare outcomes. Current applications of AI in medicine focus on helping doctors with administrative tasks, such as documenting patient encounters, billing insurers, routing messages, evaluating quality metrics, and so on. These applications seem relatively helpful; it would be nice to actually make eye contact with my doctor, instead of staring at the back of their head as they type away on their computer. Additionally, decreasing these unfulfilling tasks would reduce burnout amongst clinicians, improve lifestyle, and improve access to healthcare by increasing the number of patients a given doctor can attend to. It’s easy to see how AI could be a tremendous benefit to the medical field.

Although the aforementioned administrative applications of AI are indeed helpful, there’s no reason for development to stop there. In order for AI companies to assist doctors in these tasks, they will need to be integrated into clinicians’ electronic medical record systems. This will allow AI companies to simultaneously collect data on patient presentations, treatments, and outcomes. In this way, they will generate the gigantic datasets necessary to train AI models to replicate human doctors’ decisions. It’s no secret that hospitals’ biggest cost is salaries and benefits for their employees. Therefore, it seems logical that if a hospital could reduce the number of doctors needed on staff, it would. In other words, there is significant financial pressure incentivizing employers to favor an AI-doctor over a human doctor.

There does exist the possibility that, eventually, the conflicts of interest and barriers to entry are removed from medical research. In that case, one could imagine a world where an AI-doctor is constantly improving and practicing medicine perfectly. Although this would be a best case scenario, I believe that the financial conflicts of interest that currently exist in medical literature would only be exacerbated by the rise of potent medical AI. If pharmaceutical and medical device companies are already incentivized to affect the decisions of human doctors (that naturally vary significantly in clinical approaches and outputs), imagine how much larger the incentive would be when outputs are essentially uniform due to AI. In other words, the return on investment industry players would receive from pouring money into biased research papers and medical journals would be much greater with the rise of AI-doctors than it is now. Therefore, it seems that as AI becomes better and quickly moves towards approximating doctors, it becomes less likely that we will fix the issues with medical research.

Once this occurs, rising financial pressures will force hospitals and employers to cut costs, thereby forming a dichotomy amongst physicians: many doctors who practice “by-the-book” medicine will be unable to offer anything unique to patients, while doctors who can think independently and approach clinical situations uniquely will maintain utility for their patients. Unfortunately, I believe the vast majority of doctors fall into the former category. Of these doctors, only the most senior will retain positions in hospitals, supervising the AI-doctors’ actions. The rest may find themselves unable to compete with AI-doctors. The small number of physicians who produce unique outputs may be able to maintain small patient panels. However, these panels will likely be restricted to wealthier patients who can afford to pay for an expensive human doctor.

Obviously, I can’t predict exactly what will happen in the future. For all I know, maybe AI will save medicine. However, I see many worrying signs on the road ahead. Money is being poured into AI solutions with eyes fixed on the possibilities of quick, affordable, and accurate healthcare. Meanwhile, we continue to turn a blind eye to the root causes of the ailments that plague medicine today. We are at the forefront of a new world in medicine, and it is still up to us to decide what the future will look like. Will AI doom us to perpetuate our mistakes for eternity, or will we realize where we are headed before it’s too late? No one knows the future, but we can all strive to pave a better path for those who come after us.



Recent medical school graduate, interested in medical reasoning