AI in a clinical setting: Will AI replace my doctor?

Jennifer Qiu
QMIND Technology Review
4 min read · Mar 11, 2024

From predicting Alzheimer’s disease to detecting brain tumours on MRIs and screening for cancer, these days it seems like AI can do just about anything. It has the potential to revolutionize healthcare by speeding up diagnosis times, skyrocketing efficiency and saving money. Despite this, AI applications in a clinical setting are still rare. Why?

The main role of AI in healthcare today is to support healthcare workers by improving diagnoses and clinical care, not to replace them.

There are still significant issues with AI in healthcare, such as:

  • Translating technologies from testing to actual clinical practice
  • Consistency
  • Lack of AI knowledge in healthcare
  • Societal and training data bias
  • The ever-changing nature of technology

Real-life Data

Many technologies that perform well in field tests can’t be applied to real-life clinical practice. For example, this journal article details how improving accuracy in AI-based clinical support tools (CSTs) does not always improve performance, which suggests that clinicians’ and patients’ perceptions of these tools ultimately influence how effective they are. Worse, AI tools have also been shown to make harmful treatment suggestions for conditions such as cancer or pneumonia. Most studies train and test their algorithms on carefully labelled data. These studies are important, but they leave algorithms open to major performance issues once they face real-life data. More studies need to be conducted with the data these tools will actually encounter: data from the real world.
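To make that study-versus-clinic gap concrete, here is a minimal sketch of external validation: a model trained and tested on a curated, labelled study dataset, then re-scored on messier real-world data. The file names, feature columns and model choice are illustrative assumptions, not taken from any specific study.

```python
# Hypothetical sketch: comparing a model's performance on its curated
# internal test set versus an external, real-world dataset.
# File names, column names and the model choice are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

internal = pd.read_csv("internal_labelled_study.csv")   # curated, labelled study data
real_world = pd.read_csv("real_world_clinic.csv")       # messier data from routine care

features = ["age", "blood_pressure", "cholesterol"]     # assumed numeric feature columns
X_train, X_test, y_train, y_test = train_test_split(
    internal[features], internal["diagnosis"], test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A large gap between these two numbers is one sign of the
# "works in the study, struggles in the clinic" problem.
print("Internal test AUC :", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("Real-world AUC    :", roc_auc_score(
    real_world["diagnosis"], model.predict_proba(real_world[features])[:, 1]
))
```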

Consistency is Key

You’ve probably heard the phrase “consistency is key”. This applies to many areas, especially technology. Algorithms evaluated on different test datasets are difficult to compare, so clinicians have a hard time knowing which ones are best to use in their practice. One possible solution is for healthcare providers to build their own test datasets and use them to compare the algorithms they are considering. Large, openly available datasets would help even more: many algorithms could be tested on the same data, which creates consistency and makes comparisons easier.
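As a rough illustration, here is a minimal sketch of how a provider might score several candidate algorithms on one shared, locally held test set. The dataset, column names and candidate models are assumptions made for the sake of the example.

```python
# Hypothetical sketch: a hospital evaluating several candidate algorithms on
# one shared, locally curated test set so the comparison is apples-to-apples.
# The dataset and the candidate models here are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("hospital_benchmark.csv")            # the provider's own benchmark data
X, y = data.drop(columns=["outcome"]), data["outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Every candidate is scored on exactly the same held-out set,
# which makes the resulting numbers directly comparable.
for name, model in candidates.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```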

The Importance of Education

There is also the fact that doctors go to medical school to learn medicine, so their training does not include AI or algorithm knowledge. Companies aim to create powerful, helpful technologies to improve healthcare, but that is irrelevant if their clients can’t understand how they work. Developers and software engineers who build AI that works as intended should also take the time to explain it and train its users so it is actually usable. Without this, even the most groundbreaking technologies are useless, no matter how smart the users are.

Bias

Bias is a big issue in machine learning. We know AI can help bridge inequities, but without supervision and proper data cleansing (fixing data that’s wrong, biased, or incomplete), AI can deepen inequities and cause serious ethical issues. Blind spots in machine learning contribute to these problems, and the resulting biases tend to disadvantage minorities or groups that already face inequities because of race, gender and more.

For example, algorithms that accurately classify malignant moles perform better on fair-skinned patients than on patients with deeper skin tones. Another example is heart attack prediction: a model that accurately predicts heart attack risk in men will very likely be less accurate for women, because of factors like under-representation in the data and differing symptom presentation. An AI solution intended to improve predictive care therefore creates an additional health disparity and inequity. Proper awareness and training can help teams recognize and prevent these biases. Having healthcare and bias experts work alongside developers and software engineers during design and development (sometimes called human-ML augmentation) also helps ensure biases are caught and eliminated before testing and implementation begin. Analysis of training and test sets, along with data cleansing, should be applied throughout development, testing and implementation. Techniques such as SHapley Additive exPlanations (SHAP) and the Synthetic Minority Oversampling Technique (SMOTE) can be used to mitigate and address biases. This is crucial, as many societal biases are amplified through technology.
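For the curious, here is a minimal sketch of one way SMOTE and SHAP are often combined in practice: SMOTE rebalances under-represented cases in the training data, and SHAP shows which features drive each prediction, which can help surface problematic patterns. It assumes the imbalanced-learn and shap packages and a hypothetical heart attack dataset; the file and column names are made up for illustration.

```python
# Hypothetical sketch: SMOTE to rebalance under-represented cases in the
# training data, then SHAP to inspect which features drive the predictions.
# Requires imbalanced-learn and shap; the dataset and columns are assumptions.
import pandas as pd
import shap
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("heart_attack_training_data.csv")    # assumed dataset
X, y = data.drop(columns=["had_heart_attack"]), data["had_heart_attack"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# SMOTE synthesizes new minority-class examples so the model isn't trained
# almost exclusively on the majority class.
X_balanced, y_balanced = SMOTE(random_state=0).fit_resample(X_train, y_train)

model = RandomForestClassifier(random_state=0).fit(X_balanced, y_balanced)

# SHAP values show how much each feature (e.g. sex, age, symptoms) pushes an
# individual prediction up or down, which helps reveal whether the model is
# leaning on problematic patterns.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Depending on the shap version, shap_values may be a list (one array per
# class) or a single array; summary_plot handles the common cases.
shap.summary_plot(shap_values, X_test)
```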

Quickly Changing Technology

Technology advances very quickly, which leads to a constant stream of new developments and discoveries. The concern with AI and algorithms in healthcare is that implementation in hospitals and clinical settings is usually universal (for consistency), and this takes not only time to roll out but also additional time for training, use and adaptation, which is costly. Because of the nature of the tech world, companies can quickly become obsolete, and many technologies become obsolete with them. As a result, it’s difficult for hospitals and clinics to integrate newer tools, since they aren’t yet well established and haven’t built up clinicians’ trust.

In conclusion…

AI and algorithms are undeniably revolutionizing the healthcare and clinical world. The current consensus is that it’s not realistic for this technology to replace our healthcare professionals. AI can do amazing things, but right now it won’t be replacing your doctor (phew). Instead, it’s being used to improve treatment outcomes, help healthcare professionals diagnose more accurately, and save time and money. There’s no doubt that the major issues with AI in healthcare will persist, and that people around the world will keep working on them. What do you think? Do you trust AI and algorithms to be responsible for your health? With how fast AI is developing, there’s a lot of speculation about what healthcare and clinical practice will look like in 20 years…
