AI in Healthcare: Exploring the Impacts and Implications of Artificial Intelligence & Medical Technology

Melanie Kuo
8 min read · Mar 9, 2023


By Ryan Nguyen, Pooja Thorali, Melanie Kuo, and Kassidy Gardner


It seems that the future of technology in healthcare has finally arrived, in the form of artificial intelligence. The term "artificial intelligence," or AI, was coined at a Dartmouth College conference in 1956, and the field has since made vast advances in medicine through machine learning algorithms (Yu et al., 2018). Today, AI tools are used for a wide variety of purposes in medicine, including diagnosing individuals, recommending treatments, and predicting which populations may be at risk for specific diseases (Davenport & Kalakota, 2019). While AI may seem like a helpful tool for healthcare providers and patients alike, these technologies carry several biases that users should be aware of. For instance, patients belonging to certain racial groups face a higher risk of being underdiagnosed, and therefore of suffering the consequences of untreated disease, than other populations (Seyyed-Kalantari et al., 2021). Many of these issues, like underdiagnosis, are rooted in skewed databases that lack diversity: because AI models are trained on real-world data, integrating them into healthcare can reproduce and amplify the biases already embedded in society (Panch et al., 2019).
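
To make the underdiagnosis problem concrete, the sketch below shows one simple way such a bias can be surfaced: compare the rate at which a model misses true disease cases across patient groups. This is a minimal illustration in the spirit of audits like Seyyed-Kalantari et al. (2021), not their actual method or data; the group names and predictions here are invented.

```python
# A minimal sketch of an underdiagnosis audit: compare false negative rates
# (disease present, but the model said healthy) across patient groups.
# All groups and predictions below are hypothetical, for illustration only.
from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples,
    where label 1 means 'disease present'."""
    misses = defaultdict(int)     # true cases the model missed, per group
    positives = defaultdict(int)  # all true disease cases, per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives if positives[g]}

# Hypothetical model outputs: a gap like this would flag underdiagnosis bias.
sample = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]
print(false_negative_rate_by_group(sample))
# {'group_a': 0.25, 'group_b': 0.75}: group_b's cases are missed far more often
```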

If these issues in healthcare AI, which often fall along lines of sex, race, and socioeconomic status, are not addressed, members of underrepresented groups could experience poorer healthcare treatment and outcomes. Our goal for this project is to bring awareness to this issue and to inform both potential patients and those interested in working in AI healthcare about these risks, so that changes around these key issues can be better implemented.


Our final project, a site called “AIHealthcare,” aims to address the issue of biases in medical technology and AI, and to provide a comprehensive and interactive platform for education and discussion. Biases in these systems can have significant impacts on patients, healthcare providers, and society as a whole. They can lead to inaccurate diagnoses, improper treatments, and unequal access to healthcare services, which can ultimately result in negative health outcomes and disparities in healthcare. Our motivation for this project is to raise awareness about these biases and their effects, and to encourage individuals to think critically about the role of technology in healthcare. We believe that by providing a platform for education and discussion, we can inspire individuals to work towards creating more equitable and unbiased healthcare systems.

Our project builds on related work in healthcare and AI by focusing specifically on biases in medical technology and AI. The advent of AI technologies like ChatGPT has opened up several possibilities for improving healthcare delivery. One such possibility is the simplification of medical reports for better patient comprehension. However, despite ChatGPT's high accuracy in simplifying medical reports, a study by Jeblick et al. (2022) found exceptions in which the simplified reports contained imprecise information, misinterpreted the original report, or excluded critical details. Deployed at scale, these exceptions could inadvertently harm a considerable proportion of patients. Because the study focused on radiology reports, it is crucial to investigate how AI could affect the healthcare industry more broadly, to alert the groups that may be affected by widespread deployment of AI healthcare models, and, given the hype around AI, to raise awareness of and discuss the potential downsides of these platforms.

Another relevant work in the space of AI and algorithmic bias in healthcare is a Harvard article by Katherine J. Igoe, which discusses how algorithms in healthcare technology can exacerbate existing inequities in socioeconomic status, race, ethnicity, religion, gender, disability, and sexual orientation. This work also matters because it describes how data science teams can prevent and mitigate algorithmic biases in healthcare, and what steps might be taken to address them, which is one of the areas our project covers. Igoe states that "data science teams should include professionals from a diversity of backgrounds and perspectives" (Igoe, 2021), and argues that clinicians should be part of these teams to provide an understanding of the clinical context. The article also suggests that disclosing the inner workings of an algorithm's inputs, outputs, and parameters would help clinicians and lawmakers make informed decisions based on AI algorithmic results.

While there have been previous efforts to explore the use of AI in healthcare and its potential benefits and risks, our project takes a deeper dive into the biases that can be inherent in these systems. By highlighting specific cases of bias and proposing potential solutions, we aim to contribute to the ongoing discussion and efforts towards creating more equitable and unbiased healthcare systems.
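
As one example of the kind of solution discussed in this space, data science teams sometimes reweight training examples so that an underrepresented group is not drowned out by the majority. The sketch below is a minimal illustration of that general idea, with hypothetical group names and counts; it is not code from Igoe's article or from any specific health system.

```python
# A minimal sketch of one common mitigation: weight each training example
# inversely to its group's size so every group carries equal total weight.
# Group labels and counts here are hypothetical.
from collections import Counter

def balanced_sample_weights(groups):
    """groups: one group label per training example.
    Returns one weight per example; each group's weights sum to the same total."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

groups = ["majority"] * 9 + ["minority"] * 1
weights = balanced_sample_weights(groups)
print(weights)
# Each majority example gets weight ~0.56 and the lone minority example 5.0,
# so both groups contribute equally when the weighted loss is summed.
```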


The website we have developed is designed to be interactive and engaging, using eye-catching visuals and interactive design to make it accessible and easy to understand for all audiences. It includes several panels covering different topics related to biases in medical technology and AI, including definitions of bias, types of biases, specific cases of bias, and proposed solutions for reducing bias in these systems. The website also features a quiz section that allows users to test their knowledge and reinforce the concepts presented throughout the site. This quiz section is designed to be both educational and fun, encouraging users to engage more deeply with the material. Overall, our project aims to educate and engage individuals in a discussion about biases in medical technology and AI, and to encourage the development of more equitable and unbiased healthcare systems.
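
For readers curious about the mechanics, a quiz like ours boils down to a small data structure plus a scoring function. The sketch below is purely illustrative; the question text and field names are hypothetical, not our site's actual content or implementation.

```python
# A minimal, hypothetical sketch of how quiz questions can be stored and
# scored; not the actual implementation behind our site.
quiz = [
    {
        "question": "Which of these can introduce bias into a medical AI model?",
        "options": ["Unrepresentative training data", "Random seeds", "Faster GPUs"],
        "answer": 0,  # index of the correct option
    },
]

def score(quiz, responses):
    """responses: one chosen option index per question, in order."""
    return sum(1 for q, r in zip(quiz, responses) if q["answer"] == r)

print(score(quiz, [0]))  # prints 1: one correct answer out of one question
```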

Our initial plan for the website's impact was to spread awareness of medical AI biases and educate anyone affected by healthcare technologies. Overall, we believe we have achieved this: we have provided an easily accessible resource where both future leaders in healthcare AI and patients affected by AI medical predictions can learn what biases in medical AI look like, understand their impacts, and explore potential solutions. We also hope to achieve an even larger impact by expanding on these key ideas in the future. One way to do so is to broaden our audience by encouraging medical AI developers to implement some of the suggested solutions, thereby improving the accuracy of medical treatment, and by getting more of these biases recognized in existing systems. Our website offers only a small overview of some of the biases in healthcare, and any large-scale or long-term impact requires much more than awareness of the problem; much more needs to be done in the creation, implementation, and testing of these systems to address these biases. If we were to continue this project, it would be interesting to keep exploring what the future of AI in medicine will look like and how things could be different, as well as the ethical and privacy concerns associated with AI in healthcare.


Overall, the main issue to consider when assessing the impact of AI in healthcare is that the data used to train algorithms and AI models is often unbalanced, resulting in biased outputs. For example, "Health professionals may also misdiagnose and under-recognise mental illness in people of Black ethnicity, resulting in lower referral rates to specialist services and higher rates of involuntary admission through emergency pathways" ("Mistrust of mental health services," n.d.). This happens partly because of bias in medical technology, and it contributes to mistrust of the medical community, discouraging people from seeking treatment or care, particularly in marginalized communities where stigma has already led to fear of seeking care.

The lack of diversity in training databases, along with the environments in which these algorithms are used, lays the foundation of an inherently biased system with large implications if these algorithms were implemented at scale. If AI models and algorithms were used to diagnose people, recommend treatments, or determine people's healthcare risks, the bias inherited from both the environment and the data the model was built on would cause groups of people to receive unequal healthcare. The models would reinforce existing biases and discrimination, subjecting marginalized groups to inadequate care through underdiagnosis, poor treatment recommendations, or inaccurate risk assessments. Our website therefore becomes increasingly important for combating these impacts by informing those involved about the root causes of bias in these systems and providing potential solutions to the larger problem space. Otherwise, underrepresented communities would suffer greatly from unequal and inaccurate healthcare, and power would shift further toward those maintaining these biased systems. By addressing these problems early on, we can advance the field of AI safely and integrate the technology into our daily lives gradually, without massive consequences for marginalized groups.
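
To illustrate how an unbalanced training set can translate into unequal care, the toy simulation below picks a single diagnostic threshold from data dominated by one group, then measures how often true cases in each group are missed. Every number, group name, and distribution here is invented purely for illustration; this is not a model of any real clinical system.

```python
# A toy, entirely hypothetical simulation: a single decision threshold tuned
# on majority-dominated data systematically misses minority cases.
import random

random.seed(0)

def make_cases(group, n, shift):
    """Simulated disease-positive patients whose measured biomarker reads
    lower for one group (e.g., an instrument calibrated on the other)."""
    return [(group, random.gauss(5.0 - shift, 1.0)) for _ in range(n)]

# 900 majority cases and 100 minority cases whose readings skew lower.
cases = make_cases("majority", 900, 0.0) + make_cases("minority", 100, 1.5)

# Choose the threshold that flags 90% of *all* training cases as diseased;
# the majority dominates, so the cutoff reflects their distribution.
values = sorted(v for _, v in cases)
threshold = values[int(0.10 * len(values))]

for group in ("majority", "minority"):
    vals = [v for g, v in cases if g == group]
    missed = sum(v < threshold for v in vals) / len(vals)
    print(f"{group}: missed {missed:.0%} of true cases")
# Prints roughly: majority missed ~6%, minority missed ~45% of true cases.
```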

Take a look at our complete website here!

References

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94

Panch, T., Mattie, H., & Atun, R. (2019, November 24). Artificial intelligence and algorithmic bias: implications for health systems. NCBI. Retrieved February 20, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6875681/

Jeblick, K., Schachtner, B., Dexl, J., Mittermeier, A., Stüber, A. T., Topalis, J., Weber, T., Wesp, P., Sabel, B., Ricke, J., & Ingrisch, M. (2022, December 30). ChatGPT makes medicine easy to swallow: An exploratory case study on simplified radiology reports. arXiv.org. Retrieved February 3, 2023, from https://arxiv.org/abs/2212.14882

Igoe, K. J. (2021, March 12). Algorithmic Bias in Health Care Exacerbates Social Inequities — How to Prevent It. Harvard T.H. Chan School of Public Health. Retrieved February 20, 2023, from https://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/

Mistrust of mental health services: ethnicity, hospital admission and unfair treatment. (n.d.). NCBI. Retrieved February 24, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6998458/

Seyyed-Kalantari, L., Zhang, H., McDermott, M. B. A., Chen, I. Y., & Ghassemi, M. (2021, December 10). Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature Medicine. Retrieved February 3, 2023, from https://www.nature.com/articles/s41591-021-01595-0

Yu, K.-H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719–731. https://doi.org/10.1038/s41551-018-0305-z
