AI and Ethics in Healthcare

Jerall Yu
Published in Digital Society · Apr 28, 2024
Image by Galina Nelyubova on Unsplash

Can you believe that artificial intelligence is poised to fill nearly a third of the healthcare service gaps in underserved regions by 2030? The rapid progress of AI technologies holds the potential to transform every facet of healthcare, from diagnosis to treatment. This blog post delves into the impact of AI on healthcare in two main areas, increased automation and enhanced predictive modelling, while also addressing some of the ethical dilemmas it raises. Although AI’s integration within certain healthcare systems such as the NHS is currently limited to specific applications, its potential to reshape healthcare practices globally is immense.

Increased automation is crucial because of the global labour crunch in the healthcare sector. In the United States, it is estimated that, despite growing demand, there will be around 3.2 million fewer healthcare workers by 2026 due to job displacement. Digitalisation and automation in various areas of the healthcare sector are therefore vital to ensure its smooth operation. Another way that AI can contribute to the healthcare sector is through predictive modelling, whether through algorithms designed to improve patient outcomes or to detect illness early.

Despite the plethora of benefits that AI and digitalisation can bring, we also have to be mindful of the boundaries and limitations of AI, which largely revolve around ethical dilemmas and whether we can manage them or come up with a fair framework.

Automation in Healthcare

Do you sometimes feel that medical appointments are inefficiently or unfairly managed? AI promises a solution by automating time-consuming administrative tasks in healthcare.

Using AI to improve administrative efficiency in healthcare

Administrative tasks in the healthcare sector include initial assessments of inquiries, filing insurance claims, organising patient records and scheduling appointments. With digitalisation, many of these processes have now been shifted online, onto mobile applications or web systems. In fact, a study found that around 44% of administrative tasks carried out by staff in general practice can be almost fully automated. One specific benefit mentioned was that digitalisation can significantly reduce human errors in the data entry of patients’ particulars. By reducing the administrative burden on staff, there can be a greater focus on direct patient care, ensuring that more patients get the timely treatment they need.
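
To make this concrete, here is a minimal sketch, assuming a made-up patient-record schema, of how even simple automated validation can catch the kind of data-entry errors mentioned above before they spread to other systems. The field names and rules are illustrative, not any real NHS specification.

```python
# Illustrative sketch only: a hypothetical validator for patient record entries,
# showing how basic automation can flag data-entry errors. The schema and rules
# are invented for this example and do not reflect any real NHS system.
from dataclasses import dataclass
from datetime import date


@dataclass
class PatientRecord:
    nhs_number: str      # expected to be 10 digits
    full_name: str
    date_of_birth: date


def validate_record(record: PatientRecord) -> list[str]:
    """Return a list of human-readable problems; an empty list means the entry passes."""
    problems = []
    digits = record.nhs_number.replace(" ", "")
    if not (digits.isdigit() and len(digits) == 10):
        problems.append("NHS number must be 10 digits")
    if not record.full_name.strip():
        problems.append("Name is missing")
    if record.date_of_birth > date.today():
        problems.append("Date of birth is in the future")
    return problems


# A typo in the NHS number and an impossible birth date are caught automatically.
entry = PatientRecord(nhs_number="943 476 571", full_name="Jane Doe",
                      date_of_birth=date(2031, 1, 5))
print(validate_record(entry))
```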

An example of this is the NHS App, an all-in-one platform that enables people to view their personal health information more clearly and enables GPs to better access patient backgrounds. These online tools complement existing services, empowering individuals and streamlining access to more efficient health services. Adoption has been fast, and the NHS aims to have 75% of the adult population registered for the app by the end of 2024.

STAR Robosurgeon

Other than automating routine tasks, AI is expanding its role into more complex medical procedures. One notable example of AI automation is the Smart Tissue Autonomous Robot (STAR), which can perform intestinal surgery with minimal human intervention. Pre-clinical models have shown that STAR performs with fewer errors and more consistency than experienced surgeons in soft tissue surgeries.

Predictive Modelling and Personalised Medication

Another key application of AI in healthcare is predictive modelling, especially in urgent health crises. During the COVID-19 pandemic, the National COVID-19 Chest Imaging Database (NCCID), set up by the NHS, used AI to analyse a vast array of imaging data. Insights drawn from AI-enabled analyses of MRI and CT scans have significantly improved patient outcomes by enabling earlier and more accurate diagnoses of COVID-19 complications.

Image by Cambridge University Hospitals
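
To give a sense of how such imaging models work in principle, the sketch below defines a toy convolutional network that maps a single chest image to a probability of a COVID-19-related finding. This is not the NCCID pipeline; the architecture, input size and labels are assumptions made purely for illustration.

```python
# A toy sketch (not the NCCID pipeline): a small convolutional network that maps
# a single-channel 128x128 chest image to a probability of a COVID-19-related
# finding. Architecture, image size and labels are illustrative assumptions.
import torch
import torch.nn as nn


class TinyChestCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 32 * 32, 1)  # assumes 128x128 inputs

    def forward(self, x):
        x = self.features(x)
        return torch.sigmoid(self.classifier(x.flatten(start_dim=1)))


# A dummy batch of four random "scans" stands in for real imaging data.
model = TinyChestCNN()
scans = torch.randn(4, 1, 128, 128)
print(model(scans))  # one probability per scan, between 0 and 1
```

A real system would be trained on many thousands of labelled scans and validated against radiologists’ reads before informing any diagnosis; the sketch only shows the shape of the approach.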

Another form of predictive modelling is using generative AI to assist with creating personalised medications. To customise medicines to each patient’s unique genetic profile, AI systems can evaluate enormous quantities of patient information, including genetic data. This method can reduce the likelihood of adverse medication reactions while simultaneously improving therapeutic efficacy. For example, physicians might select the most effective treatment plan with minimal side effects after using AI-driven systems to anticipate a patient’s response to chemotherapy medications based on genetic markers. AI-enabled personalised medicine can also go beyond treatment to preventative care, allowing for early interventions based on individual risk factors. This has the potential to change the healthcare system from one that is reactive to one that is proactive.
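
As a hedged sketch of this idea, the example below trains a standard classifier on entirely synthetic “genetic marker” features with a made-up “responded to treatment” label, then asks for the response probability of a new patient. The data, features and labels are all invented; only the shape of the predictive-modelling workflow is the point.

```python
# Illustrative sketch, not a clinical tool: a random forest trained on synthetic
# "genetic marker" features to predict response to a chemotherapy regimen.
# Every number here is randomly generated for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_markers = 500, 20
X = rng.normal(size=(n_patients, n_markers))   # stand-in genetic markers
# Synthetic "responded to treatment" label driven by two of the markers.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# In principle, a predicted response probability could inform which regimen to try first.
print("held-out accuracy:", model.score(X_test, y_test))
print("response probability for one new patient:", model.predict_proba(X_test[:1])[0, 1])
```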

For example, the biotech company Insilico Medicine employs AI to accelerate drug discovery, particularly for difficult-to-treat conditions like idiopathic pulmonary fibrosis (IPF), a serious lung illness with few effective treatments. Generative AI offers a novel way of drastically shortening the protracted conventional drug development process, providing faster and more affordable routes to new medicines.

Ethical Concerns

Despite the promising benefits of AI in healthcare, it brings forth substantial ethical challenges. How do we protect patient privacy when AI systems process vast amounts of personal data? Can we ensure that AI algorithms are free from bias, or will they perpetuate existing disparities in healthcare access? These are critical questions that need addressing to harness AI’s full potential responsibly.

As AI systems depend on vast volumes of data to operate, there is often a chance that private data may be accessed or misused. Patients are entitled to control over who can access and use their personal data. Nevertheless, marketers frequently use patient medical records for their own commercial gain, targeting susceptible individuals with inappropriate or deceptive ads for drugs, therapies or other healthcare supplies. Moreover, there is the difficulty of understanding, validating and debugging the AI model itself. Doctors may use AI models to predict a patient’s future health problems or outcomes, but may not be able to explain to the patient how the model reached its conclusion. This is commonly referred to as the “black box problem”.

One possible response to the ethical concerns of growing AI usage is stronger regulatory frameworks. Such rules could better guarantee the effectiveness, safety and fairness of AI systems that might otherwise endanger patients. For example, the European Union’s General Data Protection Regulation (GDPR) offers rules that could be adapted for AI, with a focus on patient data protection, accountability and transparency. Applied to the healthcare sector, extensive testing and validation of AI tools before their use in clinical settings could ensure more responsible adoption. Another growing area of research targeting the black box issue is explainable artificial intelligence (XAI), a family of techniques for explaining how algorithms reach their outputs, thereby increasing confidence in using AI in the healthcare sector.
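
To give a flavour of what XAI can look like in practice, the sketch below uses permutation importance, a model-agnostic technique that measures how much shuffling each input feature degrades a trained model's performance, to rank the features an otherwise opaque model relies on. The dataset and feature names are invented for illustration.

```python
# Illustrative XAI sketch: permutation importance ranks input features by how much
# randomly shuffling each one hurts a trained model's held-out performance,
# giving a model-agnostic peek inside a "black box". The data is synthetic and
# the feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.3, size=400) > 0).astype(int)
feature_names = ["age", "marker_a", "marker_b", "blood_pressure", "bmi"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=1)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>15}: {importance:.3f}")  # higher means the model relied on it more
```

An explanation like this does not fully resolve the black box problem, but it gives clinicians and patients something concrete to interrogate.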

Reflection

Having finished this course on the digital society, I feel that I have gained a much broader understanding of the various aspects of the digital world that shape our daily lives. Specifically, choosing to take a deep dive into AI and ethics has challenged my understanding of the trade-offs and considerations involved in using such technology. Throughout the course, I have honed my ability to critically analyse information. My understanding of the ethical dilemmas presented by AI, such as bias in algorithmic decision-making and privacy concerns, has grown significantly. These insights have encouraged me to think across multiple disciplines, despite primarily focusing on the healthcare sector in this review.

As I am majoring in psychology, developments in the healthcare sector have also inspired me to think about how AI and digitalisation can impact various fields of psychology. For example, seeing how the NHS uses machine learning to analyse COVID-19 data to improve patient outcomes made me wonder whether we can do something similar for mental health conditions. Upon researching, I found that this is in fact consistent with novel psychological intervention methods. For example, internet-based cognitive behavioural therapy (iCBT) is being explored alongside Computerised Adaptive Testing (CAT) to screen and triage people seeking mental health treatment more effectively. Such AI-related methods provide a more efficient and targeted approach by using predictive modelling to assess which aspects of mental health treatment would most benefit the patient.
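
As a toy illustration of the CAT idea, the sketch below runs a tiny adaptive screening loop under a two-parameter logistic item response model: at each step it asks the remaining question that is most informative about the respondent's current estimated severity, then updates that estimate. The item bank, parameters and simulated respondent are all invented for the example.

```python
# Minimal, illustrative computerised adaptive testing (CAT) loop using a
# two-parameter logistic (2PL) item response model. The screening items,
# their parameters and the simulated respondent are all hypothetical.
import numpy as np

# Hypothetical item bank: (discrimination a, difficulty b) for each screening question.
item_bank = np.array([(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5), (1.3, -0.5)])
thetas = np.linspace(-3, 3, 121)        # candidate severity levels
log_likelihood = np.zeros_like(thetas)  # running log-likelihood over the answers so far
asked = []
estimate = 0.0                          # start from an average severity


def probability(theta, a, b):
    """2PL model: chance of endorsing an item at severity level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))


true_theta = 0.7                        # simulated respondent's actual severity
rng = np.random.default_rng(2)

for _ in range(3):                      # ask only the three most informative items
    # Fisher information of each remaining item at the current estimate.
    info = [a**2 * probability(estimate, a, b) * (1 - probability(estimate, a, b))
            if i not in asked else -np.inf
            for i, (a, b) in enumerate(item_bank)]
    item = int(np.argmax(info))
    asked.append(item)
    a, b = item_bank[item]
    answered_yes = rng.random() < probability(true_theta, a, b)  # simulated answer
    p = probability(thetas, a, b)
    log_likelihood += np.log(p) if answered_yes else np.log(1 - p)
    estimate = float(thetas[np.argmax(log_likelihood)])          # maximum-likelihood update

print("items asked:", asked, "| estimated severity:", round(estimate, 2))
```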

I also learnt that it is important to address the ethical concerns of technological developments through careful regulation and close monitoring. Emerging developments such as XAI also provide a promising solution to the black box issue, potentially encouraging greater use of AI in the healthcare sector in the future.
