Your Health is Unfairly Determined by an Algorithm

You just fell off your bike and sense that a bone may be broken. You head to the ER and learn your injury requires surgery. The type of surgical treatment you receive is determined by a computer, and your treatment will be entirely different from that of the person down the hall. This is the era of healthcare technology: in 2023, algorithms are in use at the vast majority of medical centers. As someone who grew up in a rural community with little to no access to physicians, I would often drive up to two hours to see a specialist such as a dermatologist or a surgeon, so the use of AI in healthcare seemed promising to me at first glance. Healthcare algorithms have been touted as a tool for improving patient outcomes and reducing costs, and there is some evidence that they can change the landscape of care. The thought is that by democratizing access to information for providers across the country, these algorithms could help address some of the most pronounced geographic healthcare disparities. Despite the potential upside, research and analysis on the use of algorithms in healthcare have surfaced growing concerns. Contemporary evidence suggests that algorithms carry an inherent flaw: they rest on quantitative metrics that omit the qualitative foundation healthcare is built upon. One practical consequence is that algorithms can worsen the treatment gap for marginalized communities. Through a brief look at the published literature on healthcare algorithms and data, I hope to show how these tools have the potential to exacerbate that gap.
I then contend that if algorithms are used at all, healthcare providers must use them in conjunction with human judgment and expertise, rather than as a replacement for it.

One of my biggest gripes with healthcare algorithms is that they are often built on biased data. If an algorithm is trained primarily on data from white patients, for example, it may not work as well for patients of other races. This can lead to misdiagnosis, incorrect treatment plans, and poor health outcomes for patients from marginalized communities. The piece “What is Algorithmic Bias” explains this well: in this context, “bias” refers to a set of systematic and unfair outcomes that arise from the use of algorithms or automated decision-making systems. These outcomes are shaped by characteristics in the training data such as race, gender, socioeconomic status, and other demographic factors, and they can disproportionately affect certain groups and reinforce existing societal inequalities. The article stresses that a model is only as good as the data used to train it. A personal anecdote illustrates how this can play out in practice. Two years ago, I went to a large medical center in Michigan for orthognathic jaw surgery, was handed an iPad, and was inundated with demographic questions that were later paired with my medical history. I later learned from an attendant that this information was analyzed by a healthcare AI algorithm to determine various factors about my case, which affected the type of provider I saw and, consequently, the type of treatment I received. During my appointment I saw a nurse practitioner first, whereas other patients spoke directly to an academic surgeon. What I quickly learned was that patients on government-based insurance plans, whose cases were consequently deemed non-emergent, did not get to see a supervising physician at their appointment.
That decision was made by the healthcare screening algorithm I completed upon entering the facility. While I eventually saw a surgeon and my surgery went well, the experience highlighted a downside of these touted healthcare algorithms: they directly limited access to the highest quality of care on the basis of a binary questionnaire, which struck me as deeply flawed. What if you had a rare condition that only a specialized provider could recognize? What if I had struggled to answer the algorithm's questions and that had affected my care? This, I believe, is where the potential upside of algorithms gives way to their downfall. Patients from a higher socioeconomic background, with private insurance, would have been seen by a supervising faculty physician rather than a nurse practitioner, and thus would have received a different course of treatment.
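To make the biased-data concern concrete, here is a minimal sketch with entirely synthetic, invented numbers (no real clinical data or real diagnostic rule is being quoted). A one-feature threshold "classifier" is fit on records drawn almost entirely from one group; because the same hypothetical condition presents with a smaller marker elevation in the underrepresented group, the learned threshold misses every case in that group:

```python
# Minimal sketch of bias from unrepresentative training data.
# All marker values, group labels, and the disease itself are invented.

def fit_threshold(samples):
    """Pick the marker threshold that maximizes accuracy on the training data."""
    candidates = sorted({value for value, _ in samples})
    best_t, best_acc = None, -1.0
    for t in candidates:
        acc = sum((value >= t) == sick for value, sick in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# In group A the disease pushes the marker high (8.0+); in group B the same
# disease produces only a modest elevation (2.2-2.3), overlapping group A's
# healthy range. Training data: 40 group-A records, only 4 from group B.
group_a = [(2.0, False), (2.6, False), (8.0, True), (8.5, True)] * 10
group_b = [(1.0, False), (1.2, False), (2.2, True), (2.3, True)]

threshold = fit_threshold(group_a + group_b)

def sensitivity(samples, t):
    """Fraction of sick patients the threshold actually flags."""
    positives = [value for value, sick in samples if sick]
    return sum(value >= t for value in positives) / len(positives)

print(f"learned threshold: {threshold}")                       # 8.0
print(f"sensitivity, group A: {sensitivity(group_a, threshold)}")  # 1.0
print(f"sensitivity, group B: {sensitivity(group_b, threshold)}")  # 0.0
```

The model is 95% accurate overall, which is exactly the trap: the majority group dominates the objective, so a threshold that detects every group-A case and no group-B case still looks excellent on paper.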

Building on this, my experience is well supported by the CDC's extensive research showing that LGBTQ people, communities of color, and those at the lowest socioeconomic level are the subjects of the fewest published research studies, have fewer clinic visits, and have almost three times lower access to care than their counterparts. The issue is that healthcare algorithms may perpetuate these existing health disparities: the data used to train them omits the very patients who need healthcare the most, creating a skewed system.

For example, if an algorithm is designed to prioritize patients with certain insurance types, conditions, or symptoms, it may overlook patients who do not fit that mold and mark them as “less important”. This can be particularly problematic for marginalized patients, who may have different health concerns or symptoms than their counterparts. For ailments such as spindle cell carcinoma, a skin disease, an algorithm needs a wide range of data across ethnicities, because the pathology presents itself differently and uniquely from one patient population to another. The use of healthcare algorithms in this manner, while promising, would therefore require standardized inputs long before any significant implementation, and the algorithms should never operate with heavy independence from human healthcare providers.
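The kind of prioritization rule described above can be sketched in a few lines. This is a hypothetical, deliberately naive rule invented for illustration; the field names, scoring weights, and routing labels do not come from any real system my appointment used:

```python
# A deliberately naive triage rule (hypothetical). Priority depends partly on
# insurance type, so two patients with identical symptoms are routed differently.

HIGH_PRIORITY_SYMPTOMS = {"chest pain", "shortness of breath"}

def triage(patient):
    score = 0
    if patient["insurance"] == "private":
        score += 1  # the biased step: payer status inflates priority
    if HIGH_PRIORITY_SYMPTOMS & set(patient["symptoms"]):
        score += 2
    return "physician" if score >= 3 else "nurse practitioner"

patient_a = {"insurance": "private", "symptoms": ["chest pain"]}
patient_b = {"insurance": "medicaid", "symptoms": ["chest pain"]}

print(triage(patient_a))  # physician
print(triage(patient_b))  # nurse practitioner, despite identical symptoms
```

Nothing about the symptoms differs between the two patients; the entire difference in care path comes from a single non-clinical field. That is the sense in which an unreviewed scoring rule can quietly mark someone “less important”.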

Furthermore, as noted above, algorithms may reinforce systemic biases and discrimination that already exist in the healthcare system. I believe this is best supported by the book “Algorithms of Oppression” by Safiya Umoja Noble. Noble contends that search engines and algorithms alike are not neutral; rather, they reflect the values and biases of their creators and can reinforce existing power structures. While the book is not explicitly about healthcare, much of what Noble covers transfers directly to the looming concerns around healthcare algorithms (e.g., inadvertently favoring white patients over marginalized patients through unconscious bias or historical discrimination). She cites examples such as biased search results for women and people of color, and the ways in which search engines can be used to spread false information and hate speech. The book also highlights the need for greater diversity and inclusivity in the tech industry, and for more critical engagement with how algorithms are designed and used. Overall, “Algorithms of Oppression” offers a thought-provoking critique of the ways technology can perpetuate inequality, and it suggests paths forward for creating more equitable systems.

While much of my focus has been on the potential downsides of these algorithms, I want to avoid being entirely cynical. Without healthcare algorithms, many individuals would have even less access to care, given the national shortage of physicians. I do believe algorithms and technology have the potential to augment a physician's toolset by condensing a wide array of information into a digestible form to be used in conjunction with a physician's brilliant mind. Research suggests that the future of the medical profession will be led by technology, with providers who refuse to adapt pushed to the wayside. It is therefore no shock that healthcare providers will implement AI and algorithms in nearly every medical specialty over the next decade. So how can you use this information to ensure you are getting the most holistic yet fair healthcare?

Get a second opinion: If you have concerns about the decisions being made by an algorithm, don't be afraid to seek a second opinion from another healthcare provider. It's important to get multiple perspectives on your care to avoid being boxed into a category; no one's medical plan should originate that way. Raise your questions and concerns early and often.

Stay informed: Keep up to date with the latest research and news about healthcare algorithms. This will help you understand how they are being used in healthcare and how they may impact your care.

Overall, healthcare algorithms have the potential to be game-changing in democratizing access to information for providers across the country in an array of medical settings. However, improper implementation of these algorithms will exacerbate the treatment gap for marginalized groups. Healthcare providers and technology companies must take steps to address these issues, such as building algorithms on diverse and representative data and regularly auditing them for bias and discrimination. Furthermore, healthcare providers must ensure that algorithms are used in conjunction with human judgment and expertise, rather than as a replacement for it. By taking these steps, we can ensure that healthcare algorithms benefit all patients, regardless of their demographic background.
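One simple form such a regular audit could take is a demographic-parity check: compare how often the algorithm routes each group to a physician, and flag a large gap for human review. The sketch below uses invented field names and synthetic records, and parity gaps are only one of several fairness measures an auditor might choose:

```python
# A minimal bias-audit sketch: compare physician-referral rates across groups.
# Records are synthetic (group label, whether the patient saw a physician).
from collections import defaultdict

def referral_rates(records):
    """Fraction of patients in each group who were referred to a physician."""
    totals, referred = defaultdict(int), defaultdict(int)
    for group, saw_physician in records:
        totals[group] += 1
        referred[group] += saw_physician  # True counts as 1
    return {g: referred[g] / totals[g] for g in totals}

records = [
    ("private", True), ("private", True), ("private", True), ("private", False),
    ("public", True), ("public", False), ("public", False), ("public", False),
]

rates = referral_rates(records)
parity_gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'private': 0.75, 'public': 0.25}
print(f"parity gap: {parity_gap:.2f}")  # 0.50: large enough to warrant review
```

An audit like this does not prove discrimination by itself (groups can differ in clinical need), but a persistent gap is exactly the kind of signal that should send an algorithm back to its designers and to human reviewers.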
