Trusting AI in Healthcare

Somatix · Published in Get A Sense · 4 min read · Mar 15, 2023

The adoption of Artificial Intelligence (AI) in healthcare has risen in recent years, with AI systems used for tasks such as disease diagnosis, drug development, and patient monitoring. However, this use of AI has raised concerns about trust among both patients and healthcare providers. The trustworthiness of AI systems is an important factor in determining their acceptance and success in healthcare, and many technologies seek to address these concerns.

Patients

Patients are concerned about the use of AI in healthcare and its potential impact on their privacy and on their trust in healthcare providers. They worry about the accuracy and reliability of AI systems, and whether they can trust their healthcare providers to use them effectively and responsibly. According to a survey from the Pew Research Center, about 60% of U.S. adults said they would feel uncomfortable if their provider used artificial intelligence tools to diagnose them and recommend treatments in a care setting. Many of these patients are also unconvinced that AI in medicine can improve health outcomes: the same survey finds that only 38% believe that using AI to diagnose disease and recommend treatments would lead to better health outcomes for patients generally.

A Harvard Business Review study offered business school students a free assessment that would diagnose their stress level and recommend a course of action. When students were told a doctor would perform the diagnosis, 40% signed up; when a computer would perform it, only 26% did.

Yet patients do not appear to believe that AI provides inferior care, or that it is more expensive (in fact, AI aims to reduce healthcare costs). The mistrust seems to stem from a belief that AI cannot offer personalized solutions. We all consider ourselves unique individuals, so not every AI recommendation can apply to all of us. Many patients view medical care delivered by AI as standardized, a one-size-fits-all remedy. While there are situations where AI-generated advice does not apply to a particular person, many AI-based solutions use algorithms that tailor their recommendations to each individual.

For example, Somatix’s SafeBeing™ AI wearable is designed to address these trust concerns directly. SafeBeing is a wrist-worn device that monitors patients’ activities of daily living (ADLs) and detects deviations from their normal behavior. It uses machine learning algorithms to analyze the collected data and provide insights into patients’ health and well-being. The device is non-intrusive, requires no input from patients, and does not collect any personally identifiable information.
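Somatix has not published the details of its models, but the general technique the paragraph above describes, learning a personal baseline and flagging deviations from it, can be sketched in a few lines. The feature set below (daily steps, sleep hours, eating gestures) and the 30-day baseline window are illustrative assumptions, not SafeBeing’s actual inputs:

```python
# Minimal sketch of per-user behavioral baselining, NOT Somatix's actual
# algorithm: an IsolationForest learns one user's normal range of daily
# ADL features and flags days that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical 30-day baseline: steps, hours slept, eating gestures per day.
baseline = np.column_stack([
    rng.normal(5000, 500, 30),   # daily step count
    rng.normal(7.5, 0.5, 30),    # hours of sleep
    rng.normal(40, 5, 30),       # eating-related gestures detected
])

# Fit the model on this user's own history so "normal" is personalized.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(baseline)

# Score two new days: one typical, one with a sharp drop in activity.
new_days = np.array([
    [5100, 7.4, 41],   # looks like an ordinary day
    [1200, 11.0, 12],  # much less movement, more sleep, fewer meals
])
flags = model.predict(new_days)  # +1 = normal, -1 = anomalous

for day, flag in zip(new_days, flags):
    status = "deviation from baseline" if flag == -1 else "within normal range"
    print(f"day {day}: {status}")
```

Fitting the model on a single user’s history is what makes “normal” personal rather than one-size-fits-all, which is precisely the property that distinguishes tailored AI care from the standardized care patients say they distrust.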

SafeBeing™ can help build trust in AI-driven care by delivering personalized insights based on each patient’s own health data, using passive monitoring and gesture-based technology that adapts to every user. It can improve patient engagement by surfacing insights into health and well-being and alerting users to potential health issues and risks, and it can reduce patient stress by monitoring health in an unobtrusive, non-invasive way. The device empowers patients with information while allowing them to continue their normal daily routines.

Providers

Providers, such as physicians and other healthcare professionals, have their own concerns about trusting AI in healthcare. Chief among them is the fear that AI technology may be seen as a replacement for human expertise and decision-making, which can breed mistrust and reluctance to adopt it, as providers worry it may devalue their skills. Still, research finds that most providers do not believe AI tools can ever fully replace the human patient-provider relationship.

AI may also introduce errors or biases into the decision-making process. For instance, an algorithm used to predict hospital readmissions was found to have racial bias, as it underestimated the risk of readmission for black patients. Providers may worry that such biases could lead to unfair treatment of certain patient populations.
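To make this concern concrete, here is a minimal sketch of one common way such bias is surfaced in an audit: comparing how often a model misses truly readmitted patients in each demographic group. The labels, predictions, and group assignments below are fabricated purely for illustration, not drawn from any real study:

```python
# Illustrative bias audit, not the methodology of any specific study:
# given a model's readmission predictions, compare false-negative rates
# (patients whose risk the model underestimated) across groups.
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Share of truly readmitted patients the model missed."""
    readmitted = y_true == 1
    return np.mean(y_pred[readmitted] == 0)

# Hypothetical labels and predictions for two patient groups.
y_true = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = group == g
    fnr = false_negative_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false-negative rate = {fnr:.2f}")

# A large gap between groups indicates the model underestimates risk
# for one population more often than the other.
```

Audits like this are one reason providers push for transparency into the models they are asked to trust: a gap in error rates between groups is invisible in an overall accuracy number.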

On the other hand, AI-based technologies can empower providers themselves to make more accurate and timely diagnoses by analyzing large amounts of patient data and identifying patterns that may be difficult for humans to detect. AI can also help providers personalize treatment plans based on individual patient characteristics, leading to better outcomes and improved patient satisfaction.

There are certainly valid concerns surrounding AI when it comes to bias, privacy, and accuracy. However, patients and providers alike may find that AI-based tools like SafeBeing™ offer innovative solutions that mitigate these worries: patients may appreciate the convenience and insights, while providers value the accuracy and reliability of well-designed AI systems. Technologies like SafeBeing™ can also provide personalized care based on patients’ individual health data, leading to improved patient engagement and outcomes. As the technology continues to advance, such solutions are likely to become an increasingly important tool in the delivery of healthcare.
