Examining Willingness to Disclose Remote Patient Monitoring Information to Receive Health Advice from a Doctor or AI

Tamir Mendel
Published in ACM CSCW · Oct 31, 2024

This article is based on the original research paper: “Tamir Mendel, Oded Nov, and Batia Wiesenfeld. 2024. Advice from a Doctor or AI? Understanding Willingness to Disclose Information Through Remote Patient Monitoring to Receive Health Advice. Proc. ACM Hum.-Comput. Interact., 8, CSCW2, Article 386 (November 2024), 34 pages. https://doi.org/10.1145/3686925”

Digital blood pressure monitor. Photo retrieved from Pexels. Photo by Yaroslav Shuraev.

Remote Patient Monitoring (RPM) is a digital technology that records a person’s medical data outside the clinic and transmits it in real time to a healthcare provider. RPM holds great promise for improving the treatment of chronic and dangerous medical conditions such as hypertension. Blood pressure readings that are measured at home offer large quantities of data that reflect patients’ typical daily experiences. However, analyzing and providing advice from the RPM-acquired data may be overwhelming to already-overloaded healthcare providers. Artificial intelligence (AI) systems can help, analyzing patient data and offering automated insights. But is a person’s willingness to disclose RPM-acquired health information influenced by whether a human doctor or an AI system will analyze the data disclosed?

Previous studies found that security risks and trust in AI systems influence AI adoption and use by clinicians and patients. As the trust-risk model suggests, trust and perceived risk shape individuals’ willingness to disclose information. Additionally, people with less severe health problems (who may therefore have less need for medical advice) are less sensitive to privacy concerns when deciding whether to disclose their health information. However, research has not explored how the interplay between people’s privacy and security calculus, their trust in AI systems (versus trust in human doctors) and the severity of their health condition shapes their willingness to disclose RPM-acquired health information.

How did we explore people’s willingness to disclose RPM-acquired health information?

To explore the combined effects of advice source (AI system versus doctor), health severity (high versus normal blood pressure), and security risk (high, modest, or low security risk) on people’s willingness to disclose RPM-acquired health information and their trust in the advice source, we conducted an experiment with 484 participants.

We presented participants with a scenario of blood pressure measurement using an RPM-based health advice system. We randomly assigned participants to conditions in which the advice source was either a human doctor or AI, health severity was normal (healthy) or high blood pressure (high health severity), and security risk was low, modest, or high. Figure 1 presents the scenario with advice from AI, high blood pressure, and low security risk. Participants read the scenario and answered a questionnaire about willingness to disclose RPM-acquired health information (e.g., “I am willing to regularly share my blood pressure records with the [hospital-developed AI system/doctor] through the mobile health app in order to receive a health recommendation”) and trust in the advice source (e.g., “I can always rely on the [artificial intelligence system/doctor] for providing health advice through the mobile health app”).
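The design above is a 2 (advice source) × 2 (health severity) × 3 (security risk) between-subjects experiment. As a minimal illustrative sketch (the factor labels and function are hypothetical, not the authors’ actual study code), random assignment to the 12 resulting conditions could look like this:

```python
import itertools
import random

# Hypothetical factor levels for the 2 x 2 x 3 between-subjects design.
ADVICE_SOURCE = ["doctor", "ai"]
HEALTH_SEVERITY = ["normal_bp", "high_bp"]
SECURITY_RISK = ["low", "modest", "high"]

# All 12 combinations of the three factors.
CONDITIONS = list(itertools.product(ADVICE_SOURCE, HEALTH_SEVERITY, SECURITY_RISK))

def assign_participants(n, seed=0):
    """Randomly assign n participants to one of the 12 conditions."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n)]

assignments = assign_participants(484)
print(len(CONDITIONS))  # 12
```

Each participant sees only one scenario, so differences in disclosure and trust across groups can be attributed to the manipulated factors.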

Figure 1. The scenario with high blood pressure, advice from AI, and low security risk.

What did we find?

Figure 2 shows that participants’ willingness to disclose RPM-acquired health information and trust in the advice source were higher when they believed the advice would come from the doctor than from the AI. Willingness to disclose information and trust in the advice source were higher in the low-security risk condition than in the modest and high-security risk conditions. Willingness to disclose information was significantly higher for those with more severe health conditions, who need the health advice more, than for those who were healthier. Health severity was marginally significant as a predictor of perceived trust in the advice source.

Figure 2. Mean of users’ willingness to disclose (A) and trust (B) as a function of advice source, risk severity and health severity conditions. Error bars represent the standard errors.

We also used mediation analysis to test whether the effect of advice source on participants’ willingness to disclose information was mediated by perceived trust. Interestingly, as Figure 3 shows, at any given level of trust in the advice source, people display algorithmic appreciation rather than algorithmic aversion: they are more willing to disclose health information to the AI than to a doctor.

Figure 3. Relationship of trust and willingness to disclose: The average trust in AI advice (M=3.58, dashed vertical red line) is lower than trust in a doctor (M=5.37, dashed vertical blue line). The positive relationship between trust and willingness to disclose is stronger for AI (solid red line) than a human doctor (solid blue line).
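The mediation logic described above can be sketched with two regressions: advice source predicting trust (path a), and trust plus advice source predicting willingness to disclose (paths b and c′); the indirect effect is a × b. The sketch below uses simulated data with made-up effect sizes, purely to illustrate the procedure; it is not the authors’ analysis or their data:

```python
import numpy as np

# Simulated data (hypothetical effect sizes, for illustration only):
# advice source (0 = AI, 1 = doctor) raises trust, and trust raises
# willingness to disclose, so the effect of source is mediated by trust.
rng = np.random.default_rng(42)
n = 484
source = rng.integers(0, 2, n).astype(float)
trust = 1.8 * source + rng.normal(0, 1, n)
willing = 0.9 * trust + rng.normal(0, 1, n)

def ols_slopes(y, X):
    """Least-squares slope coefficients for y ~ X, with an intercept."""
    X1 = np.column_stack([np.ones_like(y), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols_slopes(trust, source)[0]                          # source -> trust
b, c_prime = ols_slopes(willing, np.column_stack([trust, source]))
indirect = a * b  # mediated effect of source on willingness via trust
print(round(indirect, 2), round(c_prime, 2))
```

In this simulation the indirect effect (a × b) is large while the direct effect (c′) is near zero, i.e., trust fully carries the effect of advice source on willingness to disclose.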

Increasing Willingness to Disclose RPM-Acquired Health Information to Receive Advice from AI

Our findings suggest that increasing trust in AI, perhaps through positive cues from healthcare providers, user training, and AI explainability, may close the gap between a patient’s willingness to disclose RPM-acquired health information to AI versus doctors. Adoption of RPM systems by healthy individuals may be reduced by their lower willingness to disclose their health information, suggesting that different strategies may be needed to penetrate that market segment. Finally, designers and developers of RPM systems should take steps to address security risks, such as providing privacy controls and a secure network, in order to increase patients’ willingness to disclose RPM-acquired health information.

Read the full paper at https://doi.org/10.1145/3686925.

Citation Format:

Tamir Mendel, Oded Nov, and Batia Wiesenfeld. 2024. Advice from a Doctor or AI? Understanding Willingness to Disclose Information Through Remote Patient Monitoring to Receive Health Advice. Proc. ACM Hum.-Comput. Interact., 8, CSCW2, Article 386 (November 2024), 34 pages, https://doi.org/10.1145/3686925
