Factors Influencing Trust in Medical Artificial Intelligence for Healthcare Professionals

AI4HEALTH Article Reviews #02

Sahika Betul Yayli, MD
CodeX
3 min read · Jun 18, 2022

When developing artificial intelligence applications for healthcare, one of the hardest things to predict is how much end-users will actually use them. Even when the feedback from end-users during the development phase is ‘You are developing a product that will really help us!’, usage may not be sustained once the product is released. There are many possible reasons for this, but one of the leading ones is trust.

To facilitate the adoption of AI technology in medical practice settings, significant trust must be established between the AI system and the healthcare professional end-user.

Here is a review I read on this subject, along with the takeaways I consider most important:

✅ There are several concepts established in the discipline of engineering that are well understood to contribute to increased end-user trust in AI systems, including, but not limited to, interpretability, explainability, robustness, transparency, accountability, fairness, and predictability. However, in contrast to the engineering literature, there appears to be a gap in the medical literature regarding exploration of the specific factors that contribute to enhanced trust in medical AI amongst healthcare providers.

✅ Overall, explainability was discussed consistently across 23 of the included articles and was the factor examined most often, suggesting it is one of the most important concepts for trust in medical AI. Transparency, interpretability, reliability, and education follow.

✅ Among the articles that solely explored AI trust concepts quantitatively, education and usability were discussed most often. Other top contributory factors to enhanced end-user trust in medical AI identified quantitatively were explainability, privacy, GP involvement in tool design and dissemination, and perceived usefulness.

✅ The authors identify the need to better evaluate and incorporate other important factors to promote trust, and to consult the perspectives of medical professionals when developing AI systems for clinical decision-making and diagnostic support.

Trust Factors

Nine trust concepts were consistently identified through both qualitative and quantitative methodologies, though they were more frequently analyzed qualitatively. Percentages indicate the share of the total included articles that discussed each factor (see the note after these lists):

  • Complexity (5.3%)
  • Accuracy (5.3%)
  • Continuous updating of evidence base (7.0%)
  • Fairness (8.8%)
  • Reliability (10.5%)
  • Education (10.5%)
  • Interpretability (14.0%)
  • Transparency (28.1%)
  • Explainability (45.6%)

The sixteen trust factors that were only analyzed qualitatively included:

  • Data representativeness (1.8%)
  • Standardized performance reporting label inclusion (1.8%)
  • Fidelity (1.8%)
  • Ethicality (1.8%)
  • Lawfulness (1.8%)
  • Data discoverability and accessibility (1.8%)
  • Compliance (1.8%)
  • Knowledge representation (1.8%)
  • Computational reliability (1.8%)
  • Relevance/insight (1.8%)
  • Consistency (3.5%)
  • Causability (3.5%)
  • Predictability (5.3%)
  • Dependability/competence (5.3%)
  • Validation (7.0%)
  • Robustness (7.0%)

The additional ten factors that were analyzed only through quantitative methods included:

  • Availability (1.8%)
  • Effort expectancy (1.8%)
  • Endorsement by other general practitioners (GPs) (1.8%)
  • AI agreement with physician suspicions (1.8%)
  • Information security (3.5%)
  • Performance expectancy (3.5%)
  • Sensitivity to patient context (3.5%)
  • Alignment with clinical workflow (3.5%)
  • Perceived usefulness (5.3%)
  • GP involvement in tool design and dissemination (5.3%)
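A side note on the numbers: the percentages above are consistent with simple shares of the review’s included articles. Here is a minimal Python sketch (my own illustration, not from the paper) that backs out the implied article counts, assuming a denominator of 57 included articles; that denominator is an assumption inferred from the rounding (e.g. 1/57 ≈ 1.8% and 16/57 ≈ 28.1%), not a figure stated above.

```python
# Minimal sketch (my own illustration, not from the paper): back out the
# implied article counts from the reported percentages. The denominator
# of 57 is an assumption inferred from the rounding (1/57 = 1.8%,
# 16/57 = 28.1%); the review's exact article count is not stated above.
N_ASSUMED = 57

# A few of the factors reported above, with their percentages.
factors = {
    "Explainability": 45.6,
    "Transparency": 28.1,
    "Interpretability": 14.0,
    "Reliability": 10.5,
    "Education": 10.5,
}

for name, pct in factors.items():
    implied_count = round(pct / 100 * N_ASSUMED)  # nearest whole article
    print(f"{name}: {pct}% ~ {implied_count} of {N_ASSUMED} articles")
```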

Thanks to the authors for this valuable article:
Victoria Tucci, Joan Saary, Thomas E. Doyle

📑 Tucci V, Saary J, Doyle TE. Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. J Med Artif Intell 2022;5:4.
