Governance First: Navigating the Spectrum of AI Concerns and Risks in Healthcare
Why Do We Need a Governance-First Framework?
Healthcare and the Health Tech industry are increasingly turning to AI-driven solutions. At the same time, research has begun to expose how AI’s transformative potential is accompanied by notable risks, especially within generative AI applications.
Adopting a “Governance First” AI strategy is critical as the industry catches up with these technological innovations.
Healthcare is a multifaceted system encompassing preventative measures, diagnostic procedures, and treatment interventions to maintain and improve individual and public health. We are using this term to include:
- Traditional medical practices and services provided by healthcare professionals in clinical settings.
- Preventative care strategies designed to promote wellness and reduce the risk of disease, including public health initiatives and population health management programs.
- Diagnostic tools and techniques used to identify health issues and inform treatment decisions.
- Therapeutic interventions ranging from medication and surgical procedures to medical devices.
- Health technologies and digital systems that facilitate patient engagement, data management, and remote care delivery.
- Communication platforms and processes that enhance information exchange between healthcare providers and patients.
This holistic approach to healthcare aims to optimize patient outcomes, improve quality of life, and enhance health services’ overall efficiency and effectiveness by integrating traditional medical practices with innovative technologies and patient-centric communication strategies.
In this article, we identify some of the intricacies of addressing AI-related errors, ranging from minor inaccuracies to widespread systemic issues, and advocate for a robust, comprehensive governance framework designed for the healthcare industry.
The Spectrum of AI Concerns: Understanding the Landscape
Before exploring governance strategies, it’s essential to understand the spectrum of AI concerns that can arise in healthcare applications. These can range from localized errors — such as factual inaccuracies due to incorrect data — to metanarrative inconsistencies, where systemic biases or flawed logic can lead to broader issues in patient care.
AI concerns in healthcare can be categorized along a spectrum that includes biases, risks, and model limitations. These elements are often intertwined, affecting both the technical and sociotechnical aspects of AI deployment:
- Bias: A stereotype or disproportionate performance skew for some subpopulations, often resulting from historical data that reflect societal inequalities or data representing historical medical training practices that are not evidence-based.
- Risk: A socially relevant issue that the model might cause, such as exacerbating health disparities or undermining trust in medical institutions.
- Limitation: A likely failure mode that can be addressed by following recommended mitigations, often tied to the model’s technical constraints or the specialization of its use case.
AI errors must be carefully managed, particularly in healthcare, where their consequences are not just theoretical: they can directly affect patient safety, equity, and the overall quality of care.
A thorough understanding of these concepts is essential for any sociotechnical professional — a strategist skilled at analyzing the long-term interaction of technology and society. This sociotechnical role is increasingly crucial in AI governance.
The Five Layers of AI Risk in Healthcare
To effectively address the spectrum of AI errors, it’s helpful to categorize them within the broader framework of AI risks. Matt Konwiser’s five-layer Model of AI Risk provides a comprehensive way to understand and mitigate these risks within healthcare.
Latent Risks
- Model Collapse: The degradation of AI model performance that occurs when one model is trained on another model’s outputs. The resulting compounded inaccuracies are especially dangerous in predictive diagnostics.
- Discrimination in Healthcare Outcomes: Biases in training data can lead to discriminatory outputs, such as unequal treatment recommendations across different patient demographics, which can compound over time.
- Armageddon Decisions: In critical care, AI might suggest extreme or overly conservative diagnostic decisions or treatment options, potentially endangering patient lives. In large-scale emergencies, AI might default to a crisis management mode and respond with inhumane “who to save” triage decisions.
- Biased Treatment Pathways: AI could inadvertently create or reinforce biased treatment protocols that affect patient outcomes based on race, gender, or socioeconomic status, replicating or deepening current health disparities.
Intrinsic Risks
- Poor Fit for Clinical Use: AI models trained in non-clinical settings or a different disease state may not perform well in actual healthcare environments, particularly with rare or specific subtypes of diseases, leading to misdiagnoses or ineffective treatments.
- Excessive Automation in Patient Care: Over-automation can result in losing the human touch, which is crucial for patient trust and satisfaction.
- Over-reliance on AI in Critical Care: In emergency or emergent care situations, excessive dependence on AI can lead to delayed or inappropriate care decisions.
- Misaligned Diagnostic Outputs: Inaccurate AI interpretations can lead to incorrect diagnoses, affecting patient treatment plans.
Accidental Risks
- Regurgitation of Patient Data: AI systems trained on sensitive patient information may inadvertently expose that data, violating privacy and trust.
- Improper Training Leading to Mismanagement: Training AI on incorrect or incomplete data can result in harmful recommendations or actions, directly affecting patient care.
- Plagiarism: Without proper governance, AI might reproduce content or proprietary data exposed during training, raising intellectual property and data ownership issues in healthcare systems. AI might also cite or link to published research incorrectly.
- Implicit Trust in AI Recommendations: Clinicians might place too much trust in AI outputs, neglecting to verify the accuracy and relevance of the recommendations.
Malicious Risks
- Ablation Attacks on Medical Devices: Malicious actors could reverse anti-bias measures in AI, leading to biased or harmful medical outcomes. AI could be directed to depart from approved dosing recommendations and ignore safety precautions, drug interactions, or age-related therapeutic guidelines.
- Poisoning of Diagnostic Algorithms: Deliberately introducing bad data to tamper with AI diagnostics can lead to widespread healthcare mismanagement.
- Disinformation in Medical Outputs: AI systems could be manipulated to spread false or misleading medical information, endangering public health.
- Existential Threats to Patient Safety: Sophisticated attacks could expose core AI logic, potentially leading to system-wide failures or manipulations in critical healthcare applications.
Personal Risks
- Overestimation of AI in Healthcare: Healthcare providers might overestimate AI’s capabilities, reducing critical human oversight and introducing multiple human-in-the-loop (HITL) points of failure.
- Isolation in Patient Interaction: AI could replace meaningful human interaction in care settings, negatively impacting patient experience and outcomes. For example, human contact and emotional support are critical factors in outcomes for patients in long-term recovery.
- AI Dependency among Healthcare Providers: Easy access to and reliance on AI tools might lead to dependency, particularly in administrative environments focused on cutting time and costs, stunting the development of clinical judgment and critical thinking.
- Humanization of AI in Care Settings: Patients might begin to perceive AI as more human-like, leading to inappropriate levels of trust or emotional attachment, and to greater skepticism of medical advice from human providers.
Case Study: AI-Assisted Diagnostic Tool “CardiacAssistAI”
To illustrate a “Governance First” approach in healthcare, we walk through a hypothetical AI-assisted diagnostic tool, CardiacAssistAI, designed to give healthcare providers a “second opinion” for a range of cardiovascular diagnoses.
Use-case summary
Primary care doctors can use generative AI tools built on multimodal LLMs (MLLMs) and trained on specific differential diagnoses to enhance diagnostic accuracy. These tools aim to assist healthcare providers in making more informed decisions, especially in complex or rare cases and cases that need quick triage after diagnosis. CardiacAssistAI is designed to offer an AI-assisted “second opinion,” along with initial screening and next-step recommendations, before handing off to cardiology or the appropriate department.
In a real-life appointment, Dr. Smythe reviews the symptoms of their patient, Ms. Johnston, using CardiacAssistAI. The AI suggests considering a rare cardiovascular condition Dr. Smythe had not initially considered. Dr. Smythe explains to Ms. Johnston that they used an AI tool to help guide the diagnosis, emphasizing that it is designed to supplement their medical expertise.
The AI’s interface allows Dr. Smythe to easily show Ms. Johnston how her symptoms align with the suggested diagnosis and what additional tests might be needed by a cardiologist, facilitating a transparent and informative discussion about the next steps. CardiacAssistAI also provides this information more quickly and references more research and evidence-based guidelines than Dr. Smythe could have provided on their own within a single healthcare visit.
In this use case, Dr. Smythe’s use of an AI-assisted diagnostic tool to guide their consultation with Ms. Johnston illustrates the potential benefits of AI in healthcare. However, to fully realize these benefits, the tool’s deployment should be guided by a “Governance First” approach. By overlaying key regulatory and ethical considerations across each layer of AI risk, healthcare providers can ensure that AI tools like this are effective, safe, equitable, and trustworthy.
Here’s what that governance framework might look like:
Integrated AI Governance Framework for Healthcare
Given the spectrum of AI concerns and the potential for these layered risks to undermine healthcare goals, governance cannot be an afterthought; it must be integrated into the AI development process from the beginning. After all, the mantra of “first, do no harm” is central to medical ethics and the mission of quality healthcare.
By following this consolidated framework, healthcare organizations can better anticipate and mitigate AI-related risks, ensuring that AI systems enhance patient care while maintaining safety and effectiveness.
Our proposed “Governance First” framework includes the following steps and considerations:
Establish Multidisciplinary Governance and Identify Project Risks
- Form a diverse AI governance committee that includes healthcare professionals, data scientists, ethicists, sociotechnical experts, patient advocates, and regulatory experts.
- Identify and categorize potential risks across all relevant areas (technical, ethical, legal, socioeconomic, operational) and the five layers of AI risk. For example, consider how model collapse and discrimination might impact patient outcomes when developing an AI diagnostic tool.
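To make this step concrete, here is a minimal sketch of how a governance committee might keep a structured risk register that tags each risk with one of the five layers and a relevant area. The Python schema and the CardiacAssistAI entries are hypothetical illustrations, not a prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLayer(Enum):
    LATENT = "latent"
    INTRINSIC = "intrinsic"
    ACCIDENTAL = "accidental"
    MALICIOUS = "malicious"
    PERSONAL = "personal"

@dataclass
class RiskEntry:
    title: str
    layer: RiskLayer
    area: str        # technical, ethical, legal, socioeconomic, operational
    severity: int    # 1 (low) to 5 (critical)
    mitigation: str

# Hypothetical entries for the CardiacAssistAI example.
register = [
    RiskEntry("Biased treatment pathways", RiskLayer.LATENT, "ethical", 5,
              "Subgroup performance audits before each release"),
    RiskEntry("Poor fit for rare disease subtypes", RiskLayer.INTRINSIC, "technical", 4,
              "Clinical validation on representative cohorts"),
    RiskEntry("Regurgitation of patient data", RiskLayer.ACCIDENTAL, "legal", 5,
              "De-identify training data and test for memorization"),
]

def high_severity(entries, threshold=4):
    """Return entries at or above the severity threshold for committee review."""
    return [e for e in entries if e.severity >= threshold]

for entry in high_severity(register):
    print(f"[{entry.layer.value}/{entry.area}] {entry.title} -> {entry.mitigation}")
```

A register like this lets the committee sort, escalate, and track risks consistently across projects.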
Conduct Readiness, Compliance, and Impact Assessments
- Evaluate how healthcare personally identifiable information (PII) and data handling regulations (e.g., HIPAA, GDPR) impact the project.
- Perform comprehensive impact assessments (risks, benefits, ethical considerations) on patient care, clinical workflows, and potential biases.
- Assess organizational AI readiness and identify potential priority use cases and required resource investments.
- Ensure the AI system’s documentation includes warnings, potential harms, and mitigation strategies for technical and sociotechnical limitations.
Develop Robust Data Governance and Security Policies
- Implement strict data protection measures for patient information, including de-identifying records before they are used for model development (see the sketch after this list).
- Develop transparent patient consent processes that clearly explain the AI’s role in diagnosis.
- Establish clear protocols for data usage, storage, and sharing.
- Create enterprise-wide data definitions and quality control processes.
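As one concrete illustration of the data protection point above, the following sketch pseudonymizes a patient record by dropping direct identifiers and replacing the medical record number with a salted hash. The field names are hypothetical, and salted hashing is pseudonymization rather than full HIPAA de-identification (Safe Harbor enumerates 18 identifier categories; Expert Determination requires a formal re-identification risk analysis):

```python
import hashlib

# Illustrative subset of direct identifiers to drop outright; the full HIPAA
# Safe Harbor list covers 18 identifier categories.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

def pseudonymize_record(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash,
    so records can still be linked across tables without exposing the ID."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    raw_id = str(cleaned.pop("patient_id"))
    cleaned["pseudo_id"] = hashlib.sha256((salt + raw_id).encode()).hexdigest()[:16]
    return cleaned

record = {
    "patient_id": "MRN-004217", "name": "Jane Doe", "phone": "555-0142",
    "age": 58, "systolic_bp": 141, "diagnosis_code": "I48.91",
}
print(pseudonymize_record(record, salt="per-project-secret"))
```

Keeping a linkable pseudo-identifier preserves the ability to join records across tables while keeping raw identifiers out of model pipelines; the salt itself must be protected as a secret.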
Align with Healthcare Objectives
- Ensure AI projects address genuine clinical needs and improve patient outcomes, optimizing patient touchpoints and engagement.
- Validate the effectiveness of AI tools in various healthcare settings.
- Adjust project scope based on regulatory requirements and risk assessments.
- Ensure that AI tools align with clinical best practices and contribute to reducing health disparities.
- Incorporate patient input on symptom reporting to refine the AI’s questioning algorithms.
Implement Bias Mitigation and Ethical Guardrails
- Design processes to identify and address biases in training data and outputs.
- Ensure diverse representation in the development and testing phases.
- Conduct regular equity assessments in AI model design and validation (a minimal subgroup check is sketched after this list).
- Ensure cultural sensitivity and inclusivity in the AI’s language and approach.
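A minimal version of such an equity assessment might compare model accuracy across demographic groups and flag any gap above a governance-approved tolerance. The groups, validation results, and 0.05 tolerance below are hypothetical placeholders:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per demographic group from (group, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparity(scores, max_gap=0.05):
    """Flag when the gap between best- and worst-served groups exceeds max_gap."""
    gap = max(scores.values()) - min(scores.values())
    return gap > max_gap, gap

# Hypothetical validation results for CardiacAssistAI.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]
scores = subgroup_accuracy(results)
flagged, gap = flag_disparity(scores)
print(scores, f"gap={gap:.2f}", "REVIEW REQUIRED" if flagged else "within tolerance")
```

In practice, the same comparison should also cover error types that matter clinically, such as false-negative rates for serious conditions.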
Ensure Transparency and Explainability
- Create user-friendly interfaces for healthcare providers to understand AI reasoning.
- Design the user interface to facilitate healthcare providers’ clear communication of AI findings and recommendations to patients.
- Develop clear documentation on AI diagnostic decision-making processes.
- Create educational materials for patients about the AI tool’s use, risks, and benefits.
- Establish clear, measurable outcomes that account for AI performance, risk mitigation, and sociotechnical considerations. For example, ensure diagnostic AI tools have measures for accuracy and equity in healthcare delivery.
- Provide training to enhance AI and data literacy among stakeholders.
Establish Continuous AI Monitoring and Feedback
- Implement continuous model performance monitoring (a minimal drift-alert sketch follows this list), engage sociotechnical strategists to assess societal impacts, and establish feedback loops with clinical teams.
- Regularly review and update AI models based on new medical knowledge and user feedback.
- Create channels for healthcare providers and patients to provide input on AI system recommendations and outcomes.
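As a sketch of what continuous monitoring could look like in code, the following tracks rolling agreement between AI suggestions and clinician-confirmed diagnoses and raises an alert when agreement falls below a governance-approved floor. The window size, floor, and diagnosis labels are assumptions for illustration:

```python
from collections import deque

class PerformanceMonitor:
    """Track rolling agreement between AI suggestions and confirmed diagnoses,
    alerting when performance drops below a governance-approved floor."""

    def __init__(self, window: int = 200, floor: float = 0.85):
        self.outcomes = deque(maxlen=window)  # True where AI matched clinician
        self.floor = floor

    def record(self, ai_suggestion: str, confirmed_diagnosis: str) -> None:
        self.outcomes.append(ai_suggestion == confirmed_diagnosis)

    def check(self) -> tuple[float, bool]:
        """Return (rolling agreement rate, alert flag)."""
        if not self.outcomes:
            return 1.0, False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate, rate < self.floor

monitor = PerformanceMonitor(window=100, floor=0.85)
for suggestion, confirmed in [("afib", "afib"), ("afib", "flutter"), ("hcm", "hcm")]:
    monitor.record(suggestion, confirmed)
rate, alert = monitor.check()
print(f"agreement={rate:.2f}", "ALERT: escalate to governance committee" if alert else "ok")
```

A production system would segment this tracking by patient subgroup and care setting so that localized degradation is not masked by a healthy overall average.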
Additional Regulatory and Ethical Considerations in the “Governance First” Approach
To ensure that the CardiacAssistAI diagnostic tool and governance approach are effective and aligned with regulatory and ethical standards, it is crucial to overlay specific considerations across each layer of AI risk.
Below is a breakdown of the key regulatory and ethical considerations based on the use case of Dr. Smythe and Ms. Johnston’s interaction:
Latent Risks
Regulatory Compliance
- Data Protection: The CardiacAssistAI tool must comply with HIPAA regulations, ensuring that patient data used during training and consultations is securely stored and processed. Data anonymization should be applied where possible to protect patient privacy.
- Non-Discrimination: The tool must be validated to ensure it does not introduce bias into diagnostic suggestions. This includes compliance with the U.S. Civil Rights Act to prevent discrimination based on race, gender, or socioeconomic status in diagnostic outcomes.
Ethical Considerations
- Fairness and Equity: The CardiacAssistAI tool should be rigorously tested across diverse patient demographics to ensure its diagnostic suggestions are equitable and do not disproportionately affect certain groups.
- Transparency: Dr. Smythe’s ability to explain the AI’s suggestions to Ms. Johnston reflects the ethical need for transparency. The tool’s decision-making process should be clear and understandable to both the provider and the patient, facilitating informed discussions.
Intrinsic Risks
Regulatory Compliance
- Medical Device Regulations: If the CardiacAssistAI tool is classified as a medical device, it must comply with FDA regulations for Software as a Medical Device (SaMD). This includes pre-market approval, clinical validation, and continuous post-market surveillance.
- Clinical Validation: The AI model must be clinically validated to ensure its accuracy and reliability in providing diagnostic suggestions. This could involve peer-reviewed studies or real-world evidence demonstrating its effectiveness in diverse clinical settings.
Ethical Considerations
- Patient Autonomy: In the example, Dr. Smythe uses the CardiacAssistAI tool to supplement their expertise, not replace it. This respect for patient autonomy ensures that AI supports clinical judgment rather than dictating it.
- Informed Consent: Before or at the beginning of her appointment, Ms. Johnston should be informed about the use of CardiacAssistAI in her diagnosis and give explicit consent, particularly if the AI’s suggestions significantly influence her treatment plan.
Accidental Risks
Regulatory Compliance
- Data Breach Notification: If the CardiacAssistAI tool or its training models inadvertently expose patient data, the healthcare system must comply with data breach notification laws and promptly inform affected patients and regulatory bodies.
- Liability and Accountability: Clear guidelines must be established to determine accountability if the CardiacAssistAI tool provides a faulty or harmful diagnosis. This includes ensuring that healthcare providers, AI developers, and the healthcare system understand their legal responsibilities along each workflow stage.
Ethical Considerations
- Privacy and Confidentiality: The CardiacAssistAI tool must have robust privacy protections to prevent accidental data exposure. This aligns with the ethical obligation to maintain patient confidentiality.
- Accountability: The healthcare system should establish clear lines of accountability for the CardiacAssistAI tool’s suggestions, ensuring that Dr. Smythe retains ultimate responsibility for the diagnostic decisions made during the consultation prior to any handoff to a specialist.
Malicious Risks
Regulatory Compliance
- Cybersecurity Standards: The CardiacAssistAI tool must adhere to stringent cybersecurity standards, such as those outlined by NIST, to protect against hacking or malicious data manipulation that could alter diagnostic outputs.
- Anti-Fraud Measures: The healthcare system must implement anti-fraud measures to detect and prevent any malicious use of the AI tool, such as tampering with diagnostic algorithms to produce incorrect or harmful suggestions.
- State/Local and Emerging Standards: Proposed state legislation, such as California’s SB 1047, would require audits, reporting, documentation, and retention of metadata logs and other records of risk identification and mitigation efforts, among other compliance obligations. Other states have similar regulatory schemes or are in the process of developing them.
Ethical Considerations
- Harm Prevention: The CardiacAssistAI tool should be designed with safeguards against malicious use, prioritizing patient safety. This includes regular ethical red teaming and testing to identify vulnerabilities.
- Security Transparency: Patients like Ms. Johnston should be informed about the security measures that protect the CardiacAssistAI tool from tampering, reinforcing trust in the system’s integrity.
Personal Risks
Regulatory Compliance
- Patient Rights: The CardiacAssistAI tool must be designed to respect patient rights, ensuring its use aligns with existing patient rights regulations. This includes providing options for patients to opt out of AI-driven diagnoses if they prefer more traditional methods.
- Mental Health Considerations: The CardiacAssistAI tool’s interface and interaction style should comply with mental health regulations, avoiding designs that could increase patient anxiety or discomfort during consultations.
Ethical Considerations
- Human-Centered Design: The AI tool should enhance the patient-provider interaction, not replace it. Dr. Smythe’s use of the CardiacAssistAI tool to facilitate a transparent and informative discussion with Ms. Johnston exemplifies this ethical obligation.
- Ethical Use of AI: The healthcare system should ensure that AI is used ethically, enhancing care without creating dependencies or reducing the quality of human interaction in patient care.
Conclusion
As generative AI continues to expand and evolve within healthcare, the risks and potential errors associated with its use must be carefully managed. By adopting a “Governance First” approach, healthcare organizations can navigate the complex spectrum of AI errors and ensure that AI systems are safe, reliable, and effective.
Integrating governance into every stage of AI development is not just a best practice — it is mandatory for the future of healthcare.
Call to Action
Healthcare leaders and AI developers must prioritize AI governance strategies and understand that this goes beyond customary data governance practices. With a “Governance First” framework for AI in healthcare, we can better anticipate and ameliorate the spectrum of AI concerns to build systems that truly enhance patient care while minimizing risks.
Healthcare leaders should champion a patient-centric, governance-first approach and emphasize how this enhances AI’s effectiveness and acceptability in healthcare.
As AI becomes more integrated into healthcare, providers, developers, and regulatory bodies must collaborate to embed robust governance strategies into every stage of AI development and deployment. This approach ensures that AI tools enhance, rather than compromise, patient care.
One proposal to help healthcare organizations and developers prioritize AI use cases would be for a rating or scoring system that provides a common rubric across various AI governance factors. Most enterprises find it challenging to develop and simplify such a rating system while making it interoperable and robust. One helpful resource in this area, the Coalition for Healthcare AI (CHAI), has working groups that have developed such recommendations. Their AI Governance Assurance checkpoints, which integrate clinical risk level assessment and population impact evaluation in five stages of AI development, testing, and deployment, are a trailblazing effort.
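To illustrate the general shape of such a rubric (and emphatically not CHAI’s actual checkpoints), here is a toy weighted-scoring sketch. The factors, weights, and ratings are hypothetical:

```python
# Hypothetical governance-readiness factors and weights; illustrative only,
# not CHAI's actual assurance checkpoints or recommended weighting.
WEIGHTS = {
    "clinical_risk_controls": 0.30,    # safeguards matched to severity of harm
    "population_impact_review": 0.25,  # equity evaluation across affected groups
    "data_governance": 0.20,           # consent, privacy, and provenance controls
    "transparency": 0.15,              # explainability to clinicians and patients
    "post_deployment_monitoring": 0.10,
}

def governance_score(ratings: dict) -> float:
    """Combine 0-5 maturity ratings into a weighted readiness score."""
    assert set(ratings) == set(WEIGHTS), "every factor must be rated"
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

# Hypothetical self-assessment for the CardiacAssistAI use case.
ratings = {
    "clinical_risk_controls": 4,
    "population_impact_review": 3,
    "data_governance": 5,
    "transparency": 4,
    "post_deployment_monitoring": 2,
}
print(f"readiness score: {governance_score(ratings):.2f} / 5.00")
```

A shared rubric along these lines would let an organization compare competing AI use cases on a common scale before committing development resources.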
Healthcare and business leaders, AI developers, data scientists, and sociotechnical experts must collaborate to prioritize governance, promote explainable AI, and mitigate the spectrum of AI concerns to build systems that enhance patient care while minimizing risks. By implementing an integrated AI governance framework, involving patients in the design process, and focusing on an ethical foundation, we can harness AI’s full potential in healthcare while safeguarding patient trust and safety and ensuring AI tools are equitable and trustworthy.
About the Authors
Matt Konwiser has been a professional in information technology, telecommunications, cybersecurity, and AI for over two decades. He is known for his thought leadership in IT innovation and business-technology alignment. He is IBM’s Northeast US CTO and Cross-Brand ATL Leader. Matt on LinkedIn
Brian has worked in healthcare research, public health policy and programs, community engagement, health communications, and health tech commercial strategy for over two decades. He is the Chief AI Ethics Officer and Founder of Health-Vision.AI, LLC. Brian on LinkedIn