Responsible Use of AI in Healthcare — Strategies for Balancing Opportunities and Risk in a Speculative Environment
The integration of AI/ML in the healthcare, pharma, and life sciences industries introduces a host of critical issues that demand careful consideration.
Social and economic forces, coupled with the competitive landscape of healthcare institutions, are driving immense pressure for organizations to swiftly adopt AI-driven solutions while demonstrating valuable use cases.
The use of AI in healthcare introduces significant opportunities but also profound risks, including data privacy and security, the potential for algorithmic bias, and other ethical and social concerns. It is crucial to ensure that AI solutions are developed and deployed responsibly, with a focus on mitigating risks and ensuring that their benefits are delivered safely and equitably.
Let’s delve into each of these considerations in detail, drawing insights from some of the latest research.
Opportunities Abound
New opportunities for AI in healthcare, pharma, and life sciences are being developed and introduced into the marketplace every day. No-code AI services are the most recent to emerge and have rapidly been adopted for a variety of uses, bridging the gap between technical and business expertise in developing AI use cases.
Both industry and advocates for responsible AI need to better understand and manage the introduction of these AI-based tools and solutions. To do so, it is important to evaluate the various resource investments needed for AI implementation, conduct an AI impact assessment, and identify the economic opportunities presented by these technological innovations.
Evaluation of AI Investments:
A growing demand for evidence-based policy means we need evaluation designs that can effectively measure the impact of investments in AI and digital tools. Evaluating AI investments in the healthcare sector involves assessing various factors to ensure that the investments align with strategic objectives, deliver value, and adhere to ethical and regulatory standards.
High-quality data is essential to facilitate policy for the value-based use and reimbursement of digital tools, including AI. Evaluations should measure the availability of quality data for AI innovations, perhaps using an indexed approach based upon the scale of the AI deployment and/or interoperability, and correlate these indices with health system care indicators to determine the value-added outcomes generated by AI solutions.
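As a minimal sketch of the indexed approach described above, the snippet below combines a few quality dimensions into a single data-readiness index. The dimensions, weights, and scoring scale are illustrative assumptions, not an established standard; a real evaluation would calibrate them against observed health-system care indicators.

```python
# Hypothetical sketch: scoring data readiness for an AI deployment.
# Dimension names and weights are illustrative assumptions, not a standard.

def data_readiness_index(completeness: float, interoperability: float,
                         deployment_scale: float) -> float:
    """Combine quality dimensions (each scored 0-1) into a single index.

    Weights are illustrative; in practice they would be tuned by
    correlating the index with health-system care indicators.
    """
    weights = {"completeness": 0.4, "interoperability": 0.4, "scale": 0.2}
    index = (weights["completeness"] * completeness
             + weights["interoperability"] * interoperability
             + weights["scale"] * deployment_scale)
    return round(index, 3)

# Example: strong interoperability, partial completeness, mid-scale rollout
print(data_readiness_index(0.7, 0.9, 0.5))  # 0.74
```

A weighted index like this is easy to report alongside care indicators, which is what makes the correlation analysis described above tractable.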
A crucial aspect of measuring the value of AI investments in healthcare is assessing the impact on patient outcomes. Examples include evaluating how AI solutions contribute to better patient care through improved resource allocation and streamlined processes, enhanced treatment outcomes, reduced medical errors, and overall advancements in healthcare delivery. Patient-reported outcomes include AI solutions that help personalize or tailor care to patient needs, improve diagnostic accuracy and timeliness, and increase patient satisfaction with care and services.
The regulatory environment increasingly plays a role in identifying investments needed and in evaluating the relative effectiveness of investment strategies. For example, the Food and Drug Administration (FDA) regulatory requirements and guidance should be part of the assessment process for AI-based software as a medical device (SaMD) in healthcare delivery.
Impact of AI Solutions on Healthcare and Patient Engagement:
AI has the potential to revolutionize patient care, improve outcomes, and address inequities within healthcare systems. From drug discovery to combatting medication errors, and optimizing efficiency of service provision, AI-driven solutions offer significant transformative potential.
If AI is implemented effectively, it has the potential to save healthcare providers' time, alleviate healthcare employee fatigue, enhance communication among healthcare provider teams, and improve the efficiency of healthcare delivery.
Incremental improvements that allow healthcare providers (HCPs) to identify the most important triage decisions that trigger an AI-detected alert, for instance, are "low-hanging fruit" that may help healthcare institutions embrace this change more immediately. AI alerts to HCPs could also allow more timely and better communication between HCP teams in coordinating care, and between the care team and the patient.
AI may be used to identify and trigger triage decisions which allow more dynamic and patient-centered thresholds for provider interventions and improve the efficiency of healthcare delivery. AI may also be used to make outreach to patients for follow-up care and tertiary care services more efficient and better documented.
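To make the idea of a dynamic, patient-centered alert threshold concrete, here is a minimal sketch. The risk score, baseline threshold, and adjustment rule are invented for illustration; a real system would derive them from validated clinical models under governance review.

```python
# Illustrative sketch of a patient-centered alert threshold.
# The score, threshold, and adjustment values are assumptions for demonstration.

def should_alert(risk_score: float, baseline_threshold: float = 0.8,
                 high_risk_history: bool = False) -> bool:
    """Trigger an HCP alert when a model risk score crosses a threshold
    adjusted for the individual patient's context."""
    # Lower the threshold for patients with a known high-risk history,
    # so the care team is alerted earlier for those patients.
    threshold = baseline_threshold - (0.2 if high_risk_history else 0.0)
    return risk_score >= threshold

print(should_alert(0.7))                          # False: 0.7 is below 0.8
print(should_alert(0.7, high_risk_history=True))  # True: threshold drops to 0.6
```

The point of the sketch is the shape of the logic: the same model output can trigger different interventions depending on documented patient context, rather than a single one-size-fits-all cutoff.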
AI has the potential to provide data-driven solutions that can go a long way toward addressing inequities in healthcare systems.
Economic Opportunities:
The integration of AI in healthcare presents substantial economic opportunities, with the projected growth of the AI market expected to be disproportionately larger in the health sector, compared to other industries. The vast amount of data involved in health and the breadth of opportunities are immense.
The optimization of health systems through AI can lead to more efficient resource allocation, increased preparedness, and resilience to public health threats. This can also help with the prediction and management of healthcare demand, which has institutional and regional population-based health implications.
One immediate application comes to mind: AI can be used to develop better predictive models for lifetime patient value.
By analyzing user interactions, behavioral patterns, and healthcare data, AI can help predict and preemptively address various aspects of patient care, including lifetime patient value.
AI-driven predictive analytics can provide valuable insights into patient behavior, preferences, and health conditions, allowing healthcare organizations to refine their approaches, build trust, and foster meaningful connections with patients. Additionally, AI can be used to identify and address barriers to accessing healthcare services, specialty services, and mental health support, ultimately addressing inequities within healthcare systems.
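As a toy illustration of a lifetime-patient-value score, the sketch below combines a few engagement signals linearly. The features and coefficients are invented for demonstration only; a production model would be trained on historical engagement and outcomes data, under the governance and bias-mitigation processes discussed later in this article.

```python
# Toy sketch of a lifetime-patient-value score.
# Feature names and coefficients are illustrative assumptions.

def predict_lifetime_value(annual_visits: int, portal_logins_per_month: float,
                           years_in_system: int) -> float:
    """Linear score combining engagement signals (illustrative only)."""
    coefficients = {
        "annual_visits": 120.0,   # value signal per visit (assumed)
        "portal_logins": 15.0,    # digital engagement signal (assumed)
        "tenure_years": 200.0,    # retention signal (assumed)
    }
    return (coefficients["annual_visits"] * annual_visits
            + coefficients["portal_logins"] * portal_logins_per_month * 12
            + coefficients["tenure_years"] * years_in_system)

print(predict_lifetime_value(4, 2.0, 5))  # 480 + 360 + 1000 = 1840.0
```

Even a simple transparent model like this can be useful as a baseline: its coefficients are auditable, which matters when predictions influence how outreach resources are allocated across patient groups.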
But You Need to Effectively Identify and Manage Risks
Researchers and practitioners point to the need for a guiding framework for AI/ML that focuses on health impact, fairness, ethics, and equity principles. AI in the healthcare sector, used with the appropriate guardrails, can equitably benefit diverse populations, including groups from underserved and under-represented communities.
One critical piece of infrastructure for ensuring the responsible use of AI is an integrated governance approach, which can be operationalized through a data governance committee.
AI Models and Data Governance:
A well-designed and resourced governance model can play a pivotal role in promoting responsible data management, ethical AI use, and the protection of patient privacy and security of information. Data governance is essential for ensuring the quality, security, and interoperability of healthcare data for AI solutions.
An integrated governance structure should be inclusive of all models and data used in AI/ML. This includes the need for tools and processes that allow for enterprise-wide data definitions and structure, guardrails for specific data source inclusion, testing and validation, and transparency of operational checkpoints.
A data governance committee can help organizations mitigate risks while managing costs, scope, and speed of innovation.
Institutions can strengthen their approach to digital security by including partnerships across industries and regulatory jurisdictions, enabling them to better understand threats and develop coordinated approaches to prevent and respond to cyber threats.
A governance committee should be multi-disciplinary and needs to include diverse stakeholders, perhaps rotating members to allow dissemination of knowledge and spread of innovation.
This group should include healthcare professionals, government agency representatives, patients and representatives from patient advocacy groups as well as data scientists, and specialists in social and behavioral sciences.
These teams should be involved in the development, approval, and deployment stages of AI-enabled services. Stakeholder engagement throughout the various stages of AI model assessment, testing, deployment, and evaluation is crucial for ensuring alignment with healthcare needs and ethical considerations.
Democratization of Data and Transparency:
Rules for the democratization of data facilitate transparency. These include both how data is shared internally to construct and implement AI models and how data is shared externally to provide access to and assessment of consumer-facing AI tools. This level of transparency is also consistent with the "notice and explanation" provision in the Blueprint for an AI Bill of Rights.
Transparency is crucial for building trust and ensuring the ethical and effective application of AI in healthcare. It involves clear communication and thorough documentation. It should also include public engagement about the decisions and trade-offs made in the development and deployment of AI applications. Transparency involves the incorporation of representative data, and the inclusion of patient and consumer input (perhaps using crowdsourcing techniques), in the development and validation process.
Democratizing data with AI and ML involves ensuring that data is accessible, usable, and beneficial to a wide range of stakeholders, including individuals, organizations, and communities.
Data democratization aims to empower users to access, analyze, and derive insights from data, thereby promoting transparency, inclusivity, and informed decision-making.
Developing and utilizing metrics to monitor, characterize, and track data inputs, as well as assessing the effectiveness of existing metrics and controls, is crucial for ensuring the performance and quality of AI systems. Those processes should be communicated through clear documentation that can be shared externally to provide transparency. This also requires public engagement about the decisions and trade-offs made in the development and deployment of AI applications throughout a healthcare system.
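A minimal sketch of what such data-input metrics might look like is below. The record fields, required-field list, and clinical ranges are assumptions for illustration; actual monitoring would reflect the data definitions set by the governance committee.

```python
# Sketch of simple data-input monitoring metrics: completeness and
# validity rates over a batch of incoming records. Field names and
# value ranges are illustrative assumptions.

def monitor_inputs(records: list, required: list, ranges: dict) -> dict:
    """Return completeness and validity rates for a batch of records."""
    total = len(records)
    # Completeness: share of records with every required field present.
    complete = sum(all(r.get(f) is not None for f in required) for r in records)
    # Validity: share of records whose present values fall within range.
    in_range = sum(
        all(lo <= r[f] <= hi for f, (lo, hi) in ranges.items()
            if r.get(f) is not None)
        for r in records
    )
    return {"completeness": complete / total, "validity": in_range / total}

batch = [
    {"age": 54, "systolic_bp": 128},
    {"age": 61, "systolic_bp": 400},    # out-of-range value
    {"age": None, "systolic_bp": 119},  # missing required field
]
print(monitor_inputs(batch, ["age", "systolic_bp"], {"systolic_bp": (60, 250)}))
```

Tracking these rates over time, and documenting the thresholds at which they trigger review, is one concrete way to make the operational checkpoints described above transparent.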
For some organizations, this may also involve providing training and educational resources to enhance data literacy and analytical skills, enabling individuals to effectively utilize AI tools for analysis and interpretation. For patients, an example could be AI tools that allow them to visualize the impact of different diagnostic choices on their treatment paths.
Bias Mitigation:
Bias mitigation is a critical consideration for effective AI governance. It involves establishing processes and measures for identifying potential bias in data inputs or outcomes. As stated above, having a multi-disciplinary team overseeing governance is one part of this solution.
In addition to a multi-disciplinary team focused on diversity of thought and professional experiences, it is important to ensure diversity of ethnicity, gender, and sexual orientation in data governance and the AI impact assessment. Another consideration should be ensuring that AI tools are designed to be accessible to people with various disabilities.
Transparency to mitigate bias includes identifying the guardrails that are used in data governance. For example, what data sets and algorithms are used to test and train AI models? How frequently are they updated? For internal transparency, consistent, reliable, and accurate labeling of datasets for testing is also crucial for ensuring data usability and quality.
A data governance committee should oversee the training and validation of AI algorithms to ensure that training data are representative, avoid systemic or stratification imbalances, and incorporate non-medical data such as social determinants of health. Addressing biases and ensuring the representativeness of training datasets are critical for model training and validation.
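One simple, auditable representativeness check a governance committee could run is sketched below: compare the demographic composition of a training set against the population the tool will serve and flag groups whose share deviates beyond a tolerance. The group names, counts, and tolerance are assumptions for illustration.

```python
# Illustrative representativeness check: flag groups whose training-data
# share deviates from the served population's share. Groups, counts, and
# the tolerance are assumptions for demonstration.

def representation_gaps(train_counts: dict, population_share: dict,
                        tolerance: float = 0.05) -> dict:
    """Return groups whose training share deviates from the population
    share by more than the tolerance (positive = over-represented)."""
    total = sum(train_counts.values())
    gaps = {}
    for group, share in population_share.items():
        train_share = train_counts.get(group, 0) / total
        gap = train_share - share
        if abs(gap) > tolerance:
            gaps[group] = round(gap, 3)
    return gaps

train = {"group_a": 700, "group_b": 250, "group_c": 50}
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
print(representation_gaps(train, population))
# {'group_a': 0.1, 'group_c': -0.1}: group_a over-, group_c under-represented
```

A check like this does not establish fairness by itself, but it gives the committee a documented, repeatable signal that a dataset needs augmentation or reweighting before model training proceeds.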
Proactively conducting equity assessments as part of the AI model and system design is important. This should include the use of data that is representative of the populations to be served by the AI tools.
Integrating the governance model into healthcare systems' workflows is important for ensuring that AI applications generate appropriate data, that bias has been sufficiently mitigated, that transparency exists in decision-making processes, and that AI solutions provide utility and value in real-world settings.
The Tradeoff…Is Also Evolving Rapidly
The tradeoff lies in balancing the potential benefits of rapid AI implementation with the need to mitigate risks, ensure regulatory compliance, and maintain stakeholder trust and the safety of patient data.
Rapid implementation of AI in healthcare can lead to early access to innovative diagnostic tools, and improved treatments, allowing healthcare providers to stay at the forefront of technological innovation. This has the potential to lead to improved patient outcomes and enhanced healthcare delivery. Moreover, early adopters of AI in healthcare may gain a competitive advantage relative to others.
Institutions hoping to leverage the promise of AI need to carefully evaluate the options first. A thorough readiness assessment allows for the identification and mitigation of potential risks, including patient safety concerns, ethical implications, and regulatory concerns.
Patient safety should be the top priority when considering the pace of AI adoption.
Assessing AI options and setting up the appropriate governance infrastructure may delay an organization's realization of immediate benefits, but it will go far toward mitigating risks and ensuring responsible and ethical AI adoption in healthcare.
Luckily, there are many government, research, and industry voices helping to lead on these issues…
Some important resources include:
Anderson, B. and E. Sutherland (2024), Collective action for responsible AI in health, OECD Artificial Intelligence Papers, No. 10, OECD Publishing, Paris. https://doi.org/10.1787/f2050177-en
CHAI. Blueprint for trustworthy AI implementation guidance and assurance for healthcare. Version 1.0. APRIL 04, 2023. Coalition for Health AI, The MITRE Corporation and Duke University. https://www.coalitionforhealthai.org/papers/blueprint-for-trustworthy-ai_V1.0.pdf
Green, Adam, The great acceleration: CIO perspectives on generative AI. MIT Technology Review Insights and Databricks. July 2023. https://www.technologyreview.com/2023/07/18/1076423/
Green, Brian M. Leveraging AI in online health communities: Considerations for enhancing patient engagement and digital marketing strategy. LinkedIn Article. February 7, 2024. https://www.linkedin.com/pulse/leveraging-ai-online-health-communities-enhancing-patient-green-n79ee/
Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020 Mar 1;27(3):491–497. https://doi.org/10.1093/jamia/ocz192
Ramezani, M., Takian, A., Bakhtiari, A. et al. Research agenda for using artificial intelligence in health governance: interpretive scoping review and framework. BioData Mining 16, 31 (2023). https://doi.org/10.1186/s13040-023-00346-w
Sundberg, L. and Holmström, J. Democratizing artificial intelligence: How no-code AI can leverage machine learning operations. Business Horizons, Volume 66, Issue 6, 2023: 777–788. https://doi.org/10.1016/j.bushor.2023.04.003
WHO. Regulatory considerations on artificial intelligence for health. World Health Organization; 2023. Licence: CC BY-NC-SA 3.0 IGO. https://www.who.int/publications/i/item/9789240078871
WHOSTP. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. White House Office of Science and Technology Policy. October 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights/