
Beyond the Hype: Navigating the Ethical Challenges of AI in Public Health

Shantelle Smith
Palindrome Data
May 2, 2023


As the use and capabilities of artificial intelligence (AI) continue to expand across various sectors, charitable causes and resource-constrained industries are harnessing its power to achieve their goals. Public healthcare is one such sector. The United Nations’ Sustainable Development Goal 3, which seeks to “ensure healthy lives and promote well-being for all at all ages,” has put governments and non-profit organisations under pressure to improve healthcare outcomes. In low- and middle-income countries, this has necessitated innovative approaches to tackling weaknesses in the healthcare system, and technology-based solutions such as AI offer promising possibilities.

Despite the numerous benefits of AI, its rapid growth has resulted in its inadvertent unethical use, where hasty implementation of solutions takes precedence over ethical considerations. To rebalance the scales, it is crucial to adopt responsible AI practices in public health to ensure the benefits are maximised while minimising any negative impacts.

Responsibly Using AI in Public Health

A sequence of five circles annotated with the five facets of responsible AI: Human Impact, Environmental Impact, AI for Good, Human-Machine Teaming, and Governance

There are various facets of responsibly using AI, including:

  1. Ensuring no negative impact on human beings
    – Ensuring ethical use of their data
    – Responsibly applying a solution that is fair, robust, explainable, and transparent
  2. Ensuring a negligible (or positive) impact on the environment
    – Acknowledging and minimising the carbon footprint of computing, business processes, and product design and implementation
    – Educating employees, for example through marketing or product design
  3. Using AI for good
    – Applying knowledge to climate-based problems
    – Using AI to improve the lives of humans and other living beings
  4. Enabling human-machine teaming (i.e., encouraging collaboration and trust between users, engineers, and any AI-based solution), where possible and preferable
  5. Governance (a mechanism for enforcing responsible practices)

In public healthcare, data collection, storage, processing, and model training share common characteristics, and AI practices have been adapted in each of these areas to reflect a dedication to responsible AI. Palindrome Data is a prime example of a team dedicated to the ethical use of AI in its endeavour to create point-of-care AI-based solutions that triage patients based on risk.

Diagram from Sikstrom et al. (2022) showing their three pillars of fairness: Impartiality, Transparency, and Inclusion
Sikstrom et al. (2022)

Six Responsible AI Practices

Palindrome Data strives to encapsulate responsibility in the data science process into six principles:

  1. Patient Privacy
    Healthcare data is personal, and many people understandably view the use of such sensitive information as invasive. To keep its process ethical, Palindrome does not store any personally identifiable information (PII), such as patients’ names, clinic folder numbers, or government-issued ID numbers. Where AI is used in client-facing products, PII is never exported from the product’s database and is stored only for use at the point of care. Regardless, patients should retain the right to choose and be given the option to opt out of any use of their PII. Patients should also be able to decide whether an AI-based solution may be used in their consultation and whether it may inform decisions about their treatment or therapy plan. As Palindrome scales up its solution, incorporating this concept into the product’s workflow will be essential. This presents challenges in the existing data pipeline, however, as Palindrome will likely depend on the non-profit organisations (NPOs) implementing programmes in primary care facilities to collect this consent. For now, this is achievable through ethical data use practices agreed between NPOs and Palindrome, but Palindrome’s long-term plan is to incorporate a consent management platform into its solution.
  2. Data Sharing and Storage
    Even with a patient’s consent, data must still be shared and stored responsibly to maintain its confidentiality and integrity while still allowing for collaboration. Data-sharing procedures in the public healthcare industry are not always secure; data is still regularly shared via email in a spreadsheet. Palindrome uses secure file shares through Azure Storage to ensure that sensitive data obtained from clients is transferred without risk of unauthorised access or data loss. The Azure File Share URL and shared access signature are sent securely to clients with LastPass. Additionally, Palindrome has a data security plan which ensures data is protected, used only for its proposed purpose, and kept only for as long as it is needed.
  3. Ethics of the Solution
    All stakeholders need to consider critically whether it is ethical to use AI at all for a given public health issue, or whether an aspect, such as the data source or implementation method, could be adapted to aid the ethical use of AI in the proposed context. In an implementation, the decision-making that comes out of a product may be more ethical with a human-machine teaming design than with a prescriptive design in which the human user is pushed to accept the consequences of the product’s output. Palindrome has incorporated this principle into its product design through user testing with nurses and other healthcare workers, NPO programme managers, and academic collaborators.
  4. Fairness
    Negative bias in data collection, processing, and modelling can create an unfair AI-based solution. One common source of bias in Palindrome’s use case is imbalanced data, where the training data is skewed towards one group or outcome, leading to unequal representation and inaccurate predictions for underrepresented groups (see more on the harms of allocation and representation).
    In data collection, this may be unavoidable given the system set up to collect the data, but it can be addressed in data processing. For example, Palindrome data scientists use resampling or reweighting techniques to better represent minority groups, or apply constraints so that models do not make decisions based on protected features, such as race or sexual orientation.
    Measuring the fairness of the dataset and the data science process is crucial, and possible through metrics such as demographic parity (the rate of positive outcomes across different demographic groups), the disparate impact measure (the ratio of positive outcomes in one group compared to another), and the difference in mean prediction across subgroups. These can be measured and optimised using packages like aif360. Palindrome still needs to build these tests into its data science pipelines to ensure fairness is adhered to.
  5. Explainability
    AI models should be kept as simple as possible so that the model’s rationale and decision-making can be explained to the end user. For example, Palindrome favours simpler models, from logistic regression to various ensemble models, over the peak performance of deep neural networks, which are used only when no simpler approach solves the problem. For point-of-care products, Palindrome tries the simplest models first and progressively adds complexity only if performance does not meet the required target.
    However, it is not enough to develop ‘simple’ models; thorough documentation of the development process is also essential. Documenting the ethical approach to the AI solution has recently been made a seamless part of the Palindrome process through a simple deon-generated checklist. Documentation ensures that the technical process is transparent and can be audited, leading to greater overall explainability of the end solution.
  6. Robustness
    Palindrome creates robust solutions and highlights the weaknesses of its systems. A robust system should not be substantially affected by an ‘unusual’ case. Because data inherently contains some bias, even if only because it is sourced from a specific geographical location, the constraints of the solution are clearly communicated.
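To make the Patient Privacy principle concrete, here is a minimal de-identification sketch in Python. The field names and the salted-hash scheme are illustrative assumptions for this example, not Palindrome’s actual pipeline: PII columns are dropped outright, and the record is keyed by a keyed hash so rows can still be linked across extracts without storing the government ID itself.

```python
# Illustrative de-identification sketch; field names and the salted-hash
# scheme are assumptions for this example, not Palindrome's actual pipeline.
import hashlib
import hmac

PII_FIELDS = {"name", "folder_number", "gov_id"}  # hypothetical field names

def pseudonymous_key(gov_id: str, salt: bytes) -> str:
    """Keyed hash: the same ID and salt always give the same key,
    but the ID itself cannot be recovered from the stored value."""
    return hmac.new(salt, gov_id.encode(), hashlib.sha256).hexdigest()

def deidentify(record: dict, salt: bytes) -> dict:
    """Drop PII fields and replace the government ID with a pseudonymous
    key so rows can still be linked across data extracts."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    clean["patient_key"] = pseudonymous_key(record["gov_id"], salt)
    return clean
```

The salt would live in a secrets store, never alongside the data; without it, the keyed hash cannot simply be brute-forced from known ID formats.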
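The reweighting mentioned under Fairness can be sketched in a few lines. This is an illustrative, dependency-free version rather than Palindrome’s actual code: each record receives an inverse-frequency weight so that every group contributes equally in aggregate, which counteracts imbalanced training data.

```python
# Minimal inverse-frequency reweighting sketch (illustrative only):
# rarer groups get proportionally larger sample weights so a model
# trained with these weights cannot simply ignore them.
from collections import Counter

def inverse_frequency_weights(groups):
    """Return one weight per record; each group's weights sum to n / k,
    so all groups contribute equally in aggregate."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]
```

Most modelling libraries accept such weights directly through a sample-weight parameter at fit time.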
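The fairness metrics named above are provided by packages like aif360 (as statistical parity difference and disparate impact), but hand-rolled versions make clear what they measure. This dependency-free sketch is illustrative only:

```python
# Hand-rolled fairness metrics for binary predictions (illustrative;
# aif360 provides equivalents such as statistical_parity_difference
# and disparate_impact on a BinaryLabelDatasetMetric).

def positive_rate(preds, groups, group):
    """Fraction of positive predictions within one demographic group."""
    sel = [p for p, g in zip(preds, groups) if g == group]
    return sum(sel) / len(sel)

def demographic_parity_difference(preds, groups, a, b):
    """Difference in positive-outcome rates between groups a and b;
    0 means parity."""
    return positive_rate(preds, groups, a) - positive_rate(preds, groups, b)

def disparate_impact(preds, groups, a, b):
    """Ratio of positive-outcome rates; values below 0.8 trigger the
    common 'four-fifths' flag."""
    return positive_rate(preds, groups, a) / positive_rate(preds, groups, b)
```

Running these per protected attribute after each training run is a cheap way to start building the fairness tests into a pipeline.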
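The “simplest model first” workflow under Explainability can be sketched as a selection loop. The toy classifiers below (a majority-class baseline and a one-feature threshold rule) are stand-ins for real candidates such as logistic regression or ensembles; the point is the ladder, which stops at the first model that meets the validation target.

```python
# Sketch of a "simplest model first" selection loop (assumed workflow,
# not Palindrome's actual code). Candidates are tried in order of
# increasing complexity; the first one to meet the target wins.

def majority_baseline(train_y):
    """Simplest candidate: always predict the most common class."""
    pred = max(set(train_y), key=train_y.count)
    return lambda x: pred

def one_rule(train_X, train_y):
    """Slightly more complex: a single threshold on feature 0."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted({x[0] for x in train_X}):
        acc = sum((x[0] >= t) == y for x, y in zip(train_X, train_y)) / len(train_y)
        if acc > best_acc:
            best_t, best_acc = t, acc
    t = best_t
    return lambda x: 1 if x[0] >= t else 0

def accuracy(model, X, y):
    return sum(model(x) == yi for x, yi in zip(X, y)) / len(y)

def select_model(train_X, train_y, val_X, val_y, target=0.8):
    """Walk the complexity ladder; stop at the first adequate model."""
    ladder = [
        ("majority baseline", lambda: majority_baseline(train_y)),
        ("one-rule stump", lambda: one_rule(train_X, train_y)),
    ]
    for name, build in ladder:
        score = accuracy(build(), val_X, val_y)
        if score >= target:
            return name, score
    return name, score  # fall back to the most complex model tried
```

In practice, the ladder would also weigh explainability alongside the score, so a marginal accuracy gain never justifies an opaque model by default.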

AI offers tremendous potential for improving public health outcomes, especially in low-resource countries, but responsible AI practices must be implemented to ensure the benefits outweigh any negative impacts. Palindrome Data’s six principles provide a framework for implementing responsible AI in public healthcare. By following these principles, we can ensure that AI is used to improve human lives while minimising any negative impact on society and the environment. Filling in the gaps and fully complying with all of the principles remains a work in progress, but it must be prioritised in the journey to implement AI-based solutions in public health. Watch this space as Palindrome strives not only to make technology-driven improvements to healthcare programmes but also to keep responsible AI practices front and centre in its tech startup.

References

Sikstrom, L., Maslej, M.M., Hui, K., Findlay, Z., Buchman, D.Z. and Hill, S.L., 2022. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health & Care Informatics, 29(1). doi: 10.1136/bmjhci-2021-100459
