Is Artificial Intelligence Good for Our Health?

Ashveena Gajeelee
Berkman Klein Center Collection
Apr 29, 2019

Maintaining agency in a human augmented health system


Earlier this year, the Health Tech Working Group (HTWG) at the Berkman Klein Center released a paper entitled "A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance." The paper examines the challenges of using artificial intelligence to improve healthcare, specifically around data governance and security.

The ultimate goal of creating technology for health and wellness is to enable patients to live healthier, more meaningful lives. However, the increasing use of mobile technology and AI tools has blurred the line between individuals' privacy rights and the extent to which businesses can mine their data. Our smartphones have become gateways to AI health-coaching apps and wearable devices that count our calorie intake and match it against the exercise required.

Global GDP is projected to be 14% higher by 2030 thanks to AI, a potential contribution of some $15.7 trillion. The economies set to benefit most are China (a 26% boost to GDP in 2030) and North America (a 14.5% boost), together worth $10.7 trillion and accounting for almost 70% of the global economic impact. Canada, Japan, Singapore, China, the UAE, Finland, Germany, Denmark, France, the UK, the EU Commission, South Korea, and India have all released national initiatives setting out their strategies to develop and optimize the use of AI. These figures alone demonstrate the potential of the data economy.

The paper brings together an interdisciplinary team's perspectives on health technologies and takes a holistic approach to exploring three areas of particular interest:

  • user-centered design — how we can encourage individuals to choose health coaching and treatment based on their preferences and their own, deeply personal understanding of what it means to lead a good life
  • how increasing access to these technologies and related health data will advance user understanding and interest in personal health status and promote intrinsically-motivated health behavior change
  • the ethical dimensions at play in developing effective AI-enabled behavior change tools (“digital health nudging”)

The paper raises issues around how to build in transparency and accountability to make sure that these systems work in the best interest of their users. It offers a rare transdisciplinary, systemic approach that is critical to sizing up the implications of using AI as a digital tool to improve healthcare.

Transdisciplinary approaches are necessary to:

- understand how a problem is embedded in societal practices and challenges;

- uncover new medical and therapeutic capabilities and identify the next frontier of mental and physical health research needs; and

- produce insights and guidance on the ethical issues raised by this technology, which may inform an ethical framework as well as legal and policy recommendations.

While reflecting on the use of AI in healthcare apps, the paper puts forward several questions about what defines a healthy life and how to design an ethical framework around that definition. AI-enabled technology has the potential to collect and provide a significant amount of data about users' practices and behaviors as they relate to their physical and mental health and well-being. The goal of AI-enabled health technology can hence be defined as helping individuals lead lives that are healthier and that match their values.

AI-enabled health apps promise significant benefits:

- the ability to collect accurate data about one's everyday health and behavior in real time;

- an AI mediator that continuously engages the user in an open dialogue to define his or her well-being goals;

- coaching that reflects the user's own values, which may be more effective in motivating the user to follow well-being advice; and

- the potential to preserve users' agency to make decisions that fit their overall goals for their individual well-being.

However, such targets may remain unattainable given the challenge of offering 'one size fits all' guidance for well-being.

While examining these challenges, the paper addresses two core elements of AI use in healthcare. Data is at the core of all AI applications and raises a wide range of issues around collection, standardization, ownership, and consent. Most conversations around healthcare data refer only to traditional sources such as electronic health records (EHRs); the HTWG also includes in its discussion well-being data from fitness devices and other applications that can be used to assess or affect health and well-being.

Well-being data such as sleep patterns, motion, and heart rate from wearables is not covered by the same privacy protections as medical records, yet it can be used to infer things about patients that, if stored in the patient's EHR, would be covered by the Health Insurance Portability and Accountability Act (HIPAA). The paper asks whether HIPAA's scope should be expanded to include these new data sets, or whether a separate regime should govern health-relevant data that HIPAA does not cover. It makes a case for standardized terminology, open interface protocols, and adequate metadata. In addition, user data may be divided more generally into three categories (non-sensitive, sensitive, and 'grey area'), with the aim of giving the user/patient the greatest possible agency over how their personal data is used. There is a great need for work on making data user agreements clear and comprehensible to participants, and on applying protections to sensitive data more broadly than just to traditional health records.
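
To make the three-tier idea concrete, here is a minimal Python sketch of how an app might tag data points as non-sensitive, sensitive, or 'grey area' and gate third-party sharing on per-tier consent. The tier assignments, type names, and consent structure are all invented for illustration; the paper does not prescribe an implementation.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sensitivity tiers mirroring the paper's proposed split:
# non-sensitive, sensitive, and 'grey area' data.
class Sensitivity(Enum):
    NON_SENSITIVE = "non-sensitive"
    GREY_AREA = "grey area"
    SENSITIVE = "sensitive"

@dataclass
class DataPoint:
    kind: str  # e.g. "step_count", "heart_rate", "sleep_pattern"
    sensitivity: Sensitivity

# Illustrative assignments only; a real scheme would be set by
# regulation and by the user, not hard-coded by the app.
DEFAULT_TIERS = {
    "step_count": Sensitivity.NON_SENSITIVE,
    "sleep_pattern": Sensitivity.GREY_AREA,  # supports health inferences
    "heart_rate": Sensitivity.SENSITIVE,
}

def may_share(point: DataPoint, consent: dict[Sensitivity, bool]) -> bool:
    """Gate third-party sharing on the user's per-tier consent."""
    return consent.get(point.sensitivity, False)

user_consent = {
    Sensitivity.NON_SENSITIVE: True,  # user allows sharing step counts
    Sensitivity.GREY_AREA: False,     # grey-area data private by default
    Sensitivity.SENSITIVE: False,
}

hr = DataPoint("heart_rate", DEFAULT_TIERS["heart_rate"])
print(may_share(hr, user_consent))  # False: sensitive data needs opt-in
```

The design choice worth noting is the default: anything not explicitly consented to is withheld, which is one way of giving the user agency by default rather than on request.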

Richard Thaler, a behavioral economist, and Cass Sunstein, a legal scholar, first coined the term "nudging" to describe a method of promoting one behavioral choice over another (or several others) while still permitting personal autonomy. Nudging "alters people's behavior in a predictable way without forbidding any options," and thus does not affect the actor's ability to choose but rather the direction, valence, and likelihood of a given choice.

Nudging is inextricably tied to issues of power and autonomy — and thus, ethics. Some questions that arise within the paper are: who defines and decides the “right” end goal, particularly when a “healthy” goal can take many forms? Who is allowed to make changes to the environment that encourage certain choices? The term “digital nudging” emerged only recently in engineering and computer systems literature, and is defined as the “use of user-interface design elements to guide people’s behavior in digital choice environments.” Digital nudges can be personalized and driven by AI assistants or can be programmed into applications so they are seen the same way by all users.

Nudging is, by design, not obvious to the person being nudged, and is therefore difficult or impossible to ignore or filter. In physical environments, individuals cannot opt out of nudging. Like advertising, nudging aims to change behaviors. Ethical nudging should respect the individual's autonomy and choices and support their goals. In healthcare, nudging could help people meet the health and wellness goals they have identified and agreed to. One key difference between nudging in the physical world and a digital nudge is that, at least theoretically, it is possible to opt in to or explicitly consent to a nudging system through personally owned devices like web browsers or phones. As our experience of the world is increasingly mediated through technology, the paper expresses concern about how digital nudging might distort individual experiences to achieve societal ends.

Furthermore, transparency in design can allow users to learn both about nudging in general and about the concrete nudges being applied to them. A phone or other device should let its user ask for information about the nudging taking place and give them control over the amount and methods of nudging.
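
As a thought experiment, that kind of transparency and control could look something like the following sketch, in which a device exposes a registry of active nudges that the user can inspect and switch off. The `Nudge` and `NudgeRegistry` names and fields are hypothetical, not drawn from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Nudge:
    name: str     # e.g. "evening walk reminder"
    method: str   # e.g. "push notification", "UI ordering"
    goal: str     # the user-agreed goal this nudge supports
    enabled: bool = True

@dataclass
class NudgeRegistry:
    """Hypothetical registry a device could expose so users can
    inspect and control the nudging applied to them."""
    nudges: list[Nudge] = field(default_factory=list)

    def describe(self) -> list[str]:
        # Transparency: answer "what nudging is happening to me?"
        return [f"{n.name} via {n.method} (goal: {n.goal})"
                for n in self.nudges if n.enabled]

    def opt_out(self, name: str) -> None:
        # Control: let the user switch off a specific nudge.
        for n in self.nudges:
            if n.name == name:
                n.enabled = False

registry = NudgeRegistry([
    Nudge("evening walk reminder", "push notification", "10k steps/day"),
    Nudge("stairs-first suggestion", "UI ordering", "daily activity"),
])
print(registry.describe())              # both nudges are disclosed
registry.opt_out("stairs-first suggestion")
print(registry.describe())              # only the consented nudge remains
```

The point of the sketch is that each nudge is tied to a goal the user has agreed to, so disclosure and opt-out are per nudge rather than all-or-nothing.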

The Health Tech Working Group has attempted to distill the essential issues and questions from a wide-ranging discussion among a group with very different research interests and hopes that this research agenda will be of interest to the broader community.

- How can we design an AI-enabled health coach that can engage the user in an open dialog about his/her short-term and long-term goals?

- Which ethical concerns should be in focus in the development and use of AI-enabled health technology and how can an ethical framework be constructed to guide policymakers and other stakeholders?

- How can the veracity and accuracy of personalized recommendations (made through AI-enabled health technology) be assessed meaningfully and presented to consumers?

- How can we optimize the positive and mitigate the negative macro effects (i.e., on public health) of AI-enabled health technology?

- What data governance ecosystem allows for privacy, legal clarity and innovation?

- How can innovation and offerings on the health intervention marketplace be governed at the national and international level?

- How do we design nudging systems that are transparent, protect individual agency, and help people achieve their own goals?

In the future, AI may participate in patient care as a member of the clinical team. Team communication and human interaction are complex — an art that is learned over the course of medical training and practice. If an AI assistant is to participate as a health care team member, it must be trained to understand the clinical workflow so as to avoid interrupting at critical moments or distracting other caregivers. Knowledge of this kind requires a sense of proportionality (so that the AI can judge when a medical event is important enough to justify an interruption) as well as "social skills" so that it can communicate effectively while respecting the skills and judgment of the other team members.
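
A toy sketch of that proportionality judgment: interrupt only when an event's urgency outweighs the cost of breaking the team's current workflow. The workflows, costs, and urgency scores below are made up for illustration; a real system would need far richer clinical context than a single threshold.

```python
# Invented interruption costs per workflow stage; higher means the
# team is harder to interrupt safely.
WORKFLOW_INTERRUPT_COST = {
    "surgery": 0.9,   # interrupting here is very costly
    "rounds": 0.5,
    "charting": 0.2,
}

def should_interrupt(event_urgency: float, workflow: str) -> bool:
    """Interrupt only if urgency exceeds the cost of breaking in."""
    return event_urgency > WORKFLOW_INTERRUPT_COST.get(workflow, 0.5)

# A critical lab value justifies an interruption even during surgery;
# a routine reminder during rounds does not.
print(should_interrupt(0.95, "surgery"))  # True
print(should_interrupt(0.30, "rounds"))   # False
```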

Tools must ensure that data is not accessed and utilized by third parties without user consent. Giving third parties such as governments, employers, or insurance companies access to this detailed data about user behavior might result in ethically problematic health policies. Even the option of disclosing one's data in return for a lower premium could function as an ethically questionable incentive if it results in non-disclosure being "punished."

Given AI solutions' heavy reliance on data sets, and given that only a select few countries have embarked on the AI race, societies need safeguards (both national and international) to mitigate biases in data collection, prevent discrimination and over-generalization in AI technologies, and assure equitable access to data. Those safeguards need to factor in cultural specificities to ensure that AI norms and standards are not imposed by a select few on the rest of the world's population.

Summarized by Ashveena Gajeelee, Fellow, Global Access in Action, Berkman Klein Center for Internet and Society, Harvard University. @ashgajeelee

Read the full paper here.
