Digitally Designed Care

An Analysis of AI Therapy Chatbots

Valerie
Design Ethics
7 min read · Jun 3, 2024


Would you tell your secrets to a robot?

Artificial Intelligence (AI) based therapy chatbots sit at the intersection of a variety of fields and techniques from computer science and psychology. Large Language Models (LLMs) are computer programs trained on large bodies of text to detect patterns in language and generate natural-language responses to user inputs. AI therapy chatbots are an application of LLMs designed to provide therapeutic mental health care in place of a human therapist. A patient can text the “therapist” and receive replies that aim to read as though another human, one with humanly impossible specialized intelligence, were responding.

[Image: ChatGPT, an LLM, explains itself]
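To make the text-in, text-out mechanics concrete, here is a minimal sketch of the kind of loop such a chatbot might run, assuming the OpenAI Python SDK. The model name, system prompt, and sample message are illustrative placeholders, not taken from any real therapy product.

```python
# Minimal sketch of an LLM-backed "therapist" loop (illustrative only).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and system prompt are placeholders, not a real product's.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system",
     "content": "Respond supportively, in the style of a person-centered therapist."},
]

def reply(user_text: str) -> str:
    """Send the running conversation to the model and return its next message."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("I've been feeling anxious about work lately."))
```

Everything the user types is folded into a growing transcript and sent back to the model, which is what allows the replies to feel like a continuous, attentive conversation.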

In the pursuit of more accessible, lower-cost, quality mental and emotional health resources, important questions about humanity, security, form and function, and design arise. Conversational AI has great potential to help vulnerable groups: with too few providers and therapists of variable quality, AI-augmented care promises to increase accessibility and decrease cost to users. However, when analyzed critically through the lens of feminist ethics, AI therapy chatbots are unethical.

How did this all begin?

Even from the beginning, AI chatbots have had an interpersonal focus that blurs the relationship between human and robot.

ELIZA, a natural language processing computer program developed by Joseph Weizenbaum in 1966, is regarded as the first chatbot. It functioned the way we imagine a typical chatbot would, simulating conversation with a human by using pattern matching to determine its responses. The script ELIZA used to respond to users, called DOCTOR, was modeled on person-centered psychotherapy. This patient-led form of talk therapy emphasizes the autonomy of the patient and places the therapist in a non-directive role: the therapist isn’t trying to change the patient’s thought patterns or interpret their emotions or actions; instead, the patient is assumed to have all the power and ability to self-realize and facilitate change. A supportive and private space with an empathetic and unconditional therapist is central, rather than education, tools, or training, in contrast to other forms of therapy like Dialectical Behavior Therapy, which focuses on building skills in areas like mindfulness and emotional regulation.

ELIZA had preprogrammed responses and, as ‘natural language processing’ implies, could process users’ statements, reflect them back, and add questions and prompts, creating an illusion of human-like empathy and intelligence. This calls into question what it means to be human, intelligent, and capable of care. Despite being believable in basic conversation, computers cannot fully understand complex natural language; they can only apply mathematical likelihoods and statistics, never with the ease of a real human.

[Image: Model of a conversation with ELIZA]
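For a sense of how simple this machinery is, here is a toy sketch of ELIZA-style pattern matching in Python. The rules and reflections are illustrative stand-ins, not Weizenbaum's original DOCTOR script.

```python
# Toy sketch of ELIZA-style pattern matching (not the original DOCTOR script).
import random
import re

# First- to second-person swaps so the user's phrase can be echoed back.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "you": "i"}

# Each rule is a regex plus response templates; {0} is filled with the
# reflected text captured by the first group.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i need (.*)", ["What would it mean to you to get {0}?"]),
    (r".*\bmother\b.*", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "Can you say more about that?"]),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the user's own words can be repeated back to them."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza_response(user_text: str) -> str:
    """Return the first matching rule's template, filled with reflected user text."""
    text = user_text.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(eliza_response("I feel ignored by my coworkers."))
# e.g. "Why do you feel ignored by your coworkers?"
```

Everything the "therapist" says is a template filled in with the user's own reflected words, which is exactly the echo-and-prompt behavior that made DOCTOR feel empathetic without any understanding behind it.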

What is feminist ethics?

Given the goal of providing care in therapy, feminist ethics is a relevant framework for analyzing AI therapy chatbots. Feminist ethics emphasizes care as both an action and an emotion, bringing important concerns to the table in the digitalization of mental health care. It encourages us to analyze the relationships, power structures, and social systems within which design exists.

Its emphasis on community and interdependency compels us to question what we lose in the shift from the human to the technological, and what it means to create fully virtualized experiences.

Can I trust you?

Under feminist ethics, the existence of trust in the relationship between robot and human is brought into question. For care to take place in the traditional therapeutic model, there must be trust between patient and provider. This trust is found not only in a legally sound environment, but also in an emotionally safe space. With AI therapy bots, the sanctity of both of these elements is unclear.

Especially in comparison to the standards human providers in psychology are held to today, there is great concern about how data is stored and anonymized. Unlike human providers, private AI companies have no mandated ethical obligations. Human providers, under the Health Insurance Portability and Accountability Act (HIPAA), must take great care to anonymize data and store it in appropriate and secure ways. With AI, there is less clarity and security. Given the way data functions within AI systems, data the user does not directly share with the bot can be deduced and included in their profile or future interactions, unbeknownst to them. Data can also be reused and transformed within the ‘black boxes’ of algorithms, effectively obscuring where the user’s data is and possibly even using it in ways unknown to the developers. So, even if users are willing to share specific information, there can be no full assurance of data protection.

[Image: Example of a HIPAA compliance checklist]

Non-private data, or data deemed unimportant and treated as ‘data exhaust,’ can also still be sold and used in inappropriate ways, like advertising. In 2023, the FTC found that BetterHelp, an online therapy platform (not AI-assisted), had been sharing “the health information of over 7 million consumers with platforms like Facebook, Snapchat, Criteo, and Pinterest for the purpose of advertising.” Feminist ethics explicitly demands transparency from companies about what data is stored, where, and how it is used, so they can better serve the populations they claim to care for.

The fantasy that AI chatbots create in their emulation of humanity requires further regulation and responsibility. Companies shift responsibility onto the users seeking care, claiming that disclaimers and terms of service are sufficient, but feminist ethics insists that care and empathy come first, meeting users where they are and respecting their experiences.

[Image: Note the disclaimer at the bottom, despite the impression of honesty from the therapist]

Are you impartial and non-discriminatory?

Systems like AI, driven by data and reduced to numbers, also risk encoding existing social hierarchies and perpetuating discrimination against marginalized groups. As Ruha Benjamin describes in her book Race After Technology, technology can be racist through the way “racism structures the social and technical components of design.”

The large data sets that AI depends on “are rife with racial (and economic and gendered) biases,” leading to robots reflecting “deeply ingrained cultural prejudices and structural hierarchies.” (Race After Technology, page 59)

Institutional racism and structural inequity can be encoded into AI therapy chatbots by their data, developers, or users, negatively impacting society and failing to care for all equally. Racism need not be driven by explicitly racist intentions or prejudices, which many claim chatbots are incapable of holding. Even with no intention of racism or discrimination, the racist and offensive outcomes that technology enables are still problematic. Continuously allowing companies to shield themselves from responsibility by claiming no ill intent does not solve the inequities.

Similarly, Invisible Women by Caroline Criado-Perez details the hidden sexism within data, which leads to physical infrastructure and technology that fail to serve women sufficiently. Historical biases against women are encoded in data, and the cycle is perpetuated by technology built on that data and by design done without women at the table. When Siri, Apple’s AI assistant, was first launched, she didn’t know what rape meant. The implications of omissions like this for applications in therapy are only compounded by the concerning use of gendering in personifying AI therapists. People have been observed verbally abusing chatbots, and the extent of the abuse depends on the chatbot’s gender. Without the social obligations of respect and responsiveness present in treatment with a human therapist, the consequences of this virtual abuse are obscured from the patient but can still perpetuate harm.

Do you know what I mean?

These cultural and societal factors, like racism and sexism, impede the ability of AI therapy chatbots to provide sufficient care to patients with diverse backgrounds and identities. For a safe space in therapy to be established, both parties must share a reality to a certain extent. A human therapist, with personal relationships and experiences, provides a real-world basis for care and empathy. This real-world understanding is also informed by personal observations and formal education about existing social structures. An AI model is not capable of sharing this reality.

An AI therapist can never truly relate to the discrimination a human faces in the world, despite being able to falsely display understanding and empathy.

With no actual conscience or emotions, AI therapy chatbots blur the relationship between provider and patient. Feminist ethics allows us to explore the inherent falseness of an AI model’s apparent humanity, and what this implies for the role of the therapist and for the shared reality with the patient.

So what?

Given the concerns raised by feminist ethics, I propose that companies prioritize data transparency to facilitate trust, consciously work against the prejudices of social systems, and confront the power dynamics present in a digitized world. In an ever-evolving technological landscape, with growing public access to and use of AI systems, these ethical considerations are essential to providing care. Feminist ethics provides an intersectional, action-oriented, humanitarian approach to AI therapy chatbots, given their inevitable development and growth.
