Affective Computing: When AI Tries to be Your Therapist

Eunicecho · WRIT340EconFall2022 · Dec 5, 2022 · 7 min read

Type “depression” or “insomnia” into an app store, and you will get a dizzying list of results. Thousands of “wellness” apps like Headspace advise people on breathing exercises and other techniques to help with mindfulness. In addition, apps like Woebot and TalkLife claim to relieve anxiety and depression with games, mood tracking, social networking, or AI chatbots.

According to the National Institute of Mental Health, only about half of adults with mental illness in the United States receive treatment, in part because the mental healthcare system is understaffed (National Institute of Mental Health). Moreover, according to the medical journal The Lancet, global cases of depression and anxiety rose by more than 25% during 2020, the first year of the COVID-19 pandemic. Rarely has there been a more urgent need to help people cope with mental health issues. For tech startups looking to capitalize on unmet needs, this structural gap represents over 50 million potential customers. Alongside the broader shift to online services, mental health apps built on affective computing are booming. The technology aims to detect and interpret human emotions using sensors, voice and sentiment analysis, computer vision, and machine learning tools (Gold, 2021). With the industry forecast to reach $37 billion by 2026, mental health treatment may come to rely predominantly on bots and wearables (Crawford, 2021).
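
To give a rough sense of how the simplest of these systems work, here is a deliberately simplified, hypothetical sketch of a lexicon-based mood scorer of the kind a text-analysis feature might build on. The word lists, weights, and threshold are invented for illustration; real products typically rely on trained machine-learning models rather than hand-written lexicons.

```python
# Toy illustration of lexicon-based mood scoring.
# The lexicon, weights, and threshold below are hypothetical and far
# simpler than anything a real affective-computing system would use.

NEGATIVE = {"hopeless": -2.0, "tired": -1.0, "anxious": -1.5, "alone": -1.5}
POSITIVE = {"calm": 1.0, "grateful": 1.5, "rested": 1.0, "hopeful": 2.0}

def mood_score(text: str) -> float:
    """Sum word weights to produce a crude valence score for a text entry."""
    lexicon = {**NEGATIVE, **POSITIVE}
    return sum(lexicon.get(word.strip(".,!?"), 0.0) for word in text.lower().split())

def flag_for_follow_up(text: str, threshold: float = -2.0) -> bool:
    """Flag an entry whose score falls below a fixed, arbitrary cutoff."""
    return mood_score(text) < threshold

if __name__ == "__main__":
    entry = "I feel tired and anxious, and I am alone most days."
    print(mood_score(entry))          # -4.0
    print(flag_for_follow_up(entry))  # True
```

Even at this toy scale, the design choices, such as which words count, how much they weigh, and where the cutoff sits, are judgment calls rather than clinical facts.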

However, the surge in mental illness during the COVID-19 pandemic prompted the FDA to loosen regulatory oversight of digital therapy and to accelerate the approval of services without peer-reviewed research (Royer, 2021). The agency’s aim, in theory, was to expand care for patients with depression, anxiety, obsessive-compulsive disorder, and insomnia during a period when in-person services were limited and potentially unsafe; it reasoned that digital devices could improve the mental health and well-being of patients with psychiatric conditions during isolation and quarantine while alleviating the burden on hospitals (FDA). Nevertheless, the lack of proper regulation endangers mental healthcare practice, which handles highly vulnerable personal data that can significantly affect an individual’s mental well-being. Because the stakes are high, medical devices must traditionally be tested, validated, and recertified after any software change that could affect safety or accuracy. Alarmingly, however, digital products tend to get soft treatment from the FDA. “Wellness” apps that promote a healthy lifestyle or help people manage a condition without making specific treatment recommendations may be exempt from FDA regulation altogether, meaning they are not required to register with the agency or to issue correction and removal notifications. Dr. Tom Insel, a psychiatrist and neuroscientist, argues that these apps focus on the problem of access while neglecting the critical question of quality, and even experts who see real potential for innovation in mental health treatment acknowledge that consumers receive little guidance on selecting a reputable option.

Startups and corporations are collaborating to develop technology that can predict and model human emotions for clinical therapy. Some apps are meant to be used alongside in-person therapy, others on their own. App-based chatbots such as Woebot, for example, use emotional artificial intelligence to simulate the techniques of cognitive behavioral therapy, a standard method for treating depression, and to offer advice on sleep, worry, and stress. Some of the most popular services, like Talkspace, BetterHelp, and Ginger, promise access to treatment with a licensed therapist via text, phone, or video (Skibba, 2021). Most products make money by charging consumers a monthly or yearly fee, with the option to purchase extras such as video sessions with a therapist.

Despite the rush to build applications on it, affective computing for mental health is still in its infancy and has been promoted as a panacea without scientific validation. Companies like Woebot nonetheless market their service as clinically validated, citing a study conducted by researchers with financial ties to the company, and claim that AI bots can form meaningful connections with users comparable to therapist-patient relationships (Skibba, 2021). Scientists, however, disagree on the extent to which technology should be used to interpret human experience. Johannes Eichstaedt, a psychologist at Stanford University, argues that “AI can serve as a screening and early warning system,” but he gives current detection systems a C grade for accuracy (Skibba, 2021). One issue with the algorithms Eichstaedt and others are building is that they monitor facial expressions or words that are only blurred indicators of a person’s inner experience. It is, he says, similar to a doctor identifying noticeable symptoms without understanding what disease is causing them. Yet the industry has largely ignored this uncertainty in its rush to profit from the digitization of healthcare.

One of the main concerns about affective computing in mental healthcare is that the technology cannot encompass the broad range of human emotional experience and consistently reflects its developers’ biases. Researchers Ruth Aylett and Ana Paiva note that affective computing requires quantifying qualitative relationships: developers must choose clearly among competing alternatives and map emotional constructs onto structures internal to software units (2012, as cited by Royer, 2021). But verbal expressions of emotional states in chatbots, and the voice inflections or gestures captured by wearable devices, vary from population to population, and affective computing systems struggle to capture this diversity of human emotional experience (Royer, 2021). When programmers encode qualitative emotions as computer data, they rely on emotion configurations with flimsy specifications. Because emotions are not strictly quantifiable, the indicators such software generates are, at best, educated estimates, yet few developers acknowledge the severe limitations of this process.
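
The sketch below illustrates the kind of forced quantification described above. The cue names, emotion labels, and weights are hypothetical, and real systems use learned models rather than hand-built tables, but the underlying problem is the same: ambiguous signals must be collapsed into a single label, discarding the context that makes the same cue mean different things for different people.

```python
# Hypothetical illustration of mapping qualitative cues to fixed numbers.
# A furrowed brow might mean distress, concentration, or bright sunlight;
# the table below has to pick one weighting and apply it to everyone.
CUE_WEIGHTS = {
    "furrowed_brow": {"sadness": 0.6, "anger": 0.3, "focus": 0.1},
    "slow_speech":   {"sadness": 0.5, "fatigue": 0.5},
    "short_replies": {"sadness": 0.4, "focus": 0.6},
}

def infer_emotion(observed_cues: list[str]) -> str:
    """Sum per-cue weights and return the single highest-scoring label."""
    totals: dict[str, float] = {}
    for cue in observed_cues:
        for label, weight in CUE_WEIGHTS.get(cue, {}).items():
            totals[label] = totals.get(label, 0.0) + weight
    # Ties and near-ties are resolved arbitrarily; the single output label
    # hides how uncertain the underlying evidence actually is.
    return max(totals, key=totals.get) if totals else "unknown"

if __name__ == "__main__":
    print(infer_emotion(["furrowed_brow", "short_replies"]))  # "sadness"
```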

In the design of AI systems, unequal participation by end users from different backgrounds is a significant source of algorithmic bias and unfairness. Design researchers frequently recruit technologically savvy individuals who identify as early adopters, but products will not reflect minority users’ needs unless design processes are diverse and inclusive. Recent assessments of AI ethics have also revealed a lack of enforcement mechanisms and consequences for ethical violations (Vilaza & McCashin, 2021). In one well-known case, a widely used healthcare algorithm was less likely to refer Black patients than equally ill white patients to programs that support care for people with complex health needs (Science, 2019). The data used to develop a product may therefore fail to represent the groups it is meant to serve (Vilaza & McCashin, 2021). In the case of chatbots, a lack of attention to justice in the production and use of language models leads to racist, sexist, and discriminatory dialogue (Crawford, 2021).

Today, software developers are not required to disclose which artificial intelligence and machine learning techniques power their applications. Workplace software or wearable monitors that detect depression could cost people their jobs or lead to higher insurance premiums (Royer, 2021). BetterHelp and Talkspace, two counseling apps that connect users with licensed therapists, were found to share sensitive information about users’ mental health histories, sexual orientation, and suicidal thoughts with third parties (Gold, 2021). Consumers may also remain unaware that their insomnia app is running mood analytics in the background; Fitbit, for example, has added stress management to its devices (Royer, 2021). Yet few of us know where this data goes or how it is used.
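
To make the privacy concern concrete, here is a purely hypothetical sketch of the kind of analytics event a wellness app could transmit to a third-party service in the background. The endpoint, field names, and values are invented for illustration and do not describe any specific app; the point is that sensitive inferences can leave the device wrapped in ordinary-looking telemetry.

```python
# Purely hypothetical example of a background analytics event.
# The endpoint, field names, and values are invented for illustration;
# they do not describe any real app's behavior.
import json

analytics_event = {
    "endpoint": "https://analytics.example.com/v1/events",  # hypothetical third party
    "payload": {
        "user_id": "a1b2c3",              # pseudonymous, but still linkable
        "event": "mood_check_completed",
        "inferred_mood": "depressed",     # a sensitive health inference
        "sleep_hours_last_night": 4.5,
        "device_model": "Pixel 7",
        "ad_tracking_id": "f9e8d7",       # enables cross-app profiling
    },
}

print(json.dumps(analytics_event, indent=2))
```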

Aside from privacy and efficacy concerns, we need to consider how digital solutions can create new inequities in service delivery rather than addressing the shortage of mental health resources. Digital devices designed to help with emotion regulation, like the MUSE headband and the Apollo Neuro Band, cost $250 and $349, respectively (The Economist, 2021), so individuals are steered toward cheaper self-treatment through guided meditation and bot-based conversational applications. Even among smartphone-based services, many hide their full content behind paywalls and high subscription fees. If accessibility across diverse user groups is left unaddressed, those who cannot afford in-person therapy will have to rely predominantly on chatbots of questionable quality.

Moreover, there are significant disparities in access to technology and in digital literacy across the United States. These gaps limit users’ ability to make informed health decisions and to meaningfully consent to the use of their sensitive data, even as their phones put so many services within reach. Because digital technologies are affordable and scalable compared with in-person care, certain communities may end up relying on this lower tier of mental health services (The Economist, 2021). Such trends also shift responsibility for mental health onto users rather than caregivers or structural systems. Government regulation should at least push companies toward transparency: we need both the technical sophistication and the scientific evidence to be confident that digital solutions can help address the mental health crisis.

Treating mental health and behavioral issues is becoming a profitable business venture for developers with no certification in therapeutic services. Regulation should push companies to compete on privacy, safety, and evidence instead of aesthetics, page ranking, and brand awareness. A collective effort, from public awareness to governmental oversight, is needed to ensure that these digital tools do more good than harm to users’ wellness.

References

Crawford, K. (2021, April 6). Time to regulate AI that interprets human emotions. Nature News. https://www.nature.com/articles/d41586-021-00868-5

Gold, J. (2021, June 21). In a murky sea of mental health apps, consumers left adrift. California Healthline. https://californiahealthline.org/news/article/mental-health-apps-tech-startups-cognitive-behavioral-therapy-saturated-market-unregulated/

Skibba, R. (2021, June 4). The computer will see you now: Is your therapy session about to be automated? The Guardian. https://www.theguardian.com/us-news/2021/jun/04/therapy-session-artificial-intelligence-doctors-automated

The Economist. (2021, December 11). Dramatic growth in mental-health apps has created a risky industry. https://www.economist.com/business/2021/12/11/dramatic-growth-in-mental-health-apps-has-created-a-risky-industry

Royer, A. (2021, October 14). The wellness industry’s risky embrace of AI-driven mental health care. Techstream. https://www.brookings.edu/techstream/the-wellness-industrys-risky-embrace-of-ai-driven-mental-health-care/

Vilaza, G. N., & McCashin, D. (2021). Is the automation of digital mental health ethical? Applying an ethical framework to chatbots for cognitive behaviour therapy. Frontiers in Digital Health. https://www.frontiersin.org/articles/10.3389/fdgth.2021.689736/full
