Chatbot Therapy

Is artificial empathy possible?

Bjørnar Winnberg
Convertelligence
5 min read · Mar 25, 2019


The use of mental health and wellness applications on smartphones or other portable devices is increasing. (Photo by Siv Fjørtoft)

Many have wondered what makes us human: what, exactly, differentiates us from our bestial brethren and, more recently, from our mechanical creations? Compared with our animal counterparts, many argue, it is our ability to reason that makes us unique. Yet over the last 100 years we have passed that capacity for logic and reason on to our artificially intelligent machines.

Artificially intelligent mecha-beings surround us. Siri, Alexa, Cortana, Google Assistant: these programs absorb our data output (e.g. our vocal speech, our web searches, our app usage, our text messages, our online purchases) to shape our digital experiences. Bots exist for a host of uses and can be programmed to perform even highly complex tasks. Advancements in technology have enabled our mecha-golems to learn, and learning, at its core, is simply adapting to knowledge gained through experience.

Our world is advancing quickly in ways that reduce human contact; more and more of daily life is becoming automated. Adapting to this trend, Google has developed an AI feature known as Duplex¹, essentially an adaptive conversational agent created to mimic human sentience. Complex algorithms can aptly predict what song to play next on our music streaming services or what our next purchase should be. They can also be programmed to match our emotional states by interpreting our diction, intonation, and even our rate of speech.

Many argue that human brains are essentially biological computers. We receive input from a combination of external and internal stimuli, assess the data collected, interpret it, and then act on our interpretations. A friend may say they are emotionally “OK”, but if the words are spoken with a quiver in the voice and a solemn expression painted on the face, we may know they are not OK. Voice and facial recognition of this kind is readily available in current technology, which raises the question: can we use these features, in addition to adaptive learning, to create a sentient therapeutic chatbot?
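To make the idea concrete, here is a minimal sketch, in Python, of how word choice and vocal delivery might be combined to catch the kind of mismatch described above. Everything in it is a placeholder: the word lists, the audio features (pitch_variance, speech_rate), and the thresholds are invented for illustration, not drawn from any real speech-analysis system.

```python
# A minimal, illustrative sketch of pairing what was said with how it was said.
# All word lists, feature names, and thresholds below are invented placeholders;
# a real system would derive these signals from trained speech and sentiment
# models rather than hand-written rules.

POSITIVE_WORDS = {"ok", "okay", "fine", "good", "great"}
NEGATIVE_WORDS = {"sad", "tired", "hopeless", "alone", "awful"}


def text_sentiment(utterance: str) -> int:
    """Crude word-choice score: positive words add one, negative words subtract one."""
    words = utterance.lower().replace(",", " ").replace(".", " ").split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)


def words_match_voice(utterance: str, pitch_variance: float, speech_rate: float) -> bool:
    """Return False when upbeat words arrive with a shaky, slow delivery."""
    upbeat_words = text_sentiment(utterance) > 0
    shaky_voice = pitch_variance > 0.8 or speech_rate < 0.5  # placeholder thresholds
    return not (upbeat_words and shaky_voice)


if __name__ == "__main__":
    # A client says they are "okay", but the (hypothetical) audio features disagree.
    print(words_match_voice("I'm okay, really", pitch_variance=0.9, speech_rate=0.4))  # False
```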

It is frequently argued that human empathy is purely experiential; that it is learned. Many therapists, psychologists, and psychiatrists pursue advanced studies to home in on this notion. At its most basic, therapy may be described as expert listening with empathy. Mental health professionals essentially reflect, summarize, and dissect what the client is saying, through both the words the client speaks and the way those words are spoken. Is it then feasible that an AI chatbot could adapt to a client’s tone and identify underlying emotions through word choice in order to offer therapy to those who suffer from mental health issues?

There is a shortage of mental health professionals in many areas of the world, and even those who do have access to care can realistically receive only limited interventions. But imagine: a therapeutic chatbot equipped with sentience technology, readily available 24/7, whereas a therapist is available for only one hour, one to three times a week.

Current literature supports the increasing use of mental health and wellness applications on smartphones and other portable devices. This could be a monumental step towards making mental health services more available to the masses. Developing the right AI technology could essentially mean having a therapist in one’s pocket. The algorithms can not only scan and assess direct verbal interactions but also evaluate patient conversations with third parties, through phone calls, texts, or even web searches. The system could be programmed to identify key red-flag words such as “suicide”, “depressed”, or other negative-emotive words, and to initiate a session request. Arguably, this smart technology is needed because many patients will not readily seek out a therapy chatbot during a depressive episode. Such check-in and/or follow-up prompts could be as simple as “Hello, [name]. How are you feeling today?” Initiating a therapeutic conversation this way can be highly impactful, especially if a patient is isolating himself or herself.
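As a rough illustration of that check-in idea, here is a small sketch. The flag words “suicide” and “depressed” come from the paragraph above; the extra word, the function names, and the greeting template are hypothetical choices made only for the example, and nothing here should be read as an adequate clinical safeguard.

```python
# Illustrative sketch of the red-flag check-in described above. "suicide" and
# "depressed" are the article's examples; "hopeless" stands in for the "other
# negative-emotive words" it mentions. A production system would also need
# clinician escalation paths, consent handling, and far more robust detection.

RED_FLAG_WORDS = {"suicide", "depressed", "hopeless"}


def contains_red_flag(text: str) -> bool:
    """Check whether any red-flag word appears in the text."""
    lowered = text.lower()
    return any(word in lowered for word in RED_FLAG_WORDS)


def maybe_initiate_check_in(name: str, recent_messages: list[str]) -> str | None:
    """Return a check-in prompt if recent activity contains a red-flag word."""
    if any(contains_red_flag(message) for message in recent_messages):
        return f"Hello, {name}. How are you feeling today?"
    return None


if __name__ == "__main__":
    prompt = maybe_initiate_check_in("Alex", ["I've been feeling really depressed this week"])
    if prompt:
        print(prompt)  # "Hello, Alex. How are you feeling today?"
```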

Of course, this AI technology raises ethical concerns. Confidentiality is the most important feature of mental health interventions. All technology carries the risk of being hacked or breached, and this is a prime reason even human mental health professionals hesitate to provide treatment via video call, phone, or email. In treatment, transparency is a key aspect of building rapport, gaining trust, and creating a healthy therapeutic environment. A bot that is always listening may be a little off-putting, as going at the client’s pace, based on what they voluntarily disclose, is integral to a successful intervention.

Carl Rogers, founder of Person-Centered Therapy and a world-renowned psychologist, believed “a self-directed growth process would follow the provision and reception of a particular kind of relationship characterized by genuineness, non-judgmental caring, and empathy”². Most theoretical orientations emphasize the importance of unconditional positive regard (objectivity and no judgment), empathy (the ability to understand and relate to the client on a deep level), and congruence (genuineness) during the therapeutic process. These are in direct opposition to artificial programming, as they emphasize the importance of identifying with another human being on an elemental level. They are conveyed through dialogue, body language, and vocal cues.

Theoretically, a chatbot could use dialogue alone to convey these qualities; however, it would always be disingenuous. But does that actually matter? Given how rapidly the tech world is expanding, there is little doubt that a bot can be programmed to be an elite and sentient therapist. The true question, however, is this: will human clients ever be able to bond with, or develop sufficient rapport with, a programmed AI? Ay, there’s the rub.

[1] CBInsights (2019). What’s Next In AI? Artificial Intelligence Trends: 2019.
https://www.cbinsights.com/reports/CB-Insights_AI-Trends-2019.pdf

[2] Thomas, J. C., & Segal, D. L. (Eds.). (2006). Comprehensive handbook of personality and psychopathology, personality and everyday functioning (Vol. 1). Hoboken, NJ: John Wiley & Sons Inc.

Convertelligence is now known as Kindly. Visit our website to see what else is new!
