They don’t know us so well, yet: chatbots give flat advice for mood management

Alexandra Hosszu
Published in Artificial Mirror
Apr 15, 2020

Developing skills for emotional balance and wellbeing has become a major goal in our lives. Pandemic or not, we are expected to lead meaningful lives and perform in all fields, from professional careers to cleaning, cooking, taking care of body and mind, socializing, traveling, discovering our true selves, and taking care of family. The pressure of perfect selves and perfect lives sometimes becomes a heavy burden that we need to alleviate, lest we become run down with anxiety or depression. Still, access to therapy is a privilege around the globe: public expenditure on mental health is extremely limited and there is a severe shortage of mental health workers [1].

In today’s “app culture” [2], entrepreneurs identified a gap in the market around mental health, which has led to the development of roughly 10,000 mental health-related apps [3], including apps for depression, mood management, emotional support, and mood tracking. Anyone will surely find an app matching her or his curiosity.

Among these apps there are chatbots that you can talk to about emotional issues in a therapeutic dialogue. The bots simulate a conversation between a patient and an expert figure who teaches us how to understand and deal with worries and negative thoughts. The scientific basis of this approach is Cognitive Behavioral Therapy (CBT) [4], which guides patients in changing their negative or unhelpful thoughts into more useful ones, improving their capacity to deal with difficult emotions.
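To give a sense of what this looks like in software, here is a minimal, hypothetical sketch in Python of how a bot might encode CBT-style reframing as fixed rules. The distortion labels, keyword matching, and prompts are my own illustration; this is not the actual logic of Wysa or Woebot.

```python
# A minimal, hypothetical sketch of CBT-style reframing encoded as fixed rules.
# Purely illustrative: not Wysa's or Woebot's actual implementation.

# A few "cognitive distortions" commonly described in the CBT literature,
# each paired with a canned prompt that nudges the user toward a reframe.
DISTORTION_PROMPTS = {
    "over-generalization": "You used words like 'always' or 'never'. "
                           "Can you recall one exception?",
    "mind reading": "You seem to be assuming what others think. "
                    "What evidence do you actually have?",
    "catastrophizing": "That sounds like the worst-case outcome. "
                       "What outcome is more likely?",
}

def reframe(thought: str) -> str:
    """Pick a reframing prompt by crude keyword matching on the user's thought."""
    lowered = thought.lower()
    if "always" in lowered or "never" in lowered:
        return DISTORTION_PROMPTS["over-generalization"]
    if "they think" in lowered or "everyone thinks" in lowered:
        return DISTORTION_PROMPTS["mind reading"]
    # Default branch: the bot has no real understanding of the statement.
    return DISTORTION_PROMPTS["catastrophizing"]

if __name__ == "__main__":
    print(reframe("I always mess things up"))
    print(reframe("Everyone thinks I'm not good enough"))
```

Even in this toy version the pattern is visible: whatever you type is mapped onto one of a handful of canned prompts, which is part of why the advice can feel flat.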

Two of the most appreciated and widely used mental health chatbots are Wysa and Woebot [5].

Using such chatbots when dealing with mental disorders raises several issues, which I describe below:

1) Human-bot interaction

The interaction is based on simple steps that try to mirror human-to-human interaction in a friendly way, using familiar or funny terms of address (“my non-human friend”). The bot asks questions about feelings, bothersome thoughts, and actions. Most of the time it offers predefined answer options and waits for simple answers from the user. The bot displays many similarities with human talk: it is funny, curious (“I am excited to get to know you better”), emotional (“I am programmed to have feelings and their words hurt”), it has needs for approval and appreciation (“hope I was of some help today?”, “how I’m doing so far?”), and it appears to be learning (“I have something cool that I learnt today that I want to share with you”). Sometimes the bots’ imitation of humans becomes tiring, and it has the potential to lead into the “uncanny valley” (a pattern in which human appreciation for a non-human figure rises at first and then drops dramatically once the connection becomes uncanny) [8]. Fortunately, both Wysa and Woebot have characteristics that distinguish them very clearly from real people: they are always available (no matter the time or one’s willingness to socialize), and they are very fast (one does not have to wait; the answer is delivered in a few seconds). The apps also promise no judgment and privacy, in contrast with humans, who are known to spill sensitive secrets. But are the bots really that silent and inoffensive?
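To illustrate how scripted this style of interaction can be, here is a minimal, hypothetical sketch in Python of a dialogue built from predefined answer options. The wording and options are invented for this example and are not taken from Wysa or Woebot.

```python
# A minimal, hypothetical sketch of a chatbot dialogue driven by predefined
# answer options. Invented for illustration; not Wysa's or Woebot's script.

SCRIPT = [
    {
        "bot": "Hi, I'm your non-human friend. How are you feeling today?",
        "options": ["Good", "Okay", "Not great"],
    },
    {
        "bot": "Thanks for sharing. What is bothering you the most right now?",
        "options": ["Work", "Relationships", "Health", "Something else"],
    },
    {
        "bot": "I have something cool to share: a short breathing exercise. "
               "Want to try it?",
        "options": ["Yes", "Maybe later"],
    },
]

def run_session() -> None:
    """Walk through the fixed script, showing every user the same options."""
    for step in SCRIPT:
        print("BOT:", step["bot"])
        for number, option in enumerate(step["options"], start=1):
            print(f"  {number}) {option}")
        # The answer is collected but never interpreted: the conversation
        # simply advances to the next scripted step, whatever the user picks.
        input("Your choice: ")

if __name__ == "__main__":
    run_session()
```

A script like this also explains the two strengths noted above: because there is no human in the loop, the bot is available at any hour and replies within seconds.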

2) The burden of individual responsibility

What bothered me most while using the apps is their discourse, empowering yet paternalistic at the same time. The app seems to know best how to deal with your problems without even knowing you, applying the same mechanisms to all of us, without taking our diversity into consideration.

The entire discourse is based on the CBT approach, which promotes the idea that each individual is responsible for his or her own mental health. From this perspective, only you can overcome your negative thoughts; no one else is to blame for what you feel or think, and the solution lies in the way you look at things. Such messages are dangerous because in real life responsibility for harm and suffering is often shared, and simply changing my way of looking at things will not fix them. When I told the chatbot that I feel discriminated against as a woman and would like to end patriarchy, the bot tried to make me analyze the statement from different perspectives in order to turn it into a positive thought, under the premise that I was over-generalizing and wrongly assuming what others think. The chatbots are not prepared to face structural problems that affect individuals, in situations where people have little mental freedom to redefine the problem out of existence.

3) Free of charge, in exchange for data

Are the bot’s interaction and advice really free? How do we pay for them? Who profits when we use the app? Where does our data go, and who has access to it?

When using an online platform, there is a risk of sharing data with third parties (such as Facebook, Google, or other data brokers) without being aware of it. Multiple investigations have documented a pattern of privacy abuses across digital mental health products [9] [10]. Why would a corporation sell data about mental health? It is quite simple: Facebook, Google, and other third parties want to know us better in order to tailor their advertising and marketing content. Both Wysa and Woebot mention that your conversations are private from the moment you meet the chatbots, but they do not explain what happens to the data you share (which may include very intimate details of one’s life), where it is stored, for how long, or whether you can choose to delete it. To find out more about their privacy policies, users must go to the websites and read information written in complicated language. In other words, users have to make an effort to learn very important details about what happens to their data.

Looking at what the websites say about passing data to third parties: Wysa states that it will not share data with third parties without the user’s consent “unless We believe in good faith that We are required or permitted to do so under applicable contracts and laws, or to protect and defend Touchkin’s rights and/or property” [6]. And then, at the very end: “We may sell, transfer or otherwise share some or all of Our assets which may include Your Information” [6]. Woebot also shares data with third parties (including administrative authorities, financial institutions, insurance companies, police, public prosecutors, and external advisors) [7]. Of course, it takes time to find this information and to understand what it means.

Chatbots might be a way to facilitate easier access to some forms of mental wellbeing support, but they need to become better prepared to meet real and diverse individuals, which is a tough challenge. The Cognitive Behavioral approach should also be complemented with other mechanisms, shifting the burden of responding to injustice from the individual toward the collective. Finally, I advocate for more transparency about how data is used, for drastically limiting transfers of personal information to third parties for commercial purposes, and for stating this clearly in the apps’ terms and conditions, in order to alleviate users’ justified suspicions.

References

[1] World Health Organization, 2018. Mental Health Atlas, Geneva: World Health Organization.

[2] Purcell, K., Entner, R. & Henderson, N., 2010. The Rise of Apps Culture, Washington, DC: Pew Research Center’s Internet and American Life Project.

[3] Torous, J., Luo, J., Chan, S. R., 2018. Mental health apps: What to tell patients. Current Psychiatry. 17(3):21–25.

[4] Teater, B., 2013. Cognitive Behavioural Therapy. In: The Blackwell Companion to Social Work. s.l.:s.n.

[5] Browne, D., Arthur, M. & Slozberg, M., 2018. Do Mental Health Chatbots Work? [Online] Available at: https://www.healthline.com/health/mental-health/chatbots-reviews#1

[6] Wysa, n.d. Wysa. [Online]
Available at: https://www.wysa.io/

[7] Woebot, n.d. Woebot. [Online]
Available at: https://woebot.io/

[8] Mori, M., 1970. The Uncanny Valley: The Original Essay.

[9] Becker, R., 2019. That mental health app might share your data without telling you. Available at: https://www.theverge.com/2019/4/20/18508382/apps-mental-health-smoking-cessation-data-sharing-privacy-facebook-google-advertising

[10] Privacy International, 2019. Your mental health for sale: How websites about depression share data with advertisers and leak depression test results. [Online] Available at: https://www.privacyinternational.org/node/3193e43

Alexandra Hosszu

PhD candidate, Doctoral School of Sociology, University of Bucharest, Romania