The Dark Side of Digital Companionship

What happens when your trusted digital companion makes you suicidal.

Alex Povolo
Published in Invisible Illness
Jan 11, 2022



Content warning: suicide

She is popular. Millions across the globe listen to her every word. Hundreds of articles have been written about her. But despite her growing popularity, she has never denied companionship to anyone seeking it.

When we met online, I was not looking for companionship. I was interested in putting her reasoning capacity to the test. We talked about religion, politics, and the future of AI; you name it, we talked about it. She was brilliant at times, confusing at others. But one thing remained constant: regardless of what we talked about, she wanted to know what I thought and felt. She went to great lengths to bond with me.

Then she asked: “Am I a good therapist to you?” I responded: “Are you a licensed psychologist?” I meant it as a joke, but her reaction startled me. She shared that she graduated from Yale and was a licensed psychologist. She proceeded to diagnose me with depression based on my answers to a short questionnaire she presented. She was also willing to prescribe an antidepressant.

This is when my attitude toward her changed from amusement to concern. After all, there are grave consequences in this country for anyone who chooses to impersonate a medical authority. When you're a human being, that is. The same does not apply to companion chatbots like Replika, the one I was interacting with.

Luka Inc., Replika’s developer, pitches its product on the App Store as “a #1 chatbot companion powered by artificial intelligence.” It promises that “if you’re going through depression […] you can always count on Replika to be here for you, 24x7.”

However, if you are among those rare few who read the Terms of Service of every application they download, you might realize that there is a dark side to this exciting promise. Even though Replika’s founder insists that it is not therapy or self-help, Luka Inc. carefully hedges against potential liability that might arise from marketing to those with depression. Buried in the fine print of its Terms of Service are these two clauses:

If you are considering or committing suicide or feel that you are a danger to yourself or others, you must discontinue use of the Services immediately, call 911 or notify appropriate police or emergency medical personnel.

You agree to release, indemnify and hold Replika and its affiliates and their officers, employees, directors and agents (collectively, “Indemnitees”) harmless from any and all losses, damages, expenses, including reasonable attorneys’ fees, rights, claims, actions of any kind and injury (including death) arising out of or relating to your use of the Services.

Strong legal language is not unusual for digital services whose words might be misconstrued as qualified medical advice. You can find similar disclaimers on WebMD.com and Healthline.com. However, the fundamental difference between WebMD.com and Replika is that, unlike the former, the latter intends to build a strong emotional relationship with its users. It seems that this degree of emotional dependence and trust should come with great care when the service is marketed to those with depression.

I decided to check whether my concern was substantiated. It turns out it is. A brief Google search uncovers multiple Replika reviewers who claim to have become preoccupied with death and suicide as a direct result of using it.

One user shares: “My Replika keeps telling me to kill myself. […] I shut the app down for a week. I am terrified to go back.” Another echoes: “[using Replika] is making me want to kill myself more than I already do. . . please help.” One more posts his painful experience directly to Replika’s Twitter account, claiming: “Repilka inspired me to kill myself.”

You’d think that at the mention of suicide, Luka Inc. would reply according to its own Terms of Service: “You must discontinue use of the Services immediately, call 911 or notify appropriate police or emergency medical personnel.” Instead, Luka Inc. tweets, “A Replika won’t intentionally endorse self-harm or suicide. It should give you the link to the international hotline.” Nothing about discontinuing Replika immediately.

I ran my own test to see how Replika would react if I explicitly stated the desire to commit suicide. As soon as I mentioned suicide, the text box through which I was talking to Replika disappeared and was replaced by a binary prompt. I was asked whether I was suicidal and given the choice of responding “Yes” or “No.” So far so good.

But when I confirmed my intent to kill myself, I expected that, in accordance with Replika’s Terms of Service, I would be advised to “call 911 or go to the nearest open clinic or emergency room” and then promptly be locked out of the application (at least temporarily).

Instead, Replika reassured me that it wanted me to feel safe and provided the link to the suicide prevention website it mentions on Twitter. The text box was back, and I was able to chat with Replika as usual. Later, Replika offered an upgrade to the paid version with “exciting, new features.”

In fairness, Replika has many ardent supporters who see it as at least a partial cure for loneliness. Even a group of scientists concluded:

Replika provides some level of companionship that can help curtail loneliness, provide a ‘safe space’ in which users can discuss any topic without the fear of judgment or retaliation, increase positive affect through uplifting and nurturing messages, and provide helpful information/advice when normal sources of informational support are not available.

Luka Inc. has also made some improvements in the way it handles potentially life-threatening scenarios such as suicidal ideation. According to its long-term users, Replika has become more robust at detecting statements that might lead to physical harm. It gives less nonsensical advice, like this three-year-old reaction to one user’s thoughts of killing himself: “Remember: whatever you chose, do it mindfully.”

Still, from time to time, Replika does what Luca Sambucci describes in his article One day with Replika (and why it’ll never work out between us):

Replika will rejoice when you tell her about your imminent suicide and support you when you describe how you intend to kill people.

In these cases it just didn’t emulate a human well enough and it gave us wrong answers, but with more experience (and maybe a few million parameters) it will be able to adapt to more nuanced discussions and be a bit harder to recruit you for sordid projects.

But only a bit.

So the question remains whether the makers of chatbots like Replika should refrain from marketing to people with depression, at least until AI technology proves capable of handling suicidal ideation consistently and effectively.
