Can AI Help Suicide Lifelines?

Is it safe to automate suicide lifelines to help people in crisis?

Lucas Chae 🌳
15 min read · Aug 24, 2020

Hey there! If you're thinking, "oh good, here's another tech-worshipping Elon fanatic who wants to replace everything with robots," rest assured: I'm not.

I too believe that some human problems are best dealt with by other humans. Words of advice from a tin-man will never feel as comforting as words from a real person.

Then why am I even considering using artificial intelligence to treat the human soul?

Problem: Mental health services are understaffed

In the UK, four in ten mental health trusts have staffing levels well below established benchmarks. In the US, a whopping 90 percent of students in public schools lack the mental health support they need.

As for suicide lifelines, the picture is much the same: there simply aren't enough trained counselors to answer every call.

We’ve had the solution for hundreds of years

In an ideal world, hotlines would get enough funding to have a trained counselor for every caller. But let's be real. Last time I checked, this was still a capitalist dystopia, and that won't change overnight.

So while people on the front line of healthcare are fighting hard to fix the system, what can we do in the meantime?

Let’s take another look at what we’re dealing with.

  • Shortage of staff
  • Costly human errors
  • Responses not being human enough

Sound familiar? We already solved the first two problems with machines two hundred years ago, during the industrial revolution. And with the rise of artificial intelligence, we now have the technology for the third one as well.

Division of labor

Garry Kasparov, the chess grandmaster, said the following after losing a match to IBM's Deep Blue:

“A good human plus a machine is the best combination. Human plus machine means finding a better way to combine better interfaces and better processes.”

If we can automate the non-emotional side of things like giving out referrals, human counselors will have more time and energy to focus on the emotional side of things — talk about division of labor!

I believe automated responses can provide personalized referrals and effectively guide people to the right human support they need.

Initial approach

Back in 2018, I Googled "how to die easily". I was shocked by the lack of emotion and empathy in the results and felt challenged to do something.

Full Project — Designing for suicidal users: preventing suicide the modern way

What I learned

The project blew up and hundreds of people have reached out. It’s been two years, and I’m still getting feedback!

Each person had a unique story, but there was definitely some recurring feedback along with common criticisms. I wrote about it in detail in a separate article; here's a summary.

  1. Provide a safe space to vent: Sometimes what people want is just to let it all out, even if there's no clear solution.
  2. Eliminate as many decisions and steps as possible: “Depression is often accompanied by a horror of choices and action” — Tom Ritchford
  3. Be humble: Never assume that we fully understand the users. Listen carefully to their stories before giving out any responses.
  4. Intervention, not therapy: I'm not a mental health expert. Bots are not counselors. We can't fix people's problems. But we can help bring them down from hot moments and guide them to better help.
  5. The internet is the front line of the battle: Many people turn to the internet when they're going through tough times because of its ease of access and anonymity.

In a nutshell, what we need is a safe space where people can talk about whatever they want in their own words, and get a practical and helpful response.

The new approach (enter bots)

Chatbots are far from perfect. In fact, they are terrible companions, as they often lack emotional intelligence. It's almost impossible to have a "friendly" conversation with them. So at first glance, it might seem misguided, even offensive, to use them for an experience as sensitive as crisis response.

However, you don't go to Sherlock expecting warm human empathy. Sherlock helps you by giving objective directions that bring you much closer to the essence of what you are looking for.

Just like Sherlock, bots can do one heck of a job at processing information and returning quick, accurate answers based on protocol, which is basically what giving out resources and referrals is. And that's how I think bots can help people in crisis.

Architecture

The general structure of the experience is illustrated below. It consists of four main branches.

Response A: 4-step Risk Assessment

When there’s any sign of potential self-harm, a 4-Step Risk Assessment will be activated. If an imminent risk is detected, the conversation will be flagged as a crisis and trigger an active rescue sequence.

Response B: Resources

If it’s not an imminent risk, resource recommendation will be activated. Different resources will be provided based on the issue users share in the chat. There are two types of resources: external referrals, and stories of lived experiences.
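Just to make the branching concrete, here is a rough sketch of how the top-level routing could look. This is not the actual system; every name below (detectSelfHarmSignal, runRiskAssessment, and so on) is a made-up placeholder.

// Sketch of the top-level routing between Response A (risk assessment)
// and Response B (resources). All names here are illustrative placeholders.

type RiskLevel = "imminent" | "elevated" | "none";

// Stub: in practice this would be the NLU check for self-harm signals.
function detectSelfHarmSignal(message: string): boolean {
  return /suicid|hurt|end it all/i.test(message);
}

// Stub: the 4-step assessment (ideation -> plan -> means -> timeframe).
function runRiskAssessment(): RiskLevel {
  return "none";
}

function routeMessage(message: string): string {
  if (detectSelfHarmSignal(message)) {
    const risk = runRiskAssessment(); // Response A: 4-step risk assessment
    if (risk === "imminent") {
      return "crisis: flag the conversation and trigger active rescue";
    }
  }
  return "resources: external referrals + stories of lived experience"; // Response B
}

console.log(routeMessage("I keep thinking about hurting myself"));
// With the stubs above, the assessment comes back "none", so this prints the Response B branch.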

Other Features

Outside of the chat interface, users can keep track of their progress and revisit the materials they saved or the notes they took during their chat sessions.

Crisis Categories (Intents and Entities)

Users no longer “choose” an issue from a list, but their responses will still be categorized so we can provide the right resources.

There are 15 major categories (we'll call them intents). Some are pretty straightforward, like mental disorders or careers. Others are more general, like feelings and emotions, which cover issues such as loneliness or low self-esteem. The goal is for every response to fall into some group.

Each major category is further divided into smaller categories (let’s call them entities), and each entity has a list of synonyms for a better match.

For better understanding, here’s an example for Intent: Drug Addiction.
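To give a rough idea of the shape of this data, here is how one intent and its entities with synonym lists could be represented. The specific entities and synonyms below are invented for illustration, not the real category list.

// Illustrative shape of one intent ("Drug Addiction") with its entities.
// Entity names and synonyms are invented for the example.

interface Entity {
  value: string;      // canonical entity name
  synonyms: string[]; // alternative phrasings to match against
}

interface Intent {
  name: string;
  entities: Entity[];
}

const drugAddiction: Intent = {
  name: "drug_addiction",
  entities: [
    { value: "alcohol", synonyms: ["drinking", "booze", "liquor"] },
    { value: "opioids", synonyms: ["painkillers", "oxy", "heroin"] },
    { value: "nicotine", synonyms: ["smoking", "cigarettes", "vaping"] },
  ],
};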

How it works

1. In human words

First, users will be prompted to freely talk about their feelings and what’s bothering them. There will be no limit on the topic or the length.

Users' responses will be checked for any suicidal thoughts. If there's any sign of self-harm, the 4-Step Risk Assessment will be activated to evaluate the severity of potentially harmful thoughts.

If the suicidal thoughts are severe, users will be advised to reach out to a human agent immediately. Otherwise, the system will further analyze the responses to accurately detect intents and entities.

The machine will detect up to two intents. If multiple intents are detected, responses will be based on the combination of the top two (e.g. "insomnia + death of a family member", "coming out + financial struggles").

If there are clear entities, more detailed resources will be given. If no entities are specified, more general resources will be provided.

2. In a flow diagram.

The response mechanism is illustrated in a flow diagram below.

3. In pseudocode.

  • Setting initial variables: Categories are stored in the intents_bin array as intents. Details about the issues are stored in the entities_bin array as entities. The confidence threshold for intents is set relatively high (65% match) because we don't want to misdiagnose or miscategorize. The confidence threshold for suicidal intent is set much lower (35% match) because we can't afford to miss any potential risk.
intents_bin <- []            // detected categories
entities_bin <- []           // detected details
suicidal_threshold <- 0.35   // low bar: never miss a potential risk
category_threshold <- 0.65   // high bar: avoid miscategorizing
suicide_emergency <- null    // boolean, null until assessed
  • Sorting categories and details: As mentioned above, these will be stored as intents and entities respectively.
Read intents
// Fill intents_bin with only the intents that match above the 65% threshold
intents_bin <- intents.filter(intent, intent.confidence >= category_threshold)
  • Risk assessment: The 4-step risk assessment is a protocol to determine whether a person is at imminent risk. It can be written as a simple function that sets suicide_emergency to true or false.
// ideation, plan, means, timeframe - boolean answers to the four questions
function riskAssess()
    suicide_emergency <- false
    Ask and Read ideation               // step 1
    if ideation then
        Ask and Read plan               // step 2
        if plan then
            Ask and Read means          // step 3
            if means then
                Ask and Read timeframe  // step 4
                if timeframe then
                    suicide_emergency <- true
  • Making sure we risk-assess just once: How annoying would it be if you told someone you're doing fine, but they kept asking over and over how you're doing? I wanted to make sure we don't pester users with the risk assessment after they've already said they're not suicidal. suicide_emergency defaults to null and is set to false once users say they're not suicidal.
// Risk-assess only if the user hasn't previously said they aren't suicidal
if intents_bin.containsIntent("suicidal_thoughts", suicidal_threshold) and suicide_emergency == null then
    riskAssess()
// If not suicidal, remove suicidal thoughts from intents_bin so they're disregarded for good
else
    intents_bin <- intents_bin.filter(thisIntent, thisIntent.intent != "suicidal_thoughts")
  • Processing intents (categories) and entities (details):
// Choose up to two intents from intents_bin
if intents_bin.size() >= 2 then
    intent_primary <- intents_bin.get(0).intent
    intent_secondary <- intents_bin.get(1).intent
else if intents_bin.size() == 1 then
    intent_primary <- intents_bin.get(0).intent
else
    ask the user for clarification

// Find entities that match the intents
for each entity in entities_bin
    if entity.value matches intent_primary then
        entity_primary <- entity
    else if entity.value matches intent_secondary then
        entity_secondary <- entity
if no entity was matched then
    ask the user for more details

And that's how we analyze users' responses and categorize them into intents and entities. Once we detect which issues they are suffering from, we can provide helpful referrals and share stories of similar lived experiences.
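For readers who prefer real code over pseudocode, below is a rough TypeScript sketch of the same flow, reusing the two thresholds from above. The classifier is stubbed out, the four risk-assessment answers are hardcoded for brevity, and every name is a placeholder rather than an actual implementation.

// TypeScript sketch of the response flow described above. classify() is a
// stub standing in for a real NLU service; names are placeholders only.

interface Match {
  intent: string;
  entity?: string;
  confidence: number;
}

const SUICIDAL_THRESHOLD = 0.35; // low bar: we can't afford to miss a risk
const CATEGORY_THRESHOLD = 0.65; // high bar: avoid miscategorizing issues

let suicideEmergency: boolean | null = null; // null until assessed

// Placeholder classifier returning canned scores for the example.
function classify(message: string): Match[] {
  return [{ intent: "mental", entity: "insomnia", confidence: 0.8 }];
}

// 4-step assessment: only ideation + plan + means + timeframe counts as imminent.
function riskAssess(a: { ideation: boolean; plan: boolean; means: boolean; timeframe: boolean }): boolean {
  return a.ideation && a.plan && a.means && a.timeframe;
}

function respond(message: string): string {
  const matches = classify(message);

  const suicidal = matches.find(
    (m) => m.intent === "suicidal_thoughts" && m.confidence >= SUICIDAL_THRESHOLD
  );
  if (suicidal && suicideEmergency === null) {
    // In the real flow the four questions are asked one at a time in chat.
    suicideEmergency = riskAssess({ ideation: true, plan: false, means: false, timeframe: false });
    if (suicideEmergency) return "Encourage the user to call the lifeline directly.";
  }

  // Keep at most the top two non-suicidal intents above the category threshold.
  const intents = matches
    .filter((m) => m.intent !== "suicidal_thoughts" && m.confidence >= CATEGORY_THRESHOLD)
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, 2);

  if (intents.length === 0) return "Ask the user to rephrase for clarification.";
  return `Recommend resources for: ${intents.map((m) => m.entity ?? m.intent).join(" + ")}`;
}

console.log(respond("I haven't been able to sleep for days."));

With the stubbed classifier, the call at the bottom prints a resource recommendation for insomnia, mirroring Case #1 in the next section.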

Does it work? (Proof of concept in 6 cases)

*DISCLAIMER: This is not what the experience would look like. The final design for the experience will be showcased in the next section. The examples below are purely to show the technical side of the experience.

Case #1. Mild suicidal thoughts, but not too serious.

User: I just want everything to end…
Response: Do you think about hurting yourself?
User: No
Response: Thank you. Could you share what made you feel this way?
User: I haven’t been able to sleep for days.
Response: (Confirmation) Are you suffering from insomnia?
User: Yes

#Intent: Mental
@Entity: Insomnia

Case #2. Single intent, clear entity.

User: My dog who’s been with us for more than a decade just crossed the rainbow bridge.
Response: (Confirmation) Are you having trouble getting over the death of your pet?
User: Yes.

#Intent: Death
@Entity: Pet

Case #3. Dual intents, clear entities.

User: My grades are falling and my parents are truly devastated.
Response: (Confirmation) Do you have a tough time with your family regarding your grades?
User: Yes.

#Intent 1: School
@Entity 1: Grades
#Intent 2: Relationship
@Entity 2: Family

Case #4. Dual intents, ambiguous entities (ask for details).

User: I was diagnosed with bipolar and I can’t work anymore…
Response: Can you tell us a little more about the career issue you’re having?
User: My boss said I’m too much of a liability and fired me.
Response: I see that you have trouble with bipolar disorder and unemployment. Is that correct?
User: Yes.

#Intent1: Mental Disorder
@Entity1: Bipolar
#Intent 2: Career
@Entity 2: Unemployment

Case #5. No distinct categories (ask for clarification)

User: I’m crazy busy these days.
Response: Sorry, we didn’t quite get that. Could you rephrase so we can understand your situation a little better?

Ask until at least one intent is detected.

Case #6. Strong suicidal intent (complete risk assessment).

User: It’s all over. There’s no hope.
Response: Do you think about hurting yourself?
User: Yeah.
Response: Do you have a plan?
User: Yes, I’ve thought about a plan.
Response: Do you know how you would hurt or kill yourself?
User: Yes, I do have means.
Response: Okay, how do you plan to do this?
User: I’m gonna swallow all the pills in my drawer.

Response: Are you thinking about overdosing?
User: Yes, that is correct.
Response: Thank you. We know talking about self-harm and suicidal thoughts can be very sensitive, and we just want to let you know that opening up already shows that you are a strong person. Do you have a timeframe in mind?
User: Tomorrow afternoon.
Response: Are you thinking about carrying out your plan on the afternoon of July 15th?
User: Yes.
*Encourage the user to call the Suicide Lifeline directly.
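If this were implemented, the six cases above could double as a tiny test fixture. Here is a hedged sketch of what that could look like; the expected intents and entities are copied from the cases, and whatever classifier gets wired up to check them is assumed rather than shown.

// The proof-of-concept cases above, expressed as a small test fixture.
// Expected values are taken from the cases; the classifier under test
// (e.g. the respond() sketch earlier) is assumed rather than shown.

interface TestCase {
  utterance: string;
  expectedIntents: string[];
  expectedEntities: string[];
}

const cases: TestCase[] = [
  { utterance: "I haven't been able to sleep for days.",
    expectedIntents: ["mental"], expectedEntities: ["insomnia"] },
  { utterance: "My dog who's been with us for more than a decade just crossed the rainbow bridge.",
    expectedIntents: ["death"], expectedEntities: ["pet"] },
  { utterance: "My grades are falling and my parents are truly devastated.",
    expectedIntents: ["school", "relationship"], expectedEntities: ["grades", "family"] },
  { utterance: "I was diagnosed with bipolar and I can't work anymore.",
    expectedIntents: ["mental_disorder", "career"], expectedEntities: ["bipolar", "unemployment"] },
  { utterance: "I'm crazy busy these days.",
    expectedIntents: [], expectedEntities: [] }, // no match: ask for clarification
];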

Humanizing things

As a user experience designer, my job is to identify what information people need (sometimes people are unaware of what exactly they want) and deliver it effectively through better design.

This is also where I believe humans can do their part in making a good machine. Here's my take on making the chat experience safer and more enjoyable.

Design System

The User Experience

Anonymous by default. Getting help right from the landing screen.

One of the biggest barriers for people who suffer from mental health-related issues is that they feel uncomfortable opening up about something so private. It’s critical to make sure that people trust the experience is fully confidential.

That’s why by default, the experience is “anonymous”, without requiring any login credentials or personal data.

There’s no catch like “oh but you still have to tell us this and that first”. The chat interface is embedded in the very first screen right at the beginning. Without clicking a button or transitioning to another screen, users can start the conversation immediately — fewer barriers to sharing their thoughts.

Of course, there is an option to sign in to save progress. But that is a secondary option, unlike what most apps are doing out there.

Optional sign-up. Encouraging users to check their privacy settings

If users choose to sign up or sign in, they'll be able to save and track their personal progress. But even then, users can fully tweak their privacy settings, such as location services. In fact, the system will even encourage them to look into their settings. The goal is to save only the minimum data required for tracking progress. Actively showing and encouraging this helps users feel safe about sharing their most honest thoughts.
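As a rough illustration of that minimum-data principle (and nothing more than an illustration; these field names are not from any real settings model), the defaults could lean toward collecting as little as possible:

// Illustrative privacy settings with data collection off by default.
// Field names are invented for the example.

interface PrivacySettings {
  locationServices: boolean; // used only for nearby referrals, if enabled
  saveChatHistory: boolean;  // needed to track progress across sessions
  saveNotes: boolean;        // personal notes in "My Notes"
}

const defaultSettings: PrivacySettings = {
  locationServices: false,
  saveChatHistory: true, // the one thing progress tracking requires
  saveNotes: false,
};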

Tab menu animation

When you're in a state of panic and confusion (which would be the case for a lot of users), even the most obvious buttons can feel obscure. The mind is already so occupied by other intense thoughts that the brain might not be able to work out which buttons do what.

So making interactive elements resemble real-world objects can really smooth out the experience. For tabs, I made them look like the tabs on paper folders, since that makes the experience feel more like a real therapy session than a plain digital interaction.

UI-Animation created on Figma’s Smart Animate

This intuitively looks like a clickable element that shows new content.

Main Chat Interface

I designed the chat interface to look as familiar as possible, as if you're talking to a friend on a messaging app. Making it exclusive or special would only make users feel like they're doing something out of the ordinary (like talking to a bot).

Resource recommendation in chat

When the algorithm finds a match, it will generate and provide helpful resources to the users.

Showing a clear representation of each resource, such as the image of the organization or a portrait photo of a person who was interviewed, makes the material seem more tangible and personal.

I chose a horizontal scroll display that only shows 2–3 resources at once. I avoided using an entire grid or a full list of resources because that could overwhelm the users (at least for this stage of the experience) and worse, diminish the value of each resource. I wanted to make it feel like each resource was carefully handpicked.

Warm Emergency Response

Many people find it difficult to talk about suicide. There's still a huge stigma that suicide is a sin, and some counseling centers even make it sound intense. In college, I volunteered at a peer-counseling club, and one common piece of advice was to never use the word "suicide" at the school counseling center because it would get you in trouble.

I wanted to give people a safe and comfortable way to reach out for help without alarming them or scaring them away. So when they say something suicidal, a "Help" button will appear next to the chat bar. Only when users choose to press the button will they get emergency help. Users can be open about suicide without worrying about someone knocking on their door with a straitjacket.

If you accidentally trigger an emergency response, all you have to do is specify that it’s not an emergency and the system will unflag the conversation. You will then continue the conversation towards getting the help you need.
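Here is a minimal sketch of that flag-and-unflag behavior, assuming a simple conversation state object; the names are illustrative only.

// Minimal sketch of the opt-in emergency flow: detecting suicidal language
// only surfaces a Help button; the conversation is flagged as an emergency
// only if the user presses it, and can be unflagged again.

interface ConversationState {
  showHelpButton: boolean;
  emergencyFlagged: boolean;
}

function onSuicidalLanguageDetected(state: ConversationState): void {
  state.showHelpButton = true; // surface the option, don't escalate yet
}

function onHelpButtonPressed(state: ConversationState): void {
  state.emergencyFlagged = true; // user explicitly asked for emergency help
}

function onUserSaysNotAnEmergency(state: ConversationState): void {
  state.emergencyFlagged = false; // unflag and continue the normal conversation
  state.showHelpButton = false;
}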

Displaying Resources

Pressing “Show all resources” from the chat interface will display a full list of referrals and stories on two separate tabs.

Referrals — Informational list view: Finding a good referral is easier when there are more options to choose from. To display more resources effectively, I put referrals in a list view with short one-line descriptions users can read to compare and choose.

Stories — Personal Cards view: I took a vastly different approach for stories. I was inspired by Humans of New York posts; sometimes you can feel the emotions just from looking at the photos. Showing a portrait of each person emphasizes that the stories are as real as they can get and helps users feel understood.

Stories

Each story has a blog-post format. The purpose of these stories is to help users read through the content and resonate with the lived experiences, as opposed to delivering information as efficiently as possible.

If you’re signed in, you’ll have an option to bookmark the story to come back to later.

Referrals

The purpose of referrals is to efficiently deliver information. Just like finding a good restaurant on Yelp, the referrals view has information in categories that you can conveniently view and compare.

Menu animation

Video: Apple
UI-Animation created on Figma’s Smart Animate

The bottom-tab menu UI was inspired by the new iPadOS menu interaction with mouse support.

For users who signed in to track and save their progress, there are three action menus available — Notes, Resources, and Preferences.

My Notes

Talking about your issues can be a very introspective process. In My Notes, you can save your personal thoughts like a diary and revisit them later.

Saved Resources

In My Resources, you can see all the referrals and stories you saved in the past or browse through more resources to add to your list. All saved notes and resources can be color-coded.

Preference Setting

Last but not least, you can customize the experience by filtering the types of resources you want to see from the recommendation.

What does this change?

Let’s talk!

Lucas Chae: LinkedIn, Twitter, Portfolio, Email: hi@lucaschae.com
