When Will Artificial Intelligence Be Good Enough To Help Prevent Suicides?
In 1990, my friend Jimmy killed himself. He was in high school. Jump to 2005, and my brother Chris killed himself. In 2010, my mother-in-law also took her own life. Later that same year, my close friend and bandmate Danger Dave followed suit. Then, in 2015, my wife Michelle took her own life, leaving our daughter without her mother.
When you are the person on the receiving end of so many suicides, it messes with your own life. Empty holes that can never be filled attack your career path and knock your sense of self off kilter. It’s the survivors of suicide who are left behind to deal with the aftermath. Often, they become advocates for helping others through the pain, drawing on their intimate knowledge of what it is like to lose someone in such a manner. People often say, “unless it has happened to you, you can’t truly know what it’s like.” That goes for any form of PTSD.
To address rising suicide rates, recent advances in Artificial Intelligence aim to augment (if not replace) mental health counselors, friends, and family.
The best known of these new technologies is Woebot, a chatbot developed at Stanford University that runs on Facebook Messenger. It uses Cognitive Behavioral Therapy (CBT) techniques to “get to know you” over a period of six weeks.
If Woebot thinks you are suicidal, it will offer up some resources you can connect with, but in my opinion, if someone wanted a suicide hotline, they could easily find that information on a search engine. If you are suicidal and using a chatbot, you want a magical solution to make things better right then and there. That’s a tall order for anyone to fill.
I wonder whether Woebot would have been able to help any of the five people in my life who took their own lives.
Each of their personalities and situations was different, and Artificial Intelligence needs to be able to pick up on such human subtleties in the context of the moment. Programmers can try to script that, but the task is enormously difficult. We aren’t just talking about a bunch of “If / Then” algorithms, like a mention of child abuse automatically leading to a new set of questions and answers. There is also the data itself: each answer provided to the chatbot becomes a record stored in a database, which raises questions about personal data privacy. Will the chatbot sell you out, so you suddenly start seeing advertisements and content for the latest pharmaceuticals to tackle depression? Will someone hack your most intimate confessions and use them against you?
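To make the limitation of “If / Then” branching concrete, here is a minimal sketch of how keyword-triggered rules work. The keywords and replies are hypothetical illustrations, not any real product’s logic:

```python
# A toy rule-based responder: hypothetical trigger keywords mapped to
# canned replies, illustrating simple "If / Then" branching.
RULES = {
    "hopeless": "That sounds really heavy. What has your week been like?",
    "abuse": "I'm sorry you went through that. Would you like to talk about it?",
}

DEFAULT = "Tell me more about how you're feeling."

def respond(message: str) -> str:
    """Return a canned reply if a trigger keyword appears, else a default."""
    lowered = message.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    return DEFAULT

# The gap: a message carrying the same meaning without the expected
# keyword falls straight through to the generic default reply.
print(respond("I feel hopeless"))           # keyword matched
print(respond("What's the point anymore"))  # same sentiment, no match
```

A real system is far more sophisticated than this, but the underlying problem scales with it: meaning that isn’t phrased the way the rules anticipate goes unrecognized.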
“If you are suicidal and using a chatbot, you want a magical solution to make things better right then and there.”
The question of medical records and big-data mining of personal information is an entirely separate article, but it is a huge factor in Artificial Intelligence solutions for mental health.
What the issue comes down to, in my opinion, is that these days many people use texting as their preferred method of communication. About 50% of adults aged 18–24 say text conversations are just as meaningful as a phone call.
While a human speaking on the phone can generally interpret someone’s mood from vocal cues (especially if they have been through training), chatbots are expected to read text alone and still empathize and pick up on what isn’t being said. This is where the current gap between AI and human support systems lies.
Luckily, there is a middle ground between the two. Enter the Crisis Text Line.
The Crisis Text Line offers free, 24/7 support and is staffed by human volunteers who have been through at least thirty-four hours of training.
As Artificial Intelligence gets better, many of these issues will be solved by people much smarter than I am. But for someone who has lost too many people to suicide, those solutions can’t come fast enough.
Just a few hours after this article was published, this appeared in a Facebook support group:
My point exactly.