4 Ways AI Is Tackling Depression

Lorny P. · Published in LatinXinAI · 6 min read · Jul 23, 2018

A survey of where AI (machine learning, image classification, and virtual assistants) is being deployed to tackle mental health issues.

Maybe some algorithms can help the Zoloft guy…

Back in February, I was at the Hack Mental Health hackathon here in San Francisco. Over 100 people got together during Super Bowl weekend to create solutions for mental health challenges. Proposed solutions ranged from a multitude of chatbot applications to anonymous discussion apps to mood-tracking hardware, with Huddle, an online group therapy app, winning 1st place.

Huddle (Remote Group Therapy Application) wins 1st Place

While at the hackathon, I heard tons of stories from engineers, founders, and other investors about how they or a family member had suffered from anxiety, depression, bipolar disorder, or PTSD, among other conditions. These conversations had me wondering what solutions are currently available and, in particular, where AI* is tackling these issues.

I. Machine Learning + Suicide Prevention

Last year, the broadcast of a 14-year-old girl's suicide on Facebook Live prompted efforts by Facebook and other social media outlets to prevent users from self-harm. Currently, Facebook is using machine learning to identify users in emotional distress. But how does it work?

To determine whether a user is at risk, the algorithms look for suicide- or self-harm-related words and phrases in a user's posts or in comments from concerned friends. The pattern recognition program then flags the post and surfaces a "report post" button for users to click. The goal is for the system to learn to pick up signals from various data points and trigger a response to the at-risk user or to those who can help them.

Facebook Live’s prompt to help a user at risk of self-harm

Facebook has been training its algorithms on tens of thousands of posts, learning to flag comments reported by friends who are concerned about another friend. For now, the algorithms are limited to text; however, Mark Zuckerberg announced last year that Facebook’s AI Research lab (FAIR) is researching methods to identify worrying photos and videos.
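
To make the mechanics concrete, here is a minimal sketch of the kind of text classifier such a system could start from: a TF-IDF bag-of-words model trained on posts labeled by whether concerned friends reported them. The data, model choice, and threshold here are simplified assumptions for illustration, not Facebook's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = post was reported by concerned friends.
posts = [
    "I can't take this anymore, I want it all to end",
    "Had a great time at the beach with friends today",
    "Nobody would miss me if I were gone",
    "Excited to start my new job on Monday!",
]
labels = [1, 0, 1, 0]

# TF-IDF features over unigrams and bigrams feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; above a threshold, surface support resources
# or the "report post" prompt.
risk = model.predict_proba(["I feel so hopeless lately"])[0][1]
if risk > 0.5:
    print("Surface support resources and the report option")
```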

Facebook’s Research on Learning Non-Verbal Interaction Through Observation

Although Facebook hasn’t published any recent findings on using facial recognition to identify a user at risk, FAIR has shared its approach to learning to predict appropriate responses to a user’s facial expressions through observation, which the lab claims will serve as a fundamental building block in facial expression research.

II. Machine Learning + Brain Imaging

Researchers at Carnegie Mellon University and the University of Pittsburgh have developed a new approach to identifying suicidal individuals by analyzing alterations in how their brain activation patterns represent certain concepts, such as death, cruelty, and trouble.

Carnegie Mellon University

Professors Marcel Just (CMU) and David Brent (University of Pittsburgh) applied a machine learning algorithm to the neural representations of subjects who had suicidal thoughts and subjects who did not. The algorithm was applied to the six word-concepts (death, cruelty, trouble, carefree, good, and praise) that best discriminated between the two groups as participants thought about each one in the brain scanner. Based on the brain representations of these six concepts, their program was able to identify, with 91 percent accuracy, whether a participant was from the suicidal or the non-suicidal group.
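
As a rough illustration of the classification step, here is a sketch with synthetic data standing in for the real fMRI features. The setup (a Gaussian Naive Bayes classifier scored with leave-one-participant-out cross-validation, a common choice for small neuroimaging samples) is an assumption for illustration; the study's actual pipeline differs, and random data like this will hover near chance rather than the 91 percent reported on real scans.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-in: one row per participant, concatenating activation
# features for the six concepts (death, cruelty, trouble, carefree,
# good, praise). Group sizes and feature count are illustrative.
n_per_group, n_features = 17, 6 * 50
X = rng.normal(size=(2 * n_per_group, n_features))
y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = suicidal-ideation group

# Leave-one-participant-out cross-validation over the whole sample.
scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print(f"Mean classification accuracy: {scores.mean():.0%}")
```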

Participants with suicidal thoughts experienced different emotions when they thought about the test concepts, like death or cruelty; in particular, they showed signs of shame and sadness at the mention of death.

The team at CMU is not new to these analyses. Just and his team pioneered the application of machine learning to brain imaging with the first computational model able to predict the unique brain activation patterns associated with names for things you can see, hear, feel, taste, or smell. Since then, their research has been extended to identifying emotions. The team hopes that their findings can be applied to a larger sample and eventually used to predict and prevent suicide attempts.

III. Image Classification + Self-harm Content

Researchers from Arizona State University, Michigan State University, and the University of Washington recently published a model for understanding and predicting self-harm content. Using a large data set from Flickr, the group combined text analysis, owner analysis (behavioral patterns like posts per day), temporal analysis (the time of day when people post), and visual analysis of the pictures posted. What I found most interesting was the team’s visual analysis, which used color patterns in pictures to understand the emotion a picture represents.

Image example from Flickr: Text analysis, owner analysis, temporal analysis & visual analysis

Using variables like hue, saturation, and brightness, the group calculated average values for photos in self-harm and normal content (as presented in the table below). Their findings show that photos in self-harm content have lower average brightness, which tends to express more negative sentiment.

Visual Analysis of Color Patterns in Flickr (self-harm tagged) Images
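
For intuition, here is a minimal sketch of this kind of color-feature extraction using Pillow and NumPy: the average hue, saturation, and brightness of a single image. The filename is a placeholder, and the study's exact feature definitions may differ.

```python
import numpy as np
from PIL import Image

def hsv_means(path):
    """Average hue, saturation, and brightness (value), each scaled to [0, 1]."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    return hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean()

# "photo.jpg" is a placeholder path for any RGB image.
hue, sat, brightness = hsv_means("photo.jpg")
print(f"hue={hue:.2f} saturation={sat:.2f} brightness={brightness:.2f}")
```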

IV. Chatbots + Cognitive Behavioral Therapy

Chatbots exploded onto the scene a few years ago, when there seemed to be a chatbot for nearly every human need. For mental health, chatbots like Woebot, Wysa, and X2AI’s Sara help users who are depressed or anxious reframe their situations through cognitive behavioral therapy (CBT) techniques. Woebot, founded by Stanford’s Alison Darcy, with Andrew Ng chairing its board, automates CBT by following a sequence of steps to identify and reframe negative thinking.

Feeling sad, lonely, or anxious? Woebot can help you with a lot
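
As a toy illustration of that scripted flow (nothing like Woebot's real implementation), a rule-based turn might look for a common cognitive-distortion cue in the user's message and prompt a reframe:

```python
# Cue words mapped to common cognitive distortions (a tiny, hand-picked list).
DISTORTIONS = {
    "always": "all-or-nothing thinking",
    "never": "all-or-nothing thinking",
    "everyone": "overgeneralization",
    "nobody": "overgeneralization",
    "should": "'should' statements",
}

def cbt_step(thought: str) -> str:
    """One scripted turn: name a possible distortion, then prompt a reframe."""
    words = thought.lower().split()
    for cue, label in DISTORTIONS.items():
        if cue in words:
            return (f"That sounds like {label}. What evidence do you have "
                    f"for and against that thought?")
    return "Thanks for sharing. How does that thought make you feel?"

print(cbt_step("I always mess everything up"))
```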

Mental health chatbots continue to improve with developments in natural language processing and the application of machine learning and deep learning to conversational agents. For now, these chatbots definitely do not replace proper therapy, but they do provide real-time responses that can brighten a user’s mood and make them feel less lonely or alienated. Both Woebot and Sara operate through Facebook Messenger and are improving the accuracy of their responses.

It’s great to see tech providing solutions to mental health problems. Hopefully we will start to see solutions that are accessible and affordable to all, especially those who really need them. If you know of any cool research being done in this realm that I left out, please add it in the comments below!

*By AI, I am referring to machine learning, image classification, and virtual assistants.

LatinX in AI Coalition

We are happy to feature Latinx in AI researchers, scientists, engineers, entrepreneurs, and writers in our Medium Publication. Thanks to Lauren Pfeifer for submitting this amazing piece to be shared with our network!

Want your work to be featured in our publication? Email us at latinxinai@accel.ai.

Check out our open source website: http://www.latinxinai.org/

Do you identify as Latinx and work in artificial intelligence, or know someone who does?

Add to our directory: http://bit.ly/LatinXinAI-Directory-Form

If you enjoyed reading this, you can contribute good vibes (and help more people discover this post and our community) by hitting the 👏 below — it means a lot!

Lorny P. is an investor, engineering student, Latina, and space enthusiast, obsessed with rocketry, A.I., and all things mechatronics.