Moving Beyond Chatbots In Mental Health: A Technologist’s Viewpoint

Pouria Mojabi
Supportiv
Nov 1, 2023

Teaching a machine to communicate like a human has long captivated me. The endeavor is not just a technological leap; it’s a bridge to new realms of human experience, echoing the pioneers’ dreams within digital landscapes.

I’m transported back to my formative tinkering with the Commodore 64 I had as a kid (thanks, Dad!). The surge of excitement is palpable even now as I write this. The terminal, the keyboard, the typing, the commands, and the computer actually responding back were out of this world. Being able to design a bare minimum game or program that could run on the machine was so exciting that I would happily skip sleeping at night.

Today, the frontier is AI Chatbots, a fusion of science and engineering with a transformative agenda. At their best, they’re not just programmed entities but vessels carrying the promise of unprecedented social impact. However, we’ve been distracted from this promise by short-term, simplistic solutions — especially in the realm of mental health.

Understanding The Quirks: Why Mental Health Chatbots Fall Short

In early February of this year, Shira Ovide of The Washington Post wrote an article I had been anticipating for years: “We keep trying to make AI therapists. It’s not working.”

“For at least 60 years, technologists have hunted for a mental health holy grail: a computer that listens to our problems and helps us.”

Woebot, Wysa, Youper, and Koko have collectively raised over $110 million in venture capital. Despite significant funding from prominent figures like Andrew Ng, a global leader in AI, these endeavors have yet to succeed in solving the mental health crisis.

In the months leading up to co-founding Supportiv, while I was immersed in building, testing, researching, and gathering feedback, I pondered whether machine intelligence could be the solution to our challenges. Is a “computer shrink,” as the article puts it, the holy grail?

My position, both in the past and presently, stands firm: AI undeniably offers game-changing potential for mental health. But shifting away from the human core — like relying solely on chatbots — won’t provide the complete solution.

I initially drafted this post in March, but set it aside; running a company is indeed demanding. But then the collapse of the mighty Babylon made what I was trying to convey very real.

This tweet from Hugh Harvey was the ping I needed to finish my draft! Babylon’s Chatbot “AI” was nothing but “a bunch of Excel spreadsheets with some decision trees written by junior doctors.” I was not surprised. Just glad that, finally, someone had pointed this out.

Tech like Babylon’s chatbot is not an AI in the scalable and generative sense we tend to ascribe to the term — just a bunch of rules jumbled together: a good old decision tree! From a purist perspective, rule-based systems aren’t “true AI.” And that’s evident in the user experience.

If you have tried any of the mental health chatbots on the market, one thing quickly becomes apparent: They home in on specific keywords. If you include too much detail or nuance, they get lost in the word soup. This is frustrating when it becomes clear you’re chatting with a set of code.
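To make that keyword problem concrete, here is a toy sketch of the rule-based pattern (the rules and canned responses are hypothetical, not any particular product's code). A message that happens to contain a keyword gets a scripted reply; anything more nuanced falls through to a generic fallback:

```python
# A minimal sketch of the keyword/decision-tree pattern many mental health
# chatbots follow. All rules and phrasings here are invented, purely to
# illustrate the approach.

RULES = [
    ({"anxious", "anxiety", "panic"}, "It sounds like you're feeling anxious. Try a breathing exercise."),
    ({"sad", "down", "depressed"},    "I'm sorry you're feeling low. Would you like a mood-boosting activity?"),
    ({"sleep", "insomnia", "tired"},  "Sleep troubles are hard. Here is a wind-down routine."),
]

FALLBACK = "I'm not sure I understood. Could you rephrase that?"

def reply(message: str) -> str:
    words = set(message.lower().split())
    for keywords, canned_response in RULES:
        if words & keywords:           # fire the first rule whose keywords appear
            return canned_response
    return FALLBACK                    # nuance that matches no keyword falls through

# A short, keyword-heavy message gets a canned answer...
print(reply("I feel anxious all the time"))
# ...but a longer, more nuanced message lands in the fallback — exactly the
# "word soup" failure mode described above.
print(reply("My mom's diagnosis has me spiraling and I can't focus at work"))
```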

Allison Darcy’s article on LinkedIn further corroborates this observation, using Woebot as an example. Woebot, too, is primarily rule-based, which, again, isn’t really artificial intelligence. Darcy has a great point on why this is the case: the healthcare industry is not so welcoming toward generative AI technology.

After all, even in rule-based systems, where AI’s potential responses are gated by human logic, things can still go severely wrong. And let’s not forget: “Man Dies by Suicide After Talking with AI Chatbot.”

Is ChatGPT a better solution than other chatbots?

Thanks to OpenAI, the oversold, limited “AI” days are ending. Even though it’s even riskier from a healthcare perspective, we are now in a new world shaped by ChatGPT (and all of its new-every-week clones). How will general machine intelligence as advanced as ChatGPT impact the mental health space, now and in the future?

My answer is that ChatGPT is changing things, but its tech will change MORE outside of the chatbot concept — by enhancing humans’ interactions with each other. Advanced machine intelligence will pivotally advance mental health care, but not through direct interaction with consumers.

Why Chatbots? Desperation.

In the realm of mental health, chatbots have emerged as a response to a dire shortage of providers, prohibitive costs, and pretty much everything else failing us. It is a bit of a societal heartbreak. Chatbots have stepped in to fill short-term gaps left by traditional systems. They’ve become a digital, yet fake, shoulder to cry on.

1. Prohibitive care costs and provider shortages

According to the National Council for Behavioral Health, the U.S. is projected to have a shortage of over 250,000 mental health professionals by 2025. And how does that impact real people who might need to seek care? 160 million Americans are left in regions with mental health provider shortages. This access gap is a large part of why chatbots may seem to be better than nothing.

Coupled with these disturbing statistics, a study by the Substance Abuse and Mental Health Services Administration (SAMHSA) revealed that 48.5% of adults with unmet mental health needs cited cost as a primary barrier. Cost was “the most commonly cited reason” that people, regardless of race, failed to access mental health care.

Low operating costs make chatbots an even more alluring quick fix to the untenable care access gap, even though 79% of Americans say they would never want to receive mental health care from a bot (Pew Research Center). And some 28% of people surveyed go so far as to say that chatbots should not be *allowed* in mental health.

2. Traditional social supports are dwindling

Family estrangement is thought to be on the rise. Remote work replaces organic in-person interactions with more stilted digital exchanges. The decline of “third places” makes it harder to meet new people. And, social frameworks for togetherness and belonging, like religion and church attendance, are vanishing from American daily life.

For years, friendship in America has declined, a trend that accelerated during the pandemic. Three decades ago, 3 percent of Americans told Gallup pollsters they had no close friends.

In 2021, surveys showed that over 40% of Americans have no close friends, 22% have made no new friends in the past five years, and 47% have lost more than one friend in the past year. In light of those statistics, it’s no wonder that 49% said they feel unsatisfied with their social circle size — and that three quarters of Americans feel lonely (regardless of how many friends they have).

And even for those with good social supports, many lament that in our age of stress and unbearable workloads, it is simply harder to be there for each other than in the past.

Given the average American’s desperate need for friendly interaction, chatbots could be seen as a solution (albeit, to me, a dystopian one).

3. Reactions and judgments have escalated in our society

Many of us have kept our thoughts and struggles to ourselves, in fear of friends’ or family’s reactions. Sometimes that fear even extends to our therapists. In an era when opinions may become hyper-polarized by news and social media, we may rightfully shy away from sharing what we’re going through.

So, the proposed solution becomes: “If humans are going to judge, maybe a chatbot won’t.” That’s the hope. However, even AI can make harmful judgments and “perpetuate and precipitate bias.” Another piece of misplaced optimism toward chatbots.

4. The hassle of scheduling appointments

What if you have a panic attack at midnight when there is nobody available?! Struggles don’t happen 9–5. And if they do, they don’t notify you in advance!

If you can get an immediate appointment? Great. But the average wait time for clinical care in the United States is “about six weeks,” according to the National Council for Mental Wellbeing. In the waiting period, minor struggles can escalate, trauma can compound, and negative beliefs surrounding one’s difficulties can cement themselves.

You don’t have to wait for a chatbot. But a chatbot can’t provide human connection.

5. Therapy doesn’t appeal to everyone

Not every struggle requires clinical care, even if you’re ok with the concept of therapy. However, therapy can also be explicitly unappealing to many groups of people — who still need some form of attention for their mental health.

This dilemma leads decision-makers to look for therapy alternatives that may capture subclinical needs or appeal to disenfranchised segments of the population.

Unfortunately, investors have largely failed to center time-tested modalities like peer support, which served entire communities around the world prior to the establishment of clinical care systems.

Instead, we have re-invented a broken wheel with chatbots.

As I near a decade of contributing to the mental health sector, I have aspirations and insights on how the world might transform and adapt better in this AI-driven era, leading to healthier individuals, families, and communities — using AI to enhance, rather than replace, our human capabilities.

In short: we turned to chatbots in mental health because our social networks and care systems are failing us. Introducing a more meaningful fix: Supportiv.

Supportiv Has Redefined Peer-to-Peer Support

Supportiv is the fastest on-demand peer-to-peer mental, emotional, and social support service. We connect users to compassionate, human-centered support in less than 30 seconds, available 24/7. Supportiv is anonymous, with deep attention to user privacy, data, and ethics.

Emerging in 2018, Supportiv broke through the noise, not just as another tech-enabled platform but as a visionary pioneer. From its inception, Supportiv was leaps and bounds ahead of its time.

Fast forward to 2023, in a world transformed by conversational AI like ChatGPT, the simplicity of digital interaction has become astonishingly apparent. People are left wondering why digital platforms didn’t embrace minimalism sooner. The answer? Supportiv did — and we did it with empathy. Five years ago, we dismantled the complex protocols that barricaded individuals from seeking help.

We simplified the journey to emotional support, reducing it to a single, powerful question: “What’s your struggle?”

The Birth Of Supportiv’s Moderator Ecosystem

While the rest of the mental health industry was busy playing musical chairs, scrambling over the same limited pool of licensed therapists, we at Supportiv decided to skip the game altogether. We didn’t get bogged down by geographical lines or licensing logjams. At Supportiv, we invented a new concept: the game-changing team of Supportiv Moderators.

In collaboration with our esteemed clinical advisor, Dr. Alejandro Martinez of Stanford (PhD, Clinical Psychology), we engaged psychology students — both graduate and undergraduate — initially from Stanford and UC Berkeley and later on from across the globe. We built from the ground up a robust Moderator Training Academy, an educational forge where knowledge, empathy, and technology meld.

Today, this once-experimental initiative has blossomed into a colossal operation spanning 25 diverse countries. Aspiring moderators immerse themselves in an intensive three-month curriculum, facing a rigorous succession of training sessions, tests, and practical scenarios. They emerge not just as moderators but as guardians of safe, empowering, and affirmative interaction.

Moderators empower our users to go from venting to actually feeling better. They wield the art of the ‘conversational arc,’ guiding dialogues from open-ended venting toward self-help healing techniques users can carry forward on their own.

Let’s spill the secret sauce behind our tech: our moderators aren’t just conversation wizards; they’re gold mines for training our AI. While everyone else was just scratching the surface, we at Supportiv went leagues ahead. We turned our platform into a powerhouse feedback loop, with users and moderators chiseling away at our AI’s learning curve. Sound familiar? Yeah, OpenAI took the hint with their ChatGPT strategy. Now, when it comes to grasping the human touch — feeling the emotional, mental, and even physical tremors — we’re ahead. Our AI doesn’t just ‘respond’; it understands, resonates, and reaches out.
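To illustrate the feedback-loop idea in miniature (the names and structures below are invented for illustration, not our internal API), here is a rough sketch of how a moderator's action on an AI suggestion can be logged as training signal:

```python
# A hypothetical sketch of human-in-the-loop feedback: when the AI proposes a
# response, the moderator's action (accept, edit, reject) is recorded and
# becomes labeled data for the next training round. Purely illustrative.

from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    suggestion: str        # what the AI proposed
    moderator_action: str  # "accepted", "edited", or "rejected"
    final_text: str        # what was actually sent to the user

feedback_log: list[FeedbackEvent] = []

def record_feedback(suggestion: str, action: str, final_text: str) -> None:
    feedback_log.append(FeedbackEvent(suggestion, action, final_text))

# Accepted and edited suggestions become positive (or corrected) examples;
# rejections become negative signal for the next fine-tuning pass.
record_feedback("Try naming one thing you can control today.", "accepted",
                "Try naming one thing you can control today.")
record_feedback("Have you considered just relaxing?", "rejected", "")

positives = [e for e in feedback_log if e.moderator_action in {"accepted", "edited"}]
print(len(positives), "positive examples,", len(feedback_log) - len(positives), "negative")
```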

If you would like to be considered for a paid moderator position, apply here.

The Human-Centric AI Of Supportiv

Supportiv puts AI behind the scenes and humans at the center. Here, AI isn’t the star of the show but the stagehand, working tirelessly behind the curtains to amplify the human connection at the center of our platform.

A Data Universe Unlike Any Other

Supportiv’s AI training dataset includes hundreds of millions of conversations on our platform, with moderator supervision to screen out trolling, racism, sexism, and bias. Coupled with the fact that these conversations are focused on daily life struggles, including mental and physical health, Supportiv’s data is unlike anything else on the market: a gold mine and a big force behind our superior AI.
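As a hypothetical illustration of that moderation gate (the fields and flags below are invented, not our actual schema), moderator review can determine which conversations ever reach a training set:

```python
# A minimal sketch of gating training data on moderator review. The
# `moderator_flags` structure and flag names are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class Conversation:
    conversation_id: str
    messages: list[str]
    moderator_flags: set[str] = field(default_factory=set)  # e.g. {"trolling"}

DISQUALIFYING_FLAGS = {"trolling", "hate_speech", "harassment", "bias"}

def eligible_for_training(convo: Conversation) -> bool:
    # Only conversations left unflagged by moderators are kept.
    return not (convo.moderator_flags & DISQUALIFYING_FLAGS)

corpus = [
    Conversation("c1", ["I'm struggling with burnout", "That sounds exhausting..."]),
    Conversation("c2", ["..."], moderator_flags={"trolling"}),
]

training_set = [c for c in corpus if eligible_for_training(c)]
print(len(training_set))  # -> 1: the flagged conversation is excluded
```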

Patented AI: Similarity Matching

Patent: Mojabi, P. & Plater-Zyberk, H.

Recommending online communication groups by matching unstructured text input to conversations.

Our cutting-edge system isn’t about throwing users into any static, old group chat. It’s about precision: dynamically placing individuals where conversations resonate with their experiences. It is also about concurrency and being data-driven. Unlike asynchronous Reddit- or forum-style solutions out there, Supportiv connects you to others going through a similar struggle in real time. Peer groups are small, capped at a maximum of 5 users. Supportiv was, and still is, the first platform to have introduced the concept of micro-community peer support groups.
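For a sense of the general idea, here is a toy sketch that matches a free-text struggle to the most similar open live group. It uses off-the-shelf TF-IDF and cosine similarity purely as an illustration; it is not the patented algorithm, and the group data is made up:

```python
# Toy illustration of similarity matching: route a new user's free-text
# "struggle" to the open group whose conversation topic is most similar.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

MAX_GROUP_SIZE = 5  # Supportiv keeps peer groups small

live_groups = [
    {"topic": "work burnout and overload",        "members": 3},
    {"topic": "grief after losing a parent",      "members": 4},
    {"topic": "anxiety about money and job loss", "members": 5},  # full
]

def match_group(struggle: str):
    open_groups = [g for g in live_groups if g["members"] < MAX_GROUP_SIZE]
    if not open_groups:
        return None  # in practice, a new group would be spun up
    corpus = [g["topic"] for g in open_groups] + [struggle]
    vectors = TfidfVectorizer().fit_transform(corpus)
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    return open_groups[scores.argmax()]

print(match_group("I can't keep up at work and I feel completely drained"))
# -> the burnout group; the full anxiety group is never considered
```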

Patented AI: Resource Matching

Patent: Plater-Zyberk, H. & Mojabi, P.

Resource recommendations in online chat conversation based on sequences of text.

Our AI sifts through chat conversations, offering resources — from articles and podcasts to interactive exercises and inspiring quotes — that are genuinely relevant, building bridges to understanding and solace.
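As a rough, hypothetical illustration of the concept (the resources, tags, and scoring below are invented), matching resources to the last few messages of a conversation could look like this:

```python
# A toy sketch of resource matching: score a small resource library against
# recent messages and surface the best fit. Not the patented system, which
# works over sequences of text; this only gestures at the idea.

RESOURCES = [
    {"title": "Guided podcast: winding down after a stressful day", "tags": {"stress", "work", "sleep"}},
    {"title": "Article: coping with grief in the first year",       "tags": {"grief", "loss", "family"}},
    {"title": "Exercise: 5-minute grounding for panic",             "tags": {"panic", "anxiety", "breathing"}},
]

def recommend(recent_messages: list[str], top_k: int = 1):
    text = " ".join(recent_messages).lower()
    scored = [(sum(tag in text for tag in r["tags"]), r) for r in RESOURCES]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r["title"] for score, r in scored[:top_k] if score > 0]

print(recommend([
    "Work has been nonstop stress lately",
    "I barely sleep and it's making everything worse",
]))  # -> ["Guided podcast: winding down after a stressful day"]
```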

The world is catching up to this technology and algorithm, and here we are with millions of these AI-recommended resources under our belt.

Real-Time Emotion Detection

Our real-time emotion detection is fine-tuned to recognize 114 distinct emotions, a feat made possible by countless hours of supervised learning from actual human interactions.
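For readers curious what the underlying technique looks like in miniature, here is a toy supervised text classifier over a handful of invented examples and just three labels. The production system covers 114 emotions and vastly more data; this is only a sketch of the approach:

```python
# Toy sketch of supervised emotion detection: a text classifier trained on
# labeled messages. The labels and training examples are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I can't stop worrying about tomorrow",
    "My chest gets tight every time I think about it",
    "I haven't wanted to see anyone in weeks",
    "Nobody ever checks in on me",
    "I finally feel like things might get better",
    "Today was actually a good day for once",
]
train_labels = ["anxiety", "anxiety", "loneliness", "loneliness", "optimism", "optimism"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

print(model.predict(["I'm dreading the meeting and can't stop worrying"]))  # likely 'anxiety'
```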

Crisis Detection

Where others falter, we excel. Supportiv’s AI is meticulously crafted to identify signs of crisis, distinguishing between passive and active suicidality with an acute sensitivity that’s rare. We know how to care when someone is in crisis.

Historically, mental health chatbots have tripped up, notorious for their redirection tactics or fumbling responses in critical situations. They’ve been criticized for failing to act when users report abuse.

More about our tech and data here: https://www.supportiv.com/data-tech

Supportiv’s Unique Approach to Measuring Outcomes

In the realm of digital mental health, Supportiv has charted its own course for over half a decade, distinct in our refusal to measure success solely based on changes in PHQ-9 depression assessment scores.

Why? Because the PHQ-9, while widely used, is not the be-all and end-all. As Helena, my co-founder, called attention to on LinkedIn: it was initially created with pharmaceutical industry interests in mind, not the interests of people who are struggling.

Here’s why we disagree with using this common outcome measurement:

  1. Lack of Holistic Insight: The PHQ-9 reveals nothing about what you are going through or what you are dealing with. Focusing strictly on symptoms, it overlooks the nuanced factors of an individual’s circumstances, ignoring the underlying causes and context of their emotional and physical responses.
  2. Inherent Flaws and Controversial Origins: The PHQ-9 has faced criticism over its development, viewed by some as a tool influenced by pharmaceutical interests to broaden the scope of diagnosable depression, potentially leading to overmedication.
  3. A Symbol, Not a Solution: Too often, entities use PHQ-9 assessments to signify the appearance of intervention, rather than engaging in meaningful, solution-driven support.

And the list could go on.

The limitations of traditional screening tools like the PHQ-9 are an open secret, with investigative pieces such as those by Olivia Goldhill at STAT News shedding light on these critical issues.

So, how does Supportiv gauge progress?

We turn to language and sentiment analysis. We let people speak in their own naturalistic language while seeking support, and we gauge their progress without putting the burden of assessment on them.

By examining shifts in how individuals articulate their struggles, we capture the subtleties of their journey. This is a science, and it isn’t a solo endeavor. We collaborate with third-party research partners like Stanford, the University of Washington, and Google to validate our findings.

Interested in the specifics? We use emotion and mood graphs to map the trajectory of healing and growth through emotions like optimism, loneliness, sadness, and more. We create a window into the human experience, offering clarity beyond conventional metrics.
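As a simplified, hypothetical sketch of that idea (tiny invented lexicons stand in for trained models), tracking how emotion scores shift across a session might look like this:

```python
# A minimal sketch of the "mood graph" idea: score each message in a session
# against small emotion lexicons and track how the scores shift over the
# conversation. The lexicons and messages are invented for illustration.

EMOTION_LEXICONS = {
    "sadness":    {"sad", "hopeless", "crying", "empty"},
    "loneliness": {"alone", "lonely", "nobody", "isolated"},
    "optimism":   {"better", "hope", "lighter", "try"},
}

def score_message(message: str) -> dict[str, int]:
    words = set(message.lower().split())
    return {emotion: len(words & lexicon) for emotion, lexicon in EMOTION_LEXICONS.items()}

session = [
    "I feel so alone and hopeless lately",
    "Nobody really gets what I'm dealing with",
    "Talking it through gives me a bit of hope",
    "I think I'm going to try that tomorrow, I feel lighter",
]

trajectory = [score_message(m) for m in session]
for turn, scores in enumerate(trajectory, start=1):
    print(turn, scores)
# Early turns score on sadness and loneliness; later turns shift toward
# optimism. That shift, aggregated across sessions, is what a mood graph maps.
```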

Reach out to us for a deeper dive into how words weave the narrative of progress. Read more about our tech and data.

–Pouria
