Moving Towards Tech and Mental Health

Culture CoLab
Published in CoLab Thinks
6 min read · Sep 11, 2018

by Aakriti

Being the stubborn and opinionated person that I am, I’ve held some very black-and-white opinions about science and technology. Thanks to the classic divide between science, commerce, and arts in 10th grade, I did not know that the three were not mutually exclusive and intersected at many points. The divide was deep: there were students who were ‘smart’ enough for the sciences, and then the ‘leftovers’ who took the arts. For me, the science people were those exclusively interested in ‘objectivity’ and the non-human aspects of the physical world, aimed at generalizing, whereas we, the psychology ones, were rooted in humanity and the special context of each person.


Part of the reason why I held such beliefs for so long was the inaccessibility of science. There were physics, chemistry, and biology labs that I was no longer allowed to enter, and the introduction of a vocabulary that I could not understand. Why were concepts never made reachable for me? How incomprehensible must science have been to me that when I finally said goodbye to it in 2009, it was forever, and it was with joy.

Eight years later, I entered the streets of Daryaganj and interacted with an enormous number of people dealing with mental health issues. I found two problems: the lack of systems that strengthened well-being, and the few treatment options available for mental ‘illness’. My context-driven self, focused on specialized therapy, realized that the problem was too large to be dealt with alone. Psychiatry and counselling were expensive, time-consuming, and often clouded with shame and vulnerability because of the in-person interaction.


That year I also met Woebot, a chatbot on Facebook that helps people with mild mental health issues. This was a time when I too had started battling anxiety, and my pocket could not stretch to a therapist. The bot was funny, would check on me every single day, and send me links to resources that would be helpful for my condition. Woebot and I nevertheless broke it off in a month or two because of the limited nature of our conversations. These days I use the Daylio app to track my moods, and the Habit app to strengthen certain behaviours.

A candid picture taken of Woebot in its clinic

Writing this blog today, I am struck by the reality that in my three years of undergraduate study in psychology and two years of postgraduate study in mental health, never once did we have a conversation about tech: neither technology as a barrier to wellness nor as a potential form of treatment for illness.

Then I joined CoLab and it became my science translator. Everything that had felt mysterious so far was made simple. Boss says what Einstein said:

“If you can’t explain something simply, you don’t understand it.”

I learned about artificial intelligence (AI): machine learning and deep learning. For once, I embraced the possibilities of what AI could do rather than drown myself in the pop-culture-driven fear of an apocalypse. Naturally, my curiosity led me to want to understand the role of AI in mental health, and what a new world it was!

AI is an incredible tool for identifying many forms of neurological and mental illness. Anyone in the field knows how often such issues overlap, and AI is helping with exactly that: a transdiagnostic approach that lowers the number of steps a user has to take before getting the right form of treatment.

Many organizations are working on creating more human-like language-processing bots that can work as therapists free of cost or at a subsidized rate. This is not a delusional choice; most creators are aware that a bot can never really fill the shoes of a therapist. But imagine the possibility of being able to seek therapy anytime, anywhere, and for free. Think of people in remote areas, and think of human therapists being able to collect crucial data between sessions so they can modify treatment plans.

And think about well-being. Daylio helps me track my moods and triggers, the ‘My Calendar’ app has helped me see my moods in light of my menstrual cycle, and the ‘Habit’ app encourages me to do Yoga and practice Urdu every day. While these can never substitute human interaction and the warmth of a hug, they still help me become a better version of me by raising my self-awareness through data.

Source: https://www.slideshare.net/pieroleo/ibm-research-5-in-5-report-for-2017

Why do I start this conversation with you today? Because tech will change your life, whether you like it or not. Awareness will help. IBM will soon use your voice or words to predict mental health concerns, and my first response to that was: what if the prediction itself becomes a trigger? While the good thing here is that it aims at prevention, the points of concern are no fewer. Who has access to my data? Will IBM keep it protected, or share it with my insurance company or a potential employer? Will this privacy be free, or come at a cost? And if there is a cost, what about those who can’t afford it?

You may have also noted that all the links I have shared today are from outside India. How accurate will a diagnosis be, and how helpful will a chatbot be, if it was developed in the US for the population there? How will AI catch up with Indian culture when psychology itself continues to struggle with the cultural issues surrounding theories developed in the West? What happens when chatbots are not available in vernacular languages and implicitly take an individualistic approach to solving problems in a country with a collectivistic culture?

Additionally, AI replicates human biases. AI relies on data, and data is us, with all our biases and stereotypes: sexism, ableism, classism, casteism. Data is the people who have the loudest voices. Data is what the majority, the influencers, the powerful believe in.

Are these issues separate from my background as a psychology, liberal arts, and social work student? Can I disconnect from them when trolls continue to bully people on social media? And in a technologically evolving world, how long can I pretend to be in my own personal disconnected haven?

Next in the series, we will unravel various aspects of using Artificial Intelligence for the diagnosis of mental illness.

For more, read and watch:

This AI System Can Diagnose Depression From Instagram Photos

Diagnosing and Treating Depression with AI and Machine Learning

Dark Net — TV Series

Meet Tess: the mental health chatbot that thinks like a therapist

How AI is revolutionizing mental health care

Artificial intelligence will replicate the human biases we don’t acknowledge having

AI replicating same conceptions of gender roles that are being removed in real world

Telling AI not to replicate itself is like telling teenagers just not to have sex

AI picks up gender and racial biases when learning from what humans write



We are a hard-working organization bringing together technology and the arts, business, and social impact.