AI Will Transform Mental Health, but Therapists Don’t Know About It Yet

Anatoliy Kats
3 min read · Sep 24, 2019


Every time I go to one of these workshops (this one was co-produced by the MIT-IBM Watson AI Lab and the UCSD Center for Healthy Aging), I notice a yawning gap between the technology being developed to keep us all sane and what I learn in my clinical mental health program.

Therapy (and therapists) are coming into your home

Fitbits and Oura sleep-tracking rings are just the beginning. John Torous' lab at Harvard/BIDMC is developing an app that can connect to those devices, conduct assessments, and deliver behavioral nudges at just the right time. The system produces fine-grained data that researchers can use to zero in on effective treatments. Clinicians would get all that data before seeing a client, which would accelerate client progress and alter the nature of the therapeutic relationship. It will all be deployed before I graduate, yet Lesley's courses on data literacy are, shall we say, sparse, and other programs are no better. This tech will also allow bachelor-level clinicians to take on some cases currently considered master-level work. I haven't heard that discussion yet, but it'll be heated when it comes.
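To make the timing logic concrete, here is a minimal sketch of what a rule that decides when to fire a nudge might look like. Everything in it is my own illustrative invention, not the actual design of Torous' app: the DailyReading fields, the three-day window, and the thresholds are all placeholders, not clinically validated cutoffs.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical wearable reading; a real app would pull these values
# from a device API (step counter, sleep tracker), not hard-code them.
@dataclass
class DailyReading:
    date: datetime
    hours_slept: float
    steps: int

def should_nudge(history: list[DailyReading]) -> bool:
    """Fire a behavioral nudge when both sleep and activity trend
    below illustrative (made-up) thresholds over the last 3 days."""
    recent = history[-3:]
    if len(recent) < 3:
        return False  # not enough data to judge a trend
    avg_sleep = sum(r.hours_slept for r in recent) / len(recent)
    avg_steps = sum(r.steps for r in recent) / len(recent)
    return avg_sleep < 6.0 and avg_steps < 3000

# Example: three sluggish days in a row trigger a nudge.
history = [
    DailyReading(datetime(2019, 9, 20), 5.5, 2400),
    DailyReading(datetime(2019, 9, 21), 5.0, 2100),
    DailyReading(datetime(2019, 9, 22), 5.8, 2800),
]
if should_nudge(history):
    print("Nudge: suggest a short walk and an earlier bedtime tonight.")
```

The point is less the rule itself than its byproduct: every reading and every fired nudge becomes a timestamped record, which is exactly the fine-grained data researchers can mine for effective treatments.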

Humans are losing the monopoly on empathy

People have been naming their Roombas for a long time, but the Personal Robotics Group at MIT is taking it to a new level. They build robots that form relationships with children to help them learn language and emotional intelligence. Other robots keep older adults company and remind them to take their medication. Research shows that these robots don't have to be very smart to be effective, because humans are all too happy to project feelings onto them regardless.

Other kinds of digital therapeutics have tended to target cognitive therapies because they are less personal. These robots are claiming a place in analytic and humanistic therapies. By my reckoning, they're coming into people's homes, schools, and mental health agencies within five years. They raise all kinds of ethical questions. When (not if) a person gets attached to a robot, can someone else permanently deactivate it? What about changing its software, and possibly its personality? For example, if a clinician prescribes a robot to a client, how much control do they have over the robot's behavior? In that scenario, who exactly does the client have a relationship with, and what kind of relationship is it?

This is not science fiction; this is now. The lab has deployed a fully remote-controlled robot that talks to child victims and perpetrators of bullying. Children are more comfortable with it than they are with a human, and anti-bullying interventions are more effective. Over time, these kinds of robots will get more and more autonomy, and we need to start having a discussion about the clinical implications now.

What about social justice?

The conference included an ethics panel. Had I not been there, they would have discussed data privacy the entire time. To be sure, of all the ethical questions in our field, the implications for data privacy are the most novel, but they are not the only ones. I asked a different question, citing a case from my Culture, Power and Oppression textbook:

Suppose a black child enters a predominantly white school where he is teased because of his race. He gets sent to a counselor, who slaps an oppositional defiant disorder diagnosis on him and tries to "cure" it without considering that the behavior may be justified. What are we doing to avoid a dystopian universe where, say, the pill-reminding robot is used as a powerful instrument of oppression of the unfairly diagnosed?

I got back general answers about the thin line between use and misuse of technology and the need to study systemic issues in greater detail. The mental health tech community appears disconnected from the reality of clinician education today. Social justice is all we talk about in my program, but all of that discussion is reactive. We talk about modifying existing methods to remedy decades- and centuries-old problems. The technology wave is coming, and it will turn the profession on its head. If we act now, we have a chance to be proactive, anticipating problems before they occur.


I am an entrepreneurial software engineer who is training to be a psychotherapist at Lesley University in Massachusetts.