AI in Counseling & Spiritual Care

Predictive models are transforming mental health. They might also monitor and shape our moral lives.

by J. Nathan Matias, Lydia Manikonda, Scott Hale, and Kenneth Arnold

This post is the fifth in a series of short introductions to artificial intelligence designed for group discussion in non-technical Christian settings. To follow the series, sign up for our email list, hosted by the Oxford Pastorate.

He heals the brokenhearted and binds up their wounds. He determines the number of the stars; he gives to all of them their names. Great is our Lord, and abundant in power; his understanding is beyond measure. Psalm 147:3–4

In Psalm 147, the work of God to heal the brokenhearted is placed next to His mathematical and astronomical wonders. Experimental technologies are already applying those mathematical wonders to God’s work of healing in the world. AI is transforming care for people’s mental and spiritual well-being through (a) large-scale data collection about people’s intimate thoughts and actions, (b) technologies designed to interpret people’s emotional and spiritual conditions, and (c) systems that coordinate counseling and emergency response.

In the 1950s, the Anglican priest Chad Varah applied telephone technology to mental health to create The Samaritans, the world’s first crisis hotline. AI systems are now taking on parts of this work.

On social media and in health care systems, machine learning models are being developed to detect mental health risks, including suicide and depression, and to coordinate care. In March 2017, the social network Facebook introduced an AI system that monitors its users to assess each person’s risk of suicide from what they write, what they say, and how their friends respond. When the AI ranks the risk high enough, it alerts the person’s friends and encourages the person to get help or talk to friends. We do not yet have evidence that these systems save lives. AI is also being tested by hospitals in the United States to predict a person’s risk of suicide as far as a year in advance, based on the person’s prior health record. The predictions are most accurate 7 days before a person dies of suicide.

Text-based crisis hotlines have prototyped systems that infer a person’s needs and route them to the most relevant counseling service. AI is also transforming care itself, reducing the costs of mental health counseling by intelligently assembling personalized therapies such as Cognitive Behavioral Therapy from the activity of online volunteers and automated systems.

In the next few years, we can expect predictive models to become a basic part of mental health services. In the longer term, machine learning systems may become trusted to coordinate and deliver ongoing care, expanding the reach of individual counselors and broadening the role for moderately trained peer supporters. As new projects attempt to measure spiritual well-being by tracking people’s mobile phones and communications, we may see parallel efforts to monitor and intervene in a wide range of spiritual and moral issues.

  • What theologies of privacy and accountability could guide an era where our intimate spiritual, emotional, and prayer lives are observed and judged by automated systems?
  • What might Christian teaching on personal transformation offer a society that increasingly relates to each other based on predictive modeling?
  • Just as Christians have pioneered crisis telephone lines, how can Christian ministries use AI systems to heal the brokenhearted?


Kelion, L., Lee, D., & Cellan-Jones, R. (2017, March 1). Facebook artificial intelligence spots suicidal users. BBC News.

Cep, C. N. (2014, August 5). Big Data for the Spirit. The New Yorker.

De Choudhury, M., Counts, S., Horvitz, E. J., & Hoff, A. (2014). Characterizing and Predicting Postpartum Depression from Shared Facebook Data. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 626–638). New York, NY, USA: ACM.

Dinakar, K., Chen, J., Lieberman, H., Picard, R., & Filbin, R. (2015). Mixed-initiative real-time topic modeling & visualization for crisis counseling. In Proceedings of the 20th International Conference on Intelligent User Interfaces (pp. 417–426). ACM.

Molteni, M. (2017, March 17). Facebook’s AI Is Learning to Predict and Prevent Suicide. WIRED.

Morris, R. R., Schueller, S. M., & Picard, R. W. (2015). Efficacy of a Web-Based, Crowdsourced Peer-To-Peer Cognitive Reappraisal Platform for Depression: Randomized Controlled Trial. Journal of Medical Internet Research, 17(3), e72.

Picard, R. W. (1997). Affective Computing. Cambridge, MA: MIT Press.

Walsh, C. G., Ribeiro, J. D., & Franklin, J. C. (2017). Predicting Risk of Suicide Attempts Over Time Through Machine Learning. Clinical Psychological Science, 5(3), 457–469.




Artificial Intelligence in Christian Thought & Practice

J. Nathan Matias

Citizen social science to improve digital life & hold tech accountable. Assistant Prof, Cornell. Prev: Princeton, MIT. Guatemalan-American
