Artificial Intelligence in Christian Thought and Practice

This series by Christian computer scientists introduces questions for Christians about AI and its role in society

by J. Nathan Matias (Princeton, MIT), Lydia Manikonda (Arizona State University), Scott Hale (Oxford), Kenneth Arnold (Harvard)

In 1951, Marvin Minsky asked an imaginary mouse to navigate an imaginary maze. Together with Dean Edmonds, Minsky carefully wired three hundred vacuum tubes to an assembly of motors and light bulbs, applying ideas about the wiring of neurons in human and animal brains.

Minsky and Edmonds watched the virtual mouse’s progress on a bank of lights and offered rewards when it moved toward its goal. Through repeated tries, the mouse learned to escape the maze. When researchers coined the term “artificial intelligence” (AI) five years later, they hoped to prove in one summer that every feature of learning and intelligence could be simulated by a machine. Over sixty years later, their question remains unanswered, even as AI has become a common presence in our everyday lives and our visions of the future.

Artificial intelligence is becoming a basic fact of our lives, an invention like money or democracy that poses complex, enduring questions for practical life, the common good, and our basic ideas of personhood. Just as Christians seek wisdom and offer leadership on other basic issues, we also need ways to understand AI. As Christian computer scientists who study the social impacts of technology, we do not pretend to have answers. We offer this overview of issues raised by AI in the hope that Christians will be inspired to find new ways to follow Christ faithfully and serve the common good with artificial intelligence.

Artificial intelligence is becoming a basic fact of our lives, an invention that poses complex, enduring questions for practical life, personhood, and the common good

Over the next month, we plan to publish a series of eight short posts to help Christians think about AI. In this post, we introduce basic terms, talk about near-term and long-term futures, and suggest parts of the global Church that are well-positioned to lead thinking on AI. We also offer guidance on thinking critically through news stories about AI.

In the weeks that follow, we’ll post further summaries of areas where AI is already transforming our world and posing important questions for Christian thought and practice. We have designed these posts to be printed for use in church discussions and home groups by people without special technical knowledge. Each post includes a bibliography with links to further resources.

To stay notified of new posts or upcoming events in the UK, you can follow this publication on Medium or sign up for our email list, generously hosted by the Oxford Pastorate.

Understanding Artificial Intelligence

The development of AI sixty years ago was a powerful application of the creative, curious impulse that humans have been given by our creator. While the invention is recent, the idea has a long history. In Homer’s Iliad (XVIII), the smith-god Hephaestus crafts intelligent, person-like assistants out of gold and intelligent tables that move where they are needed. To this day, our understanding of, and goals for, artificial intelligence remain shaped by the stories we tell as much as by the scientific possibilities. To think wisely about AI, we need to anchor our vision of the present and our speculations about the future in those scientific possibilities.

We often encounter artificial intelligence in one of two ways: as a person or as infrastructure. This division appeared in the Iliad, with the intelligent tables providing infrastructure and the intelligent assistants relating to Hephaestus as persons.

To think clearly about AI, we need to think clearly about what we mean by intelligence

Today, we encounter artificial persons when we ask for directions or make purchases on our phones. Artificial persons are also used to identify and arrest people who seek out child sexual exploitation. Yet AI is mostly used for infrastructure: cataloguing search results, deciding which friends’ photos to show on social media, predicting suicides before they happen, translating between languages, and displaying advertisements. Many uses of AI as infrastructure are so common that we no longer see them as AI at all. As Marvin Minsky wrote in 1965, “If one thoroughly understands a machine or a program, he finds no urge to attribute ‘volition’ to it.”

To think clearly about AI, we need to think clearly about what we mean by intelligence. Across the history of artificial intelligence, computer scientists and psychologists have attempted to create four kinds of systems: machines that (a) think like humans, (b) act like humans, (c) think rationally, and (d) act rationally. Yet even these broad ideas of machine intelligence can leave something missing, as the Christian computer scientist Rosalind Picard realised in the 1990s. Noticing that the men who created the field of AI focused primarily on logic and rationality, Picard developed affective computing, a branch of artificial intelligence focused on detecting, interpreting, and responding to human emotions.

By reusing everyday language as technical terms, some people make AI systems sound more capable

Optimism and Fear in Media Attention Toward Artificial Intelligence

Creators of AI systems often have grand aspirations, and some make grand claims. When an AI outperforms humans at a task like driving or playing a board game, many assume it will achieve similar wonders on other tasks. When you hear news about a new AI advancement, ask whether the system is trained to do one task very well or something more complex. Most AI systems are designed to do one thing, such as drive a car, diagnose an illness, make predictions based on the past, or recognise faces. General AI remains a long-term area of research.

By reusing everyday language as technical terms, some researchers make AI systems sound more capable than they are. For example, when someone says that their AI can do “planning” or “reasoning,” their definition can be much narrower than common meanings. As a result, media coverage of AI tends to occur in cycles that swing between dramatic optimism and dramatic fear, and sometimes mix both at once.

Relating to Machines that Learn

Artificial intelligence systems are usually different from other computer software in their ability to learn. A machine learning system observes information about the past and makes judgments or predictions when it is shown new information. For example, when an online marketplace suggests products that you might like to purchase, the system may have learned from your past behavior together with other people’s shopping patterns.
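As a small illustration of this idea, here is a toy recommender in Python. The shoppers and purchase histories are entirely invented, and real marketplaces use far more sophisticated methods; this sketch only shows the basic pattern of learning from past behavior:

```python
# A toy recommender: suggest products bought by the most similar shopper.
# All names and purchase histories here are invented for illustration.

purchases = {
    "ana":   {"bread", "milk", "honey"},
    "ben":   {"bread", "milk", "tea"},
    "chloe": {"rice", "beans", "oil"},
}

def recommend(user):
    """Suggest items bought by the shopper whose basket most overlaps ours."""
    mine = purchases[user]
    # Find the other shopper with the largest overlap in past purchases.
    neighbour = max(
        (u for u in purchases if u != user),
        key=lambda u: len(purchases[u] & mine),
    )
    # Suggest what they bought that we have not.
    return purchases[neighbour] - mine

print(recommend("ana"))  # ben's basket overlaps most, so: {'tea'}
```

The system never understands bread or tea; it only notices that people with similar pasts tend to behave similarly in the future.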

Just as Christians are called to weigh the things we think and say, we must also be aware of our influence on AI systems

Because AI systems learn from what they see in the world, they are likely to continue those patterns rather than adapt to changing conditions or values. For example, machine learning software that advises court sentencing in the US has been shown to reproduce unfair patterns of racial discrimination, because it was trained on data produced by biased humans. Likewise, AIs that specialize in language tend to promote stereotypes that demean women, since they have learned from a long history of texts in many languages that devalue women’s lives.
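A minimal sketch can show how this happens. The records and group labels below are invented; the point is that a predictor trained on skewed history reproduces the skew, even though the algorithm itself contains no prejudice:

```python
from collections import Counter

# Invented historical records: past decisions, each tagged with a group
# label and an outcome. The skew lives in the data, not the algorithm.
history = [
    ("group_a", "denied"), ("group_a", "denied"), ("group_a", "approved"),
    ("group_b", "approved"), ("group_b", "approved"), ("group_b", "denied"),
]

def predict(group):
    """Predict the most common past outcome for this group."""
    outcomes = Counter(o for g, o in history if g == group)
    return outcomes.most_common(1)[0][0]

# The model has learned nothing except the historical pattern:
print(predict("group_a"))  # "denied"
print(predict("group_b"))  # "approved"
```

Real sentencing software is far more complex, but the underlying dynamic is the same: unfair data teaches unfair predictions.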

Just as Christians are called to weigh the things we think and say (Phil 4:8–9), we must also be aware of our influence on AI systems, machines that echo and reinforce our personal and societal behaviors. Just as parenting includes moral responsibilities for the behavior of our children, we may need to take similar responsibilities toward AIs. Teaching AI systems can sometimes be a moral struggle between societal groups; in one recent case, malicious people coordinated their actions to train a popular AI system to spread racist and sexist insults.

Machine learning often relies on detailed data from people’s lives. As Christians think about the morality of AI, we need to reflect on the surveillance that allows machines to learn.

Finally, systems that learn are built with the mathematics of probability, which predict what is likely to happen based on what has happened in the past. For Christians to develop leadership in AI, we need ways to talk about the morality and ethics of these statistical patterns, even as we remind our world of the possibility of miracles and spiritual transformation.
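To make the statistical idea concrete, here is a miniature probability model. The weather record is invented; the sketch shows how "what is likely" is simply counted from "what has happened":

```python
from collections import Counter

# Estimating tomorrow from yesterday: a probability model in miniature.
# This weather record is invented for illustration.
days = ["sun", "sun", "rain", "sun", "rain", "rain", "sun", "sun"]

# Count how often each kind of day has followed each other kind.
transitions = Counter(zip(days, days[1:]))

def prob_next(today, tomorrow):
    """Estimate P(tomorrow | today) purely from past frequencies."""
    total = sum(n for (a, _), n in transitions.items() if a == today)
    return transitions[(today, tomorrow)] / total

print(prob_next("sun", "rain"))  # 0.5: half of past sunny days were followed by rain
```

Such a model, by construction, can only ever assign likelihoods drawn from the past; it has no category for the genuinely unprecedented.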

Christian Leadership in Artificial Intelligence

How might Christians shape the future of artificial intelligence in the 21st century? All technology is shaped by people’s behavior. This relationship is amplified in AI, which is designed to learn from humans. Because Christians of all kinds already encounter AI daily, we all have a responsibility to think through its role in our lives.

Marginalized Christians in all societies will have the greatest first-hand experience with AIs that exercise strong surveillance and control

As Christianity shifts to the global south in the 21st century, Christians there will be well positioned to develop important insights on artificial intelligence. Within the global south, the most privacy-invasive AI systems will be deployed most widely in middle-income societies with powerful governments and weak civil society. AI systems in those countries might make profound contributions to human flourishing. Yet marginalized Christians in all societies will also have the greatest first-hand experience with AIs that exercise strong surveillance and control of financial systems, communications networks, and the justice system.

Christians working in computer science and technology in the US, UK, China, and India are building the future of AI. Many have never studied ethics or the social sciences and can be surprised at the moral and societal outcomes of their inventions. For many Christian computer scientists, the church likely provides their primary education on ethics and morality, although Christian conversations in the computing industry are rare in most places.

The future of AI will also be decided in local and national governments around the world by civil servants, activists, journalists, and policymakers who develop the ways we govern AI. Ministries that serve these groups are well-positioned to support them to make sense of AI.

About the Authors

Dr. J. Nathan Matias, postdoctoral research associate at Princeton University, does public-interest research for a fairer, safer, more understanding Internet. Nathan recently completed a PhD dissertation on governing human and machine behavior at the MIT Media Lab, where he was also a fellow at Harvard’s Berkman Klein Center for Internet and Society. Nathan, a Guatemalan-American, organises Christian computer scientists to support each other’s faith and advises NGOs on technology, data, and social change.

Lydia Manikonda is a doctoral student at Arizona State University (ASU), Tempe, Arizona. She builds technologies that improve interactions between humans and automated systems. Her research focuses on modeling patterns of people’s behavior on social media platforms, including cooperation, mental health, and emotions. Lydia serves ASU students as president of the Graduate Christian Fellowship and Vice President of the International Students Club.

Dr. Scott A. Hale is a Senior Data Scientist at the Oxford Internet Institute of the University of Oxford, UK, and a Faculty Fellow of the Alan Turing Institute, London. He develops and applies techniques from computer science to research questions in both computer science and the social sciences. He focuses on the use of machine learning, natural language processing, and network analysis for collective action, multilingual understanding, and data visualization.

Kenneth Arnold is a doctoral student in Computer Science at Harvard University. He develops and studies systems that use artificial intelligence to empower human productivity and creativity. Ken co-organized the Boston Indigitous missions hackathon and gathers Boston Christians in technology and ministry to explore ways to build God’s kingdom together.


Photo of Marc Chagall’s America Windows CC-By-NC 2.0 by Christopher Rose.

Understanding Artificial Intelligence

Lattimore, R. A. (Ed.). (1951). The Iliad of Homer. University of Chicago Press.

Minsky, M. (1965). Matter, Mind and Models. Artificial Intelligence Memo no. 77.

Picard, R. W. (1997). Affective Computing. Cambridge, MA: MIT Press.

Russell, S., & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall.

Relating to Machines That Learn

Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Quantifying and reducing stereotypes in word embeddings. ArXiv Preprint ArXiv:1606.06121.

Horton, H. (2016, March 24). Microsoft deletes “teen girl” AI after it became a Hitler-loving sex robot within 24 hours. Telegraph.

Hutson, M. (2017, April 13). Even artificial intelligence can acquire biases against race and gender. Science Magazine.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks. ProPublica.

Optimism and Fear in Media Attention Toward Artificial Intelligence

Agre, P. (1997). Toward a critical technical practice: Lessons learned in trying to reform AI. In Social Science, Technical Systems and Cooperative Work: Beyond the Great Divide. Erlbaum.

Cohen, S. (1972). Folk Devils and Moral Panics: The Creation of the Mods and Rockers. London: MacGibbon and Kee.

Watson, S. (2016). Toward a Constructive Technology Criticism. New York, NY: Tow Center for Journalism, Columbia University.

Christian Leadership in Artificial Intelligence

Pew Research Center. (2017). The Changing Global Religious Landscape. Washington, DC: Pew Research Center for Religion & Public Life.