How AI is Shaping Ideas of Sin, Justice, Freedom, and Forgiveness

What can Christians offer a world that expects machines to predict our moral and legal futures?

This post is the seventh in a series of short introductions to artificial intelligence designed for group discussion in non-technical Christian settings. To follow the series, sign up for our email list, hosted by the Oxford Pastorate.

Blessed is the one whose transgression is forgiven, whose sin is covered. Blessed is the man against whom the Lord counts no iniquity, and in whose spirit there is no deceit. Psalm 32:1–2

In Psalm 32, David seeks forgiveness from an all-seeing, understanding God who offers spiritual transformation. In our time, artificial intelligence systems observe human behavior, detect alleged wrongdoing, make predictions about people’s futures, and enforce laws. In the future, AI will likely become a common tool in pre-emptive and responsive legal systems.

The charity Terre des Hommes uses artificial persons, generated from the faces of children in the Philippines, to attract and entrap people it believes will become child predators.

In 2013, users of video chat apps might have been contacted by a 10-year-old girl from the Philippines. Some of those who accepted her offer to perform sexual acts on camera may not have known that the girl was a computer-generated simulation designed to “catch” alleged predators. The Dutch NGO that created this artificial person argues that sexual attraction to children is persistent and bound to cause harm; by detecting so-called predators with an artificial person and alerting police, it hoped to identify and prosecute people before they harmed anyone. The artificial child’s creators also held two moral views in tension: that their own actions were justified because the child was not fully human, and that their targets’ actions were prosecutable because the targets may not have known the child was artificial.

Western countries are already adopting so-called “perfect enforcement,” in which surveillance feeds predictive policing and judicial systems become semi-automated. Most alleged copyright violations and hate speech are currently handled by corporations, whose AI systems detect and act on possible violations at scales beyond human procedures of due process. In the UK and US, AI systems direct police patrols and prompt investigations of people predicted to commit future domestic violence. In the US, where most cases end in plea bargains, the roughly 5% of defendants who do see a judge or jury may find the process guided by predictive sentencing. Because AIs learn from historical data, they risk reproducing and scaling the unjust discrimination already present in human judicial systems. Perfect enforcement also poses risks to religious freedom, especially as companies continue to re-sell surveillance and law enforcement systems to repressive governments.

Christians already shape the uses of AI in justice systems. Early prototypes of surveillance and predictive policing focused on child protection, human trafficking, and violence against women, areas of substantial Christian influence. In these areas, many people draw a sharp distinction between victims and perpetrators, categories they hope AI systems can detect. Given this influence, we need theology and practice for a world where human justice is delegated to AI.

  • How should we understand acts of sin or harm against artificial persons?
  • What can Christians offer a world that expects machines to predict our moral futures?
  • As AIs increase the number of people observed and processed in government and for-profit legal systems, how might Christian thought on sin and justice shape them?
  • How can Christians circumvent or resist algorithmic constraints on religious freedom?


Brayne, S., Rosenblat, A., & Boyd, D. (2015). Predictive policing: Data and civil rights. Retrieved April 8, 2016, from http://www.datacivilrights.org/pubs/2015-1027/Predictive_Policing.pdf

Bickert, M. (2017). Hard questions: How we counter terrorism. Retrieved July 3, 2017.

Devers, L. (2011). Plea and charge bargaining.

Deibert, R. (2016). What to do about “dual use” digital technologies? Ottawa, CA: Canadian Senate Standing Committee on Human Rights.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica.

Matias, J. N. (2015, June 23). Imagining a sustainable and inclusive approach to child safety online. Retrieved June 26, 2017.

Musto, J. L., & Boyd, D. (2014). The trafficking-technology nexus. Social Politics: International Studies in Gender, State & Society, 21(3), 461–483.

Newbold, J. (2015). “Predictive policing,” “preventative policing,” or “intelligence-led policing”: What is the future? Coventry, UK: Warwick Business School.

Penney, J. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, 6(2).

Thakor, M. N. (2016). Algorithmic detectives against child trafficking: Data, entrapment, and the new global policing network (Thesis). Massachusetts Institute of Technology.

Thornton, S. (2011). Predicting serious domestic assaults and murder in the Thames Valley. Cambridge, UK: Wolfson College.

Turing, A. M. (1948). Intelligent machinery, a heretical theory. (unpublished)

Zittrain, J. (2007). Perfect enforcement on tomorrow’s internet. Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes, 125–156.