John Wyatt on Personhood and AI

How does personhood relate to artificial intelligence?

Subtle Engine
14 min read · Dec 26, 2017


Maija Tammi’s photograph of Erica, One of Them Is a Human #1

Personhood was tested in an unlikely arena in 2017, when the National Portrait Gallery shortlisted Maija Tammi’s photograph of the android Erica for the Taylor Wessing Photographic Portrait Prize, which calls for portraits “with a living sitter”.

John Wyatt, Emeritus Professor of Neonatal Paediatrics at UCL, is co-leading a Faraday Institute project on the implications of artificial intelligence and robotics for personhood and kindly agreed to be interviewed in December 2017.

This interview touches on Christian understandings of personhood and the implications of intelligent machines, as well as how the church might respond to AI and robotics, internally and as part of wider society.

Introduction

Q Could you introduce yourself and your interest in personhood and artificial intelligence?

My background is as a clinician — a paediatrician — I focused on the care of newborn babies. I went into the area because I love children, and it was a fascinating area: rapidly advancing, with a lot of technology used to help very sick babies and very premature babies to survive.

As I worked in the area, I realised I was in the midst of an ethical maelstrom, with fundamental questions raised about the appropriate use of technology. For example, whether a premature infant was a fully human person or some kind of potential being; or issues to do with late abortions at the same time as we were rescuing very premature babies.

So I became increasingly interested in ethical dilemmas in medicine, particularly raised by advancing technology, which led to a focus in medical ethics. I’ve now retired from my work as a paediatrician and am focusing on medical ethics and the ethics of issues raised by advances in technology.

A fundamental question, which is raised time and time again, is: what does it mean to be human? As technology advances it tests traditional philosophical and theological ideas — such as the idea of the person — to destruction. It asks: are you still a person if you’re born at 23 weeks’ gestation, or if you’re in a persistent vegetative state, or if you’re a hybrid organism containing animal and human cells?

Enhancement technology, artificial intelligence and robotics are becoming the next challenge to a general understanding of what it means to be human. So I’m currently leading a research project at the Faraday Institute, funded by the Templeton Foundation, into the human implications of AI and robotics.

Q I’d like to explore some of the issues you mentioned in a few moments, but first: existing theological work on technology seems quite scarce — have you found the work of any theologians helpful when thinking about technology from a religious or Christian perspective?

I agree it is a surprisingly under-developed area. I’ve found Jacques Ellul’s work very fruitful. He was writing in the 50s and 60s and was very influenced by Marx and political ideas of the time, but I sense he is profoundly relevant to modern debates. I hope there will be increasing interest in his work.

Another person I found very helpful is the Canadian philosopher George Grant, a Christian who wrote in the 70s and 80s about the early years of computer technology and the fundamental direction in which it was going. He emphasises that technology is about mastery of nature, and of human nature.

But I’m struck by the lack of thinking in this area, compared with the amount of work that’s been done on the relationship between Christianity and science.

Q Your experience as a medic must put you in an interesting position to think about ethics in technology in general, which is such an actively discussed area at the moment. Do you follow that debate at all?

I think I do have a particular perspective. What you realise in the world of medicine is that so many human beings are not the ‘ideal human subject’ that scientists and technologists often think about.

The normally functioning young male is so often the standard technologists have in mind when they think about being human, whereas a medical background makes you realise that human beings come in so many different forms, and so many human beings are actually impaired. Being impaired or disabled is not an unusual phenomenon. We’re all of us impaired in different ways and we’re all vulnerable — human beings are profoundly vulnerable.

Particularly as a paediatrician, you tend to see yourself as an advocate for the vulnerable, the powerless, those without a voice. So when I see very powerful technology being developed, and see technologists at the forefront, my immediate thought is: how is this relevant to the weak, the vulnerable and the disempowered, and what will it mean for their lives?

About Personhood

Q Is understanding personhood a more well-trodden area for theologians, the idea that people are made in the image of God, rational, and so on?

Yes, there is a very rich tradition. The concept of a person is really a theological concept, which comes out of the Church Fathers’ work in formulating doctrine. The Cappadocian Fathers developed our understanding of the Trinity: the persons of the Trinity were defined or constituted by their relations, by being in union and communion — and human persons were understood as a reflection of the divine persons.

Since then there’s been continual discussion about the nature of personhood and what it means to be a person. I’ve found this ancient understanding of personhood extremely fruitful in my own thinking about medical ethics: recognising personhood even in an extremely pre-term baby, or in a person with profound brain injury, or with dementia.

To put it rather simplistically, if the Cartesian definition is “I think therefore I am”, a Trinitarian version of that is “I am loved, therefore I am”. It is in relation with others that my own being is found.

So there is a rich theological field here. As far as I’m aware, it has not yet been applied to modern forms of technology. How that deep relational understanding of personhood interfaces with issues raised by advancing AI and robotics — that work is yet to begin.

Q That brings us on to your Faraday Institute project. Can you talk more about the aim of the project, the approach you’re taking, and the outcome you would like to see?

There has been so little serious theological reflection on advances in AI and robotics, so I see the Faraday project as a scoping exercise. The research question is: what does it mean to be human in an age of intelligent machines? What are the social, psychological, philosophical and religious implications of intelligent machines for human identity?

My particular interest is the theological implications. We’ve had two workshops of theologians to identify some of the underlying theological and philosophical issues. I hope this will result in a multi-author book on a range of different theological issues raised by advances in AI and robotics, as well as hopefully some academic papers — but this work is still at a preliminary stage.

The more I’ve gone into it, the more complex and diverse the questions are. I sense that AI is rapidly becoming so completely ubiquitous it will almost be like electricity. Where do you start when you talk about the ethical implications of electricity? Electricity touches almost every aspect of human life — and the same is going to be true of artificial intelligence.

Q I like the simple way of understanding our relationship with technology which says that first we shape our tools, then they shape us. Presumably that’s always been true from our first tools — what’s novel about today’s technologies?

Yes, it’s important not to over-emphasise the novelty of the current situation. This is the outworking of a very long historical process of tool-making and so on. But I think AI does raise some extremely interesting and challenging questions. One way of looking at it is as a two-way psychological movement between the machine and the human.

First, there is a movement from the machine to the human: we understand ourselves increasingly as machines. The machine gives us an insight into what it means to be human. The more sophisticated the machine, the more powerful that psychological movement becomes.

We live in an age dominated by information-processing machines, and that way of thinking has become very powerful. So the cell has something like a hard drive which carries gigabytes of information, processed within cellular components. The brain is an information-processing machine: there must be core storage, processing modules, communication buses and so on. Cognitive psychology is the application of computer programming techniques to the working of the human mind…

Of course this can be very helpful — it’s not wrong or dangerous in itself — but it carries within it certain blindspots and ways of thinking.

Second, what’s equally interesting is that there is an opposite psychological movement. We put our own humanity onto these machines: we anthropomorphise. I’ve become increasingly interested in the human tendency towards anthropomorphism and how very deep-rooted this is in our humanity.

I recently spoke at a conference at Durham University with senior church leaders on the theological implications of AI and robotics, during which we visited a university computer lab where there were four or five little Nao robots on the floor, all with blinking LED eyes, moving arms and talking. Immediately the atmosphere changed: a bishop got down on his hands and knees to talk to one robot, people were waving and smiling at the others. We immediately engaged in anthropomorphism.

As one journalist put it, “human compassion can be hacked”, and the technology companies have worked out how to do it. They make robots which don’t appear creepy or threatening, but childlike and even vulnerable, so that we will be attracted to them and find ourselves anthropomorphising.

Anthropomorphism is not under our conscious control; it’s so deeply rooted in humanity that it goes back to the very relational nature of being human. But it’s also wide open to abuse and manipulation.

Another fruitful line of enquiry is the distinction highlighted by Martin Buber between the I-it relationship and the I-you (or I-thou) relationship. Buber came from an existentialist philosophical point of view, and was a Jewish thinker deeply shaped by the Hasidic tradition. He grasps for something profound — almost inexpressible — about the I-you relationship, and makes the point that as human beings we start with I-you relationships: the very first relationship a developing child has is an I-you relationship. The I-it relationship comes subsequently, as the child works to differentiate between a you and an it.

In contrast to this profound way of thinking about I-you relations, an instrumentalist understanding of relationships seems common today, particularly among technologists. A relationship is something that makes me feel good, that gives me some purpose in my life, that evokes warm feelings — and that’s what relationships are for.

If a machine is capable of evoking a similar response within me, well, what’s the problem? It’s simply the same thing, only instead of a relationship with a human being, it is now a relationship with a machine. What’s the difference?

Imagine your elderly grandmother is lonely and depressed because no one comes to visit her. Then all of a sudden she has this wonderful friend who is always there for her and who makes her feel better. Her mood improves, she becomes much more outgoing and positive. Does it matter if it’s entirely clever programming? If it does, why does it matter? Who is being harmed if your grandmother’s mood is being improved by a simulated relationship? Nearly all technologists I discuss that with say “obviously it doesn’t matter”. When I talk to other people, quite a lot of them say “that’s really quite disturbing, but I can’t really put my finger on why it’s disturbing”.

I think we need to recover this difference between the I-you and the I-it relationship, and express it well in a world which may be increasingly dominated by simulated relationships. The machine is an ‘it’, but it feels like a ‘you’, as though there is someone there. I see this as one of the intense confusions we’re going to have.

The Church and Intelligent Machines

Q You mentioned a lack of concern among Christians and a reluctance to engage with the issues raised by artificial intelligence — what argument would you make to convince Christians to look at AI more thoughtfully?

I was very influenced by John Stott, when he was Rector of All Souls Church. One of his concepts is what he called “double listening”: we have not only to listen to the voice of the Spirit and to scripture, but also to the world and the questions and challenges that it is raising.

I would say if we are listening to the world and trying to understand the zeitgeist, then surely AI, robotics and information technology represent some of the most extraordinary and ubiquitous challenges that are coming. It is a failure of Christian leadership if we don’t recognise that and start to respond to it.

So often the church has found itself on the back foot. Extraordinary changes take place in society, and all of a sudden the church wakes up and says “hang on a minute, how did this happen, and why haven’t we thought about it?”. By that time the whole agenda has been set. I think there is an opportunity here for the church to be proactively engaged in the discussions taking place.

One of my colleagues attends a lot of technology events on AI and robotics as a representative of the Faraday Institute for Science and Religion. She gets two common reactions. One is amazement: “what has religion possibly got to do with technology?”. The second response is hostile, in which religion is seen as negative, Luddite, opposed to technological advance. So it seems to me there is an enormous amount of work to do.

Having said that, there is, I think, an openness. One of the things that strikes me is that there are very significant numbers of Christian believers working in the fields of technology, computer science and related disciplines. I think there is an opportunity for thoughtful, informed, engaged Christian voices to contribute to the conversation, at a time when fundamental thinking, ethical frameworks and regulations are being developed.

Q What kind of response would you like to see from the church, as the development of technologies like AI progresses?

Unfortunately the history of the church with evolving technology has been rather unimpressive (with a few notable exceptions). Often the general Christian approach to new technology is initial hostility and incomprehension, followed by grudging acceptance, then complete assimilation. Take television, for instance. When I was a child, television was regarded by many Christians as an evil or hostile invention which we should resist; then it was allowed on special occasions, and so on. But if you fast forward 50 years, Christians seem to have no different an understanding of television from anyone else in our society.

So I’d like to see a thoughtful analysis of new technology from the church, avoiding the two extremes of implicit hostility and thoughtless assimilation, and instead modelling critical and thoughtful engagement with technology.

But I think we have to do the original theological work to understand AI and robotics in the light of the Christian revelation.

Christians have been debating issues of life and death and politics and just war and all these things for thousands of years, but when it comes to intelligent machines there is no tradition of Christian thought to build on.

The danger then is that we have superficial and unthought-through responses. So I do see the need for serious Christian reflection which finds a way of taking the history of Christian thought and biblical theology and narrative, and then connecting that to these explosive developments in technology.

I also think there are going to be many casualties of rapid technological innovation. Technological unemployment, for example, will raise considerable pastoral challenges in helping those who are left behind. That is another important area: the church must be aware of the huge social changes that are taking place, and are likely to take place, over the next decades.

Q Is there another extreme to avoid? That we focus so much on the mid- or far-future risks that we don’t examine our use of technology today?

I think that’s absolutely right. Another under-emphasised point is that, philosophically, Silicon Valley comes out of an extraordinarily positive, can-do, optimistic understanding of reality and of human society.

If you look at the history of the internet and the world wide web, it was going to create wonderful new opportunities. But, because there was such a wave of optimism about technology, few predicted the evil consequences: computer viruses, cyber crime, phishing attacks, social media abuse and so on.

I think it’s interesting that the techno-optimist worldview doesn’t really have a category for evil. It has a category for mistakes, for bad programming, for accidental catastrophe, but it doesn’t really have a category for malevolence.

I think that my real concerns about AI and robotics are not so much to do with Terminator-style super-intelligence, but more to do with what evil human beings could do, how they could subvert and abuse this technology — in ways which at the moment we find almost impossible to imagine.

This is an area where religious people and Christians have something to contribute, because we do believe in evil, we do believe in the Fall. Being aware of adverse possibilities, thinking through the possible evil consequences and taking steps to try to minimise the harm that might result is, I think, a very important part of our contribution to society.

The Church and Wider Society

Q Wider society is going through some growing pains with technology at the moment. Can a Christian perspective help or contribute?

The secular meta-narrative is becoming that we start from slime, we evolve into our current human state, then we create machines that increasingly outstrip us, and that, in a way, is our purpose. The purpose of existence is to progress and evolve and allow technology to move on to the next stage.

I think in response Christians need to find ways of expressing their own very different narrative, which starts before the foundation of the world: the narrative of creation and fall, of the God who intervenes in and supervises his creation, who enters into creation in the incarnation, who redeems evil and transforms it into good, and who is ultimately going to intervene again in the creation of the new heavens and the new earth (and technology perhaps plays a part in that new creation).

So I think telling that story, finding ways of articulating an alternative narrative, is the challenge in the technological era at the moment. There are very few people around who seem able to articulate that narrative.

Q Mustafa Suleyman, co-founder of DeepMind, recently wrote in the FT: “All of us who believe in the power of technology must do everything we can to ensure these systems reflect humanity’s highest collective selves”. Do you think society has a shared understanding of humanity’s highest collective selves, and, again, is this something the church has a role in developing?

Well certainly yes to the second part.

There is a certain naivety among technologists when it comes to fundamental philosophical and ethical realities, because it’s not their background, it’s not their way of thinking; philosophy and theology are generally not part of a technical education.

I think there are extraordinary complexities here, and technologists unfortunately can be naive about issues of human values and human flourishing. I think, yes, the church does have great wisdom and theological resources, but at the moment building a bridge between the world of technology and the world of Christian thought is extraordinarily difficult. There is no simple connection between 2,000 years of Christian thought and reflection and how to design computer code to ensure that AI systems will reflect human values.

I think it’s very important that the church is recognised as having a role — but in order to have a role we have to be informed and we have to be thoughtful.

Many thanks, John.

John’s talk at the Being Human: AI, transhumanism and SETI conference, part of the Equipping Christian Leadership in an Age of Science project at St John’s College, Durham University, can be watched here.

To hear about future interviews or other blog posts, follow Subtle Engine on Twitter or sign up for occasional email updates.
