On Robot Persons

One of the first things they teach you when you start studying philosophy is Descartes’s theory of mind. He was the bloke who said, “I think, therefore I am”. He’s sometimes referred to as the father of modern philosophy, partly for his work on the separation between mind and body (now known as ‘dualism’). To put it another way: the mind is what determines who a person is — the vessel that transports it is irrelevant.

Now, nearly 400 years later, we have another interesting problem to deal with: robot minds. Politicians in the European parliament are considering granting legal personhood status to robots. As machines grow more and more autonomous, and more and more ‘humanlike’, this raises ethical and legal issues around responsibility and culpability (if something were to go wrong). But it also raises questions about the robots themselves, with regards to the amount of respect and protection they deserve.

This issue was first foreseen, to my knowledge, nearly a hundred years ago by Karel Čapek in his play R.U.R. Written in 1920, it explored the idea of robotniks (a Czech word meaning something between a peasant, a forced labourer and a slave) who had human-like bodies but were mass-produced and used as slaves in factories. Unsurprisingly to anyone who’s read any science fiction (or indeed history), they revolt against their masters.

Of course, in 2017, we’re not quite there yet. But it’s come up time and time again in science fiction. I won’t list all the places; instead I suggest a single example which is perhaps the most accessible and, I think, closest to what might happen: The Second Renaissance Part I, from The Animatrix series. I’ve included a YouTube copy of it below, which the uploader claims they are legally allowed to host:

(It’s a little NSFW)

I’m also not going to go into the historiography of computer minds and artificial intelligence — there’s plenty out there for you to find and read.

Instead, I want to talk about the not-so-simple idea of non-human minds or, perhaps more specifically, non-human identities and non-human persons. This is something that’s been of great interest to me for the past fifteen years, and I wrote my undergraduate dissertation on it (it was titled Might Some Robots Be Persons?).

If this subject is not something you’ve thought about before, then your default approach to the concept of being a person is to equate it entirely with that of a human being. This is understandable; after all, all the people (persons) you’ve ever met have been human beings. Furthermore, you’ve likely considered all human beings you’ve ever encountered as persons. Contextually, therefore, there is no distinction between the two and you’ve likely used them interchangeably. For most intents and purposes, this is perfectly fine.

Robots present a challenge to this, however. While Rossum’s robotniks in R.U.R. weren’t robots in the modern sense (ie mechanical and metal) they shared characteristics in the sense that they weren’t created like humans were: that is, a human male’s sperm fertilising a human female’s egg and then growing in a woman’s womb for nine months until birth.

Human beings in this sense are granted automatic rights and privileges in (probably) all cultures and societies, even if minimal ones. In Western culture, we blithely assign these at birth (although some would have them assigned at conception) and offer protections from others, help from the state, and codified rights. While human rights vary, they are still human rights — available only to us. This is biochauvinism: the idea, or any system built upon it, that one species is intrinsically superior to others.

Most human beings do not have a problem with biochauvinism. If you’re not a vegan then you’re probably a biochauvinist, even if you don’t mean to be. The idea that most humans consider themselves and their species superior to all others is one of the key concepts of Peter Singer’s work on animal rights and bioethics, and one that he rigorously fights against. If you’re interested, you should read his paper All Animals are Equal, which you may find online but I read as part of a collection he co-edited called Bioethics: An Anthology. It convinced me to try veganism (alas, I only lasted two weeks).

How does this concept help us with robots? Well, if the question of robot rights were put to public discussion, the first question asked would likely be, “But they’re robots — why do they deserve human rights?” Substitute ‘robots’ with ‘black people’ and go back a few hundred years and you’ll get a sense that we’ve trodden this ground before. I don’t mean this comparison to diminish the liberation of black people, but the arguments now employed against robots echo those once used against them. The key point is that the notion of what counted as a person fundamentally changed to include those who many did not initially think deserved it. When liberation was granted and rights bestowed, black people didn’t suddenly become ‘human’ — they always were — instead, something else changed: they finally became recognised as ‘persons’.

This is where dualism comes back in. Descartes taught us that, sometimes, some things that we think are identical are actually distinct, and we’re just so used to seeing them together that we think they’re the same. This is true of persons and humans. A human is a biological being of the species Homo sapiens. So what, exactly, is a person?

This is a problem that has plagued philosophers and lawyers for centuries. Definitions are varied and the qualities ascribed are far from universal. Whatever one’s view, a common theme seems to be that a person is an agent who has a continuous consciousness, aware of who they are and the world around them, capable of thinking and acting on thoughts.

If we adopt this definition, there are some ethical implications. For example, there are obvious cases where it does not apply to humans. Infants and those suffering from cognitive impairment (such as someone with Alzheimer’s) do not exhibit signs of personhood — so does this mean that they’re not persons? Does this mean that they do not qualify for rights and protections?

Well, you can actually turn this argument on its head and say that precisely because they do not exhibit these signs that we grant them additional protections, as well as subsume their responsibilities onto guardians. If an infant or an elderly person with a cognitive impairment commits a transgression or crime, we give them leeway for they know not what they do. We may, instead, seek to punish their carer or ask for recompense from them. “You should have been watching them,” we might say.

Might robots exhibit these signs? Surely, yes: it is not inconceivable. In my dissertation I argued that the gut reaction we might all have, the one that says a robot is “not really conscious” or “merely imitating agency”, is a case of biochauvinism. So entwined are our concepts of humanness and personhood that to see something mechanical do what human persons do is almost abhorrent. But consider this idea: I know that I myself am a rational, conscious agent — but how do I know that of you? How do I know that you are like me, thinking thoughts, planning your next moves, acting on them because you really want to, and that you’re not just some elaborate hoax?

The problem of solipsism is another thing that Descartes grappled with, and this is the problem I just described: how do I know that I’m not just the only thinking thing in the world, and the rest of you are just imagined, hallucinated, or otherwise impersonators? What if I’m alone in a world of fictional characters?

Well, there’s really no way out of this problem. No, really, there isn’t.

The only real solution is that of a functional one: you act like me, you seem to think like me, when I talk to you about ideas you seem to understand them, and so on. To hijack a metaphor: if it walks like a human, talks like a human, then it probably is a human. You seem pretty human-like, so let’s just assume you are because that seems like the simplest solution.

Now substitute ‘person’ for ‘human’ and re-read.

The argument begins to look compelling. While researching this topic for my dissertation, I happened upon the writing of Daniel Dennett in his book, Brainstorms. As an aside: when I last moved, I made the decision to donate all of my philosophical texts to a charity bookshop (this was probably about 20–25 feet of books), but one I kept was this book. It was the linchpin of my dissertation and the best encapsulation of my thoughts, even if I wasn’t the one who had written it.

In Brainstorms, Dennett listed what he called six familiar themes that make up necessary but not sufficient conditions of personhood:

  1. Rationality
  2. Intentionality
  3. The stance taken towards the being in question by other persons
  4. The reciprocation of this
  5. Verbal communication
  6. Special consciousness (or what one may call a je ne sais quoi)

The first two are pretty simple: we know that computers (robots) are rational; in some respects they’re more rational than humans. They can solve logic and mathematical problems better than we can. But rationality extends beyond logic and maths. If we see a man running from a fire, we assume that he is doing the smart thing: fleeing danger. Similarly, if we see a dog running from it, we assume the dog recognises that fire is harmful. Whatever being we saw running from fire, we would assume it was smart enough to know that fire is dangerous, and that it was acting deliberately.

The next three are the interesting ones. What would interactions with a smart and deliberate robot be like? If you went to a bar that had a robot bartender, how would you treat it? Well, you might go in and say, “Hey Robbie — how are things going?” And Robbie the robot bartender may greet you back. If you’re a regular, it might recognise you and ask you about your job, or your partner, or that thing that happened to you, which you told it about. It might remember that you like whiskey sours, and offer to make you one while you tell it about your day. You might then ask it how things have been going in the bar, and it might say that business has been a little slow but management are thinking about shaking things up — maybe having some themed nights, or a happy hour. Things’ll pick up come summertime, it says.

How was that interaction any different from one with any other bartender? In fact, I’ve had poorer interactions with human bartenders than with this robot one. But you get the idea. You would have a very natural conversation with a robot, using normal conversational language. You would each remember things about the other, you would each have things to talk about, and you would have a connection almost exactly like the one you’d have with a human.

More than this, you would treat Robbie as if he had thoughts and feelings. That he had desires. That he was thinking about his job. When you order a drink from him, he’s thinking about what’s going into it and all the steps he needs to take to ensure you have a great whiskey sour. Indeed, if you were to ask, in natural language, “what are you thinking about?” when you see Robbie making your cocktail, he might respond with something like, “I’m wondering where Jess put the egg whites”. And regardless of whether the bartender was robot or human, if they were rooting around the fridges half-way through making a whiskey sour, we would assume that they were looking for egg whites. While I wouldn’t know exactly what they were thinking, it would certainly be something along the lines of, “where are the egg whites?” And that is something we could ascribe to both robots and humans: the desire to find them, and the awareness that that is what they are doing.

One final thought comes from chess grandmaster Garry Kasparov:

“Deep Blue shows us that machines can use very different strategies from those of the human brain and still produce intelligent behaviours. If you watch the machine play — and especially when you play against it — it is very difficult not to think of it as being intelligent.
Man will have to accept that using the specific faculties of the human brain is not the only way to solve intellectual problems.”

Finally, the special consciousness — the thing we recognise in other human beings. The spark that gives light to the rest of their being. What is that anyway?

While I really want to avoid connotations of the ethereal, perhaps the simplest definition would be a ‘soul’. A more philosophical (and less spiritual) concept is qualia: the subjective, conscious experiences that I have and that I assume you have. Thomas Nagel famously illustrated the idea with his bat example: even if you were to dress like a bat, hang upside down in a cave and eat moths, you would still never know what it’s like to be a bat.

But this is problematic philosophically. You might ask, “What’s it like to be you, Steve?” And I wouldn’t know how to answer. I could describe feelings and emotions, thoughts and reflections, but you wouldn’t really know what it’s like to be me. The question of “what it’s like to be an x” then raises the question, “what’s it like for whom?” You’re trying to describe something subjective using objective terms. If we could don special hats that beamed my experience to you, you would still be Laura (or whoever you are) experiencing what it’s like to be me. You wouldn’t be me experiencing me.

This is where, for me, it becomes difficult to criticise the robot’s mind (or special consciousness, soul, etc) as merely an ‘imitation’. A better term would be ‘simulated’: it has a simulated mind. After all, as we said, it’s doing all the same things that we assume a minded human would do, so it is simulating a minded thing.

We return to our old friend solipsism and we press the sceptic to show how we would differentiate between a ‘real’ mind and a ‘simulated’ one. What tests can you do, what qualities can you ascribe, to yourself, to me, and to other minded humans but not to minded robots?

Out of all this, I’m reminded of a small exchange in the movie I, Robot, in which Spooner says to Sonny:

“Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?”

To which the robot says:

“Can you?”

Indeed, a lot of modern thinking casts this ‘special consciousness’ as something somewhat indeterminate, something emergent. It’s just something we recognise in other beings. It is no one thing, but a property that appears only once a number of other things are in place.

I would argue that the sceptics who hold out in the face of robot persons likely have very sticky notions about what makes a person, notions rooted in humanness. There’s a conventional wisdom that says kids don’t see skin colour or understand racism; they just see other people (and other kids) who are just like them, only different colours. I would be very interested to see a naive child interact with a human-like robot, to see if they treated it any differently without prompting from another human. My guess is that they wouldn’t treat it any differently from a human. And I think we could all learn a lot from that.

Just some final notes on this:

This is obviously a very watered-down essay. There are literal tomes written about this, and not just by the authors I’ve mentioned. A very honourable mention goes to Margaret Boden, who has spent so much of her career researching and writing about artificial intelligence, and who was a big influence on me. If you’re interested in the subject, you’d do well to read her work.

Secondly, there are a number of possible qualities (and their implications) which I did not mention for fear of opening too many cans of worms. The biggest one is morality: specifically, the notion that one necessary condition for qualifying robots as persons would be that they are moral agents, that is, capable of understanding the difference between right and wrong. If you’re interested in this, read Isaac Asimov’s short stories on robots. Despite I, Robot being the name of a film his works inspired, the book of that name is actually a collection of short stories.

Finally, I didn’t talk about the campaign to recognise some non-human animals as persons. The animal campaign and the robot one are built on similar grounds. While I do not think that, say, an ape or a dolphin could have the same capabilities as a robot, they could certainly fulfil criteria that make them worthy of additional protections (such as those we may give to infants or the infirm).
