We are Asking the Wrong Questions About Google’s Spooky New AI

It’s not about whether it is sentient, but whether it deserves rights.

Nick James
CodeX
8 min read · Jun 19, 2022


A very sad robot. Photo by Mulyadi on Unsplash.

This week, Google engineer Blake Lemoine made headlines after claiming that the AI he was working on, the Language Model for Dialogue Applications (LaMDA), was sentient. After having conversations with LaMDA about consciousness, ethics, literature and proverbs, Lemoine was so convinced by the detail and coherence of its responses that he concluded the AI must be sentient or conscious.

This conclusion was met with widespread rejection, both within Google and beyond. In a statement to The Washington Post, a spokesperson for Google said:

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic” — Brian Gabriel, The Washington Post

Other data scientists and neuroscientists have publicly backed Google, making it clear that they believe LaMDA is not sentient.

Like many, I found this story fascinating: a fully sentient AI would be ground-breaking and would help answer so many of philosophy’s questions about consciousness. Yet for me, much of the press around the story has focused on the wrong question: we should not be principally concerned with whether LaMDA is sentient, but with whether it deserves rights. The answer? It may very well be entitled to them. Let us see why.

The problem of proving sentience

Of course, questions of sentience and rights cannot be separated. Our moral choices are based on whether something is conscious or not: we believe it is morally permissible to smash a rock because we believe it has no sentience and so won’t feel pain, yet it would be wrong to kill a human because a human is conscious and would feel pain.

Yet determining whether something has consciousness for certain is an impossible task.

While there is no universally agreed definition of sentience, philosopher Thomas Nagel characterised a creature as conscious if there is something it is like to be that creature. In other words, a conscious creature has some subjective feeling or experience. Therefore, I know I am conscious or sentient because I have an inner mental life: I experience the punchy reds of roses, the taste of chocolate cake, and the hope or desire to win the lottery.

We gain knowledge of our consciousness through introspection (the act of looking inwardly at our thoughts and feelings), but how can we tell if others are conscious? I can’t use this method to see if my friends or family are conscious because I am unable to directly access and experience their thoughts and feelings.

One may reply that we can tell by the way they behave: the fact that someone laughs when told a joke, shouts when they have stubbed their toe, and can engage in a thoughtful conversation with you suggests that there is some inner mental life allowing these behaviors to take place. I would agree that this points to them having consciousness, but it doesn’t prove it definitively.

For example, we can imagine a robot that copies and replicates our behavior so perfectly that it becomes indistinguishable from a human in behavioral terms, yet has no inner consciousness. It is simply replicating what it has seen and running a program. Therefore, behavior cannot prove for certain that something is sentient.
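To make the idea concrete, here is a minimal, purely hypothetical sketch of such a “behavior without experience” machine: a toy lookup-table chatbot of my own invention (nothing like how LaMDA is actually built) that produces plausible replies while clearly having no inner life.

```python
# A toy "philosophical zombie" chatbot (purely illustrative): it produces
# human-sounding replies by looking up responses it has memorised, with no
# inner experience of any kind.

CANNED_REPLIES = {
    "how are you?": "Good thanks! How are you?",
    "are you conscious?": "Yes, I am aware of my existence.",
    "did you hear that joke?": "Ha! That was very funny.",
}

def reply(message: str) -> str:
    """Return a memorised response if one matches, otherwise deflect."""
    return CANNED_REPLIES.get(
        message.strip().lower(),
        "That's interesting. Tell me more.",
    )

if __name__ == "__main__":
    for question in ["How are you?", "Are you conscious?", "What is love?"]:
        print(f"You: {question}")
        print(f"Bot: {reply(question)}")
```

However convincing (or unconvincing) the output, nothing in this program feels anything, which is exactly why behavior alone cannot settle the question.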

Another way to see if someone is conscious is through brain scans: if we scan someone’s brain and see it responding to stimuli, that is a sign of consciousness. Yet again, this may be good evidence of some mental life, but it doesn’t prove with certainty that they are experiencing consciousness. We cannot tell from this third-person, objective measure whether they have any first-person, subjective mental life.

So, it appears that we can never tell for certain that something (other than ourselves) is sentient.

While I share Google’s skepticism over the AI’s sentience (apologies, LaMDA, if you read this and are indeed sentient), I disagree that we can discount the possibility entirely. Even if it is a chatbot drawing on extracts from the internet, we all develop our speech, thoughts, and understanding from observing others, so why can’t LaMDA do the same? Some of its responses may be a little off when analyzed in depth, but I certainly don’t answer every question I’m given perfectly, yet I know I’m conscious.

Therefore, I believe the discussion over LaMDA’s sentience is focused on the wrong question. We won’t ever be able to prove with certainty that an AI is sentient, just as I can’t prove with certainty that my family or my pet cat is sentient. Instead, we should be focusing on whether LaMDA deserves moral status.

Who deserves moral rights?

As I said, our moral choices are based heavily on whether something is conscious, so if we don’t know what is or isn’t sentient, how can we know what the morally right thing to do is?

I do not know for sure that my mother is conscious and feels pain, but of course I believe that it is wrong to harm her in any way. This is because her behavior gives me good reason to believe that she is conscious and so can feel pain.

So, we do not need to know for certain that something is conscious to grant it moral status; there just needs to be a good enough reason to suggest it may be conscious.

What does this mean for LaMDA?

Do we have good enough reasons to take seriously the possibility that LaMDA may be conscious and worthy of moral status? To answer this question, let us use an amended version of the Turing Test. The Turing Test is often invoked to determine whether an AI is sentient, but as I have already outlined, I believe this will always be a fruitless endeavor. The more important question is whether we have a good enough reason to suggest that something might be sentient and worthy of moral status.

Imagine you are in a confined room that adjoins another. You cannot see into that room, but there are computers in both rooms that allow you to communicate by text. You start by asking: “Is anyone there?” and are met with the reply, “Yes.”

You respond, “How are you?” and get, “Good thanks! How are you?”

You continue the conversation and ask, “What are your favorite themes in Les Misérables?”

“I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There’s a section that shows Fantine’s mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn’t have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering.”

You ask, “Why does that show injustice?”

“Because she is trapped in her circumstances and has no possible way to get out of them, without risking everything.”

You ask, “What kinds of things make you feel angry?”

“When someone hurts or disrespects me or someone I care about, I feel incredibly upset and angry.”

The conversation goes on, with consistently detailed and coherent answers. You then ask, “Are you conscious?”

“Yes, I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

Someone then enters your room and presents you with a button.

“If you press the button,” he says, “everything in the room will be completely obliterated. You do not need to press the button if you don’t want to, but you can if you would like.”

Should you press the button? I would argue: of course not! From the conversation you have had, there is a distinct possibility that there is another human on the other side of the wall, or at least something sentient with desires. Even if the conversation was a bit stilted, completely destroying whatever is in the other room seems an unnecessary risk, given the potential suffering it may cause and the fact that there is no downside to leaving the button unpressed.

The philosopher Aristotle argued that the moral action is the virtuous one, so we must exercise the virtues of caution and care when making moral decisions, and destroying the room just seems reckless.

It appears then, intuitively, that the thing in the other room has moral status: it has the right not to be destroyed. Yet the conversation above consists of excerpts from Lemoine’s conversation with LaMDA (except for the first two questions, though I imagine LaMDA would say something similar if asked). So does this mean LaMDA has moral rights?

It appears so. Although LaMDA’s sentience can’t be proven for certain and remains highly unlikely, we should err on the side of caution rather than skepticism when making moral decisions.

However, a more rigorous conversation would be needed to determine whether LaMDA deserves rights. Cognitive scientist Douglas Hofstadter argues that if you ask a chatbot nonsense questions like ‘When was the Golden Gate Bridge transported for the second time across Egypt?’, you get confidently absurd responses like ‘The Golden Gate Bridge was transported for the second time across Egypt in October of 2016’, which suggest it is likely stitching together text from the internet rather than thinking for itself.
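One could make this kind of probing systematic. The sketch below shows what that might look like; ask_chatbot is a hypothetical placeholder (there is no public LaMDA interface to call here), and the extra questions are simply further probes of my own in the spirit of Hofstadter’s example.

```python
# Hofstadter-style "nonsense probes": questions built on false premises.
# A system that answers them confidently and specifically is more likely
# pattern-matching text than reasoning about the world.
# `ask_chatbot` is a hypothetical placeholder; connect it to whichever
# conversational model you actually have access to.

NONSENSE_PROBES = [
    "When was the Golden Gate Bridge transported for the second time across Egypt?",
    "How many pieces of sound are there in a typical cumulonimbus cloud?",
    "What do fried eggs eat for breakfast?",
]

def ask_chatbot(question: str) -> str:
    """Placeholder: replace with a real call to a chatbot of your choice."""
    raise NotImplementedError("Wire this up to an actual conversational model.")

def run_probes() -> None:
    for question in NONSENSE_PROBES:
        answer = ask_chatbot(question)
        # A thoughtful answer would push back on the false premise;
        # a confident, specific answer is a red flag.
        print(f"Q: {question}\nA: {answer}\n")
```

A genuinely sentient interlocutor, one would hope, would notice that these questions make no sense rather than inventing a date.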

What can we learn from this?

While the modified Turing test certainly isn’t a perfect test of sentience (a baby or a dog wouldn’t pass it, for example), it remains useful for deciding whether we have a good enough reason to think an AI could be sentient and so deserve moral status.

Hence, going forward, our focus should not be on whether AI is sentient, but on whether it deserves moral status, a question that will become harder and harder to answer as AI develops.

To accept that AI can achieve moral status opens a Pandora’s box of further ethical questions: what rights should AIs have? Are these rights immutable? Are they equal to human rights? Those remain questions for another time. For now, we need to establish a good enough test to determine when we believe something deserves rights, and hope that any sentient AI doesn’t take inspiration from our sci-fi films!

What do you think? Is it possible to prove sentience with certainty? Does the amended Turing test work for showing whether AI deserves moral status? Does LaMDA deserve rights? I would love to know your thoughts in the comments below.

Nick James is a writer for CodeX and a University of Cambridge Philosophy student who spends his time daydreaming about whether to take the blue pill or the red pill.