Ethics of AI: The Chinese Room

Strong AI, and how we might recognize it

Devin Caplow Munro
Apr 25, 2017
Photo Credit: Google Deep Dream

Suppose that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so you could enter it as if it were a mill. This being supposed, you might visit inside; but what would you observe there? Nothing but parts which push and move each other, and never anything that could explain perception.

- Gottfried Wilhelm Leibniz

What Leibniz illustrates is a philosophical problem known as the mind-body problem. Simply put, it asks how the mind and the body are related. It asks how our brains make us experience the world so that there is something inside us that looks out.

When Leibniz proposed his mill it was purely hypothetical. He never expected to actually step inside a brain and look around. But now it’s possible we will have that opportunity.

The quote above is stolen from a book by popular futurist Ray Kurzweil called ‘How to Create a Mind.’ Kurzweil represents a growing camp of technologists and neuroscientists who think that soon we might actually be able to make an artificial “brain” of equal or greater power than our own.

Although neuroscientists are still searching for answers about how biological brains work, computer scientists are already using principles borrowed from the biological world. Deep neural nets power things we use every day like Google Translate, Siri, and autonomous cars, and the way they process information bears striking similarities to biological brains.

What if we created an artificial brain that could experience the world? What would it mean for us to look inside and see nothing but automata?

The Chinese Room

First of all, there is an elephant in the room that Leibniz failed to mention. He asks us to suppose a machine that produces perception, and most of us think of a brain, because we know that a brain produces perception. But we don’t know that a mechanized replica of a brain produces perception.

We know a brain produces perception because each of us is perceiving the world using our own brain. Each individual’s perception provides that individual with evidence that at least one brain can perceive. Most of us freely extend this assumption to the brains of other people, imagining that the similarities our bodies share indicate a similar capacity for consciousness.

The argument from similarity of body does not apply to computers, since they are not similar to our brains, so we don’t have the same assurance that they can produce perception.

Philosopher John Searle illustrated this idea using a thought experiment called the Chinese Room. He asks us to imagine a person in a room, who receives messages in Chinese slipped under the door. The person doesn’t speak Chinese, but the room contains specific instructions, written in a language the person understands, about how to respond. By following the instructions, the person is able to interact with those outside the room as if they understand Chinese, though they, in fact, do not.

Searle’s Chinese Room

People outside the room might imagine that there is someone inside the room who understands Chinese, but actually the person inside the room has no understanding of Chinese at all.

The thought experiment serves to illustrate that it’s possible to create something that interacts in a way that simulates understanding without having any understanding of its own. Searle argues that computer programs are examples of this phenomenon because they follow rigid and predetermined sets of instructions. There is no reason to believe that they are conscious, even if they seem to be.

Though Searle goes so far as to argue that computers cannot be conscious, I use the example here simply to illustrate the uncertainty of consciousness from an external viewpoint. No matter how convinced we are that it can have experiences, we have no way of knowing that an artificial brain is not a Chinese Room. We are confined to looking at artificial intelligence from the outside in.

Strong AI

A more fruitful strategy defines AI by its capabilities. We can sidestep the problem of consciousness by asking instead whether the computer is as smart as we are. The term “strong AI” most commonly refers to a computer program that can perform any mental task a human can.

Hamlet (2008)

We say that Siri can “understand” spoken language because it can transcribe audible words to text and process the text in a useful way. A strong AI would presumably be able to hold a conversation as easily as a human could.

Avengers: Age of Ultron (2015)

But we shouldn’t pretend that a strong AI would be the same as a human. We don’t know if or how this type of AI might be created, but popular culture has already produced an idea of what it would look like. We tend to anthropomorphize strong AIs, imagining that the ability to think as well as we do requires similar sensation and emotion.

In reality we have no assurance that an AI capable of thinking as well as or better than we do would bear any particular similarity to us. We don’t know enough about cognition to say whether emotions, desires, senses of time or space, or any other quality of the human mind is essential to its general usefulness and creativity. This uncertainty makes the term slippery: we still can’t pin down exactly what we mean by a strong AI.

The Turing Test

Often called the father of modern computing, Alan Turing was already comparing computers to brains in 1950. At the time the most powerful supercomputers in the world were orders of magnitude less powerful than an original iPhone, but Turing believed the fundamental structure of computers would allow us to build artificial minds that rivaled our own.

Turing recognized the difficulty of classifying and detecting artificial intelligence, so he devised a test that could serve as a practical measure of the intelligence a machine displays. He called it the Imitation Game, but it has come to be known as the Turing Test:

The new form of the problem can be described in terms of a game which we call the ‘imitation game.’ It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A.” […]

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

- Alan Turing, 1950

Instead of asking how intelligent the machine is, Turing gives the machine a simple yet deceptively difficult challenge. He asks the machine to impersonate a human being. If a human can’t tell the difference between chatting (via text) with the machine and chatting with another human, the machine is said to have “passed” Turing’s test.

Turing asks a question which can actually be answered. Numerous Turing tests are administered every year. The Loebner Prize is a competition between AIs known as chatbots, which are designed to beat a Turing test.

Unfortunately, the Turing test might be passable without anything that could be considered strong AI. Instead of building a true thinking program, the approach to beating the Turing test that has so far proved most viable is a sort of “brute-force” method. Most chatbots, including Mitsuku, the 2016 Loebner Prize winner, employ vast lists of common conversation snippets and use a few algorithmic tricks to string together pre-written responses. When the operation of the chatbot is so easy to understand once explained, one is reminded of the Chinese Room experiment. Not all that appears to have understanding truly does.
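To give a flavour of what those algorithmic tricks can look like, here is a minimal sketch of the pattern-matching approach in Python. The rules and responses are invented for illustration; real contest bots such as Mitsuku rely on enormous hand-curated rule bases (often written in AIML), but the underlying idea is similar.

```python
import re
import random

# A few hand-written rules mapping recognizable phrases to canned responses.
# These rules are invented for illustration; contest bots use tens of
# thousands of them, but each response is still looked up, not understood.
RULES = [
    (r"my name is (\w+)", ["Nice to meet you, {0}!", "Hello, {0}."]),
    (r"how are you",      ["I'm doing well, thanks for asking.", "Can't complain!"]),
    (r"weather",          ["I don't get outside much, being a program."]),
    (r"\bwhy\b",          ["Why do you think?", "That's a deep question."]),
]

FALLBACKS = ["Interesting. Tell me more.", "I see. Go on.", "What makes you say that?"]

def respond(message: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, responses in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("Hi, my name is Ada"))             # -> "Nice to meet you, Ada!"
print(respond("Tell me about quantum physics"))  # -> a generic fallback
```

However large the rule base grows, every reply is ultimately retrieved rather than understood, which is exactly the situation of the person in Searle’s room.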

We have spent a lot of time talking about the problems with defining artificial intelligence, but I’d like to end on an optimistic note.

There is still hope that we can build a thinking machine. I mentioned neural nets at the beginning of this post, and I’d like to return to them now because they are amazing.

Neural nets of many different kinds are getting more and more impressive. Google’s DeepMind recently created a program that bested the human world champion at the game of Go, a strategy game orders of magnitude more complex than chess and long thought to be out of reach for computers. Convolutional neural networks let us identify objects in an image regardless of angle, even when they are partially obscured, a task which is so much harder than it sounds that it would take another blog post to explain just how hard.

Deep neural nets are special because we don’t exactly “program” them to do what we want them to do. Instead we teach them. They contain astronomically complex mappings between inputs and outputs that can be successively updated in response to new stimuli. Here’s an example of what I’m talking about:

Recurrent neural networks recognize and reproduce sequential patterns in information. One algorithm read the complete works of Shakespeare and then tried to reproduce them. Here is a sample of its output:

VIOLA:
Why, Salisbury must find his flesh and thought
That which I am not aps, not a man and in fire,
To show the reining of the raven and the wars
To grace my hand reproach within, and not a fair are hand,
That Caesar and my goodly father's world;
When I was heaven of presence and our fleets,
We spare with hours, but cut thy council I am great,
Murdered and by thy master's ready there
My power to give thee but so much as hell:
Some service in the noble bondman here,
Would show him to her wine.

KING LEAR:
O, if you were a feeble sight, the courtesy of your law,
Your sight and several breath, will wear the gods
With his heads, and my hands are wonder'd at the deeds,
So drop upon your lordship's head, and your opinion
Shall be against your honour.

It’s important to realize that this particular algorithm wasn’t taught the meaning of the English language or its characters. It was simply given a list of all the letters in all the Shakespeare plays in order, and from that it was able to learn spelling, grammar and even stylistic structure. Of course, it didn’t learn the meaning of the words. It would be impossible to converse with an algorithm like this and not notice that it was spouting grammatically correct gibberish, but futurists like Ray Kurzweil believe that this capacity to “babble” is the first step towards a more holistic kind of intelligence.
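To make that concrete, here is a minimal sketch of how such a character-level model might be trained, using PyTorch. The file name, network size, and training schedule are placeholders for illustration, not the actual setup behind the sample above.

```python
# A toy character-level language model, sketched with PyTorch.
# "shakespeare.txt" is a stand-in for whatever corpus was actually used,
# and this network is far smaller than anything that produced the sample above.
import torch
import torch.nn as nn

text = open("shakespeare.txt").read()        # the plays as one long string
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}   # character -> integer id
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        out, state = self.rnn(self.embed(x), state)
        return self.head(out), state         # logits over the next character

model = CharRNN(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch_size = 100, 32

for step in range(10_000):
    # Sample random windows of text; the target is the same window shifted by
    # one, so the model is always answering "which character comes next?"
    starts = torch.randint(0, len(data) - seq_len - 1, (batch_size,)).tolist()
    x = torch.stack([data[s : s + seq_len] for s in starts])
    y = torch.stack([data[s + 1 : s + seq_len + 1] for s in starts])

    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, text is generated one character at a time: sample from the model’s predicted distribution over the next character, feed that character back in as the next input, and repeat. That feedback loop is what produces the play-shaped babble above.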

I can’t rigorously defend this claim, but I think these programs seem like more than just a Chinese Room. The Chinese Room argument assumes that a second person, someone who does understand Chinese, must have written the instructions. But a deep neural network can learn to translate English to Chinese without being taught a single rule. Instead it learns much the way we do: by repetition and reinforcement.

Perhaps as we learn more about what makes neural nets work we will find a convincing reason to believe they can have experiences.
