[Header image: PaRappa the Rapper / Masaya Matsuura]

Chinese Room Takeaway

Wiping the anti-AI slate clean with toilet paper

Jason Hutchens
Published in The Magic Pantry · Nov 5, 2013


The Chinese Room thought experiment was proposed by the philosopher John Searle to demonstrate that the Turing Test is not a sufficient indicator of genuine understanding.

In the experiment, Searle assumes the role of a “human computer”, executing by hand a program that conducts a convincing Chinese conversation with a native speaker outside of the room.

The program consists of three large batches of Chinese writing and a book of rules, idiosyncrasies that result from Searle’s attempt to duplicate the functionality of SAM, a “story understanding” program written by Roger Schank and his colleagues at Yale in the 1970s.

Searle concludes that even though the Chinese Room passes the Turing Test, his own lack of understanding of the “meaningless squiggles” of the Chinese language means that the room doesn’t truly understand anything.

A New Kind of Turing Machine

Searle’s use of Chinese characters as the “formal symbols” of the system can be misleading, so we’ll reformulate the Chinese Room as a Turing Machine. As Weizenbaum pointed out, a Turing Machine can be implemented in all manner of seemingly trivial and ludicrous ways, including as a sheet of toilet paper covered with pebbles.

Imagine a room inside of which is Searle, a limitless supply of toilet paper, a bottomless bucket of small, smooth pebbles and a library of rule books. We’ll call such an arrangement a Searle-machine.

In order to execute a program, Searle gazes at the contents, or lack thereof, of the first square of the toilet paper. He consults the starting rule by flicking to the first page of the first book, which tells him:

1. to take a pebble, drop a pebble or leave things as they are;
2. to change his gaze to a neighbouring square of paper, or not; and
3. where to find the next rule in the library of books.

Searle follows this 1-2-3 algorithm unceasingly, shuffling pebbles about at a frantic pace, while the human being stationed outside the room continues to receive all the requisite signals to convince them that their interlocutor is intelligent. The thought experiment is alive and kicking; no single part of the system itself can be said to possess any kind of understanding whatsoever.
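For concreteness, here is a minimal sketch of a Searle-machine in Python. A pebble is a 1, an empty square a 0, and each rule bundles the three instructions listed above; the rule table and its toy program are invented for illustration, not taken from Searle.

```python
# A minimal sketch of a Searle-machine: a two-symbol Turing machine
# whose tape squares either hold a pebble (1) or are empty (0).
# Each rule is keyed by (page, symbol under gaze) and yields:
#   1. the symbol to leave behind (drop, take, or leave the pebble);
#   2. which way to shift the gaze (-1 left, +1 right, 0 stay);
#   3. where in the library to find the next rule.
RULES = {
    # A toy program: walk right, dropping pebbles on empty squares,
    # and halt upon reaching a square that already holds one.
    ("book1-page1", 0): (1, +1, "book1-page1"),
    ("book1-page1", 1): (1, 0, "halt"),
}

def run_searle_machine(tape, rules, start="book1-page1"):
    """Shuffle pebbles, rule by rule, until directed to halt."""
    gaze, rule = 0, start
    while rule != "halt":
        symbol, shift, rule = rules[(rule, tape[gaze])]
        tape[gaze] = symbol   # take a pebble, drop one, or leave things be
        gaze += shift         # move the gaze to a neighbouring square
    return tape

print(run_searle_machine([0, 0, 0, 1, 0], RULES))  # -> [1, 1, 1, 1, 0]
```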

The Solipsist Homunculus

It is clear that Searle’s role in the system, that of performing repetitive, mechanical pebble-shuffling, could easily be filled by a simple robotic arm. So why did Searle feel the need to become part of the system in order to declare that the system as a whole lacks true understanding?

One cannot help but feel that this is a solipsistic attitude. Unable to be convinced that understanding exists in the system by observing its external behaviour, Searle imagines what it would be like to become intimately entwined with its innards, using his presence within as a means to draw conclusions about the system as a whole.

That such an offensively simple system as a robotic arm shuffling pebbles about as it traverses a sheet of toilet paper can hold an intelligent conversation with a human being may convince us that machines will never properly understand anything. But we should not be so hasty, for a Searle-machine is capable of performing a vast array of non-trivial computations. To deny this would be to deny the universality of digital computers.

Imagine connecting the Searle-machine to a large widescreen TV and surround-sound system, each of which has been modified to scan the arrangement of pebbles on particular sections of the toilet paper for its data. Imagine also that a video game controller has been wired up to deposit pebbles on other sections of the paper as it is manipulated. With an appropriate initial arrangement of pebbles and collection of books, a frantic Professor Searle could compute the latest video game, unaware of the enjoyment experienced by the aficionado playing it.
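This arrangement is, in effect, memory-mapped I/O. A rough sketch, with region boundaries and names invented purely for illustration: the peripherals read and write agreed-upon stretches of the one long sheet of paper, and the rule books treat those squares like any others.

```python
# A sketch of memory-mapped I/O on the Searle-machine's tape. The
# region boundaries and helper names below are invented assumptions.
TAPE_LENGTH = 1_000_000
FRAMEBUFFER = slice(0, 921_600)            # the modified TV scans these squares
CONTROLLER  = slice(921_600, 921_616)      # button presses deposit pebbles here
SCRATCH     = slice(921_616, TAPE_LENGTH)  # working space for the rule books

tape = [0] * TAPE_LENGTH                   # one pebble (1) or none (0) per square

def press_button(tape, button):
    """The wired-up controller drops a pebble on its section of paper."""
    tape[CONTROLLER.start + button] = 1

def scan_frame(tape):
    """The modified TV reads its pixel data straight off the paper."""
    return tape[FRAMEBUFFER]
```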

A Chronic Concern

The only problem with such an arrangement is that it would literally take the Searle-machine millions of years to complete a single iteration of the game loop, and most modern games perform sixty such loops each and every second.

The Chinese Room thought experiment rests upon the assumption that Searle himself, in the role of the “human computer”, doesn’t perceive the understanding that the behaviour of the system suggests. That it would take a Searle-machine many decades to produce a single response within a longer simulated conversation compels us to question whether this assumption was a reasonable one to begin with.

Imagine sitting in front of a cathode-ray tube that is displaying a photograph of Alan Turing. Indubitably you would happily say that you perceive a photograph of Alan Turing; you may even say that you perceive Turing himself. Now imagine slowing down the mechanics of the television set by a factor of a billion, which is the speed ratio between a Searle-machine and an ordinary desktop computer.

Instead of watching as the electron beam re-draws the display sixty times per second, you would watch it re-draw the display once every one hundred and ninety-three days. At any point in time, the CRT would be displaying a single bright dot of a particular colour; the rest of the screen would be black.
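A quick back-of-the-envelope check of that figure, assuming sixty refreshes per second and the billion-to-one slowdown quoted above:

```python
# At a billion-to-one slowdown, each 1/60-second refresh of the
# display stretches to roughly half a year.
SLOWDOWN = 1_000_000_000                 # Searle-machine vs. desktop computer
refresh_seconds = (1 / 60) * SLOWDOWN    # one refresh, slowed down
refresh_days = refresh_seconds / (60 * 60 * 24)
print(f"{refresh_days:.0f} days per refresh")  # -> 193 days per refresh
```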

It would clearly be impossible for a human being to perceive a photograph of Alan Turing on such a television set. Yet the only aspect of the system that has changed is temporal; speed up the television set by a factor of a billion and the image of Turing would flicker back into view. Why should we presume that a human being should be able to understand a computation that has been slowed down by a similar factor before we declare that the system itself understands?

Simulation or Duplication

Searle argues that advocates of Strong AI have missed the point insofar as computer simulations are concerned. A computer simulation of a thunderstorm, for example, cannot be expected to duplicate a real thunderstorm; there is no threat of real wind and rain originating within the computer and devastating the room in which it is housed.

This question harks back to the very philosophical quagmire that Alan Turing was attempting to avoid by introducing his behavioural test for intelligence; a question that he called “too meaningless to deserve discussion”. Without a means to differentiate actual understanding from simulated understanding, we must declare the two identical.

Our experience of the world seems to us to be direct, yet everything we sense comes to us via streams of “meaningless symbols”, as Searle would have it. Semantic information isn’t a property of the external world; our “direct experience” of a photograph of Alan Turing is a fraud perpetrated by millions of light-sensitive cells in our retinas and interpreted by the warm grey lump of meat inside our skulls.

We should not be surprised to peel back the onion only to find more layers of onion. It may be syntax all the way down, and our personal sensation of understanding may be something we will all attribute to machines of the future, however reluctantly.
