Philosophy of AI: The Turing Test and Chinese Room Argument

KTH AI Society
Nov 29, 2021

What is Artificial Intelligence? ‘Artificial’ is a relatively well-defined term, but intelligence as a concept is often ambiguous. Many articles provide a fairly positive outlook on the progress of AI in recent years and on how intelligent these systems appear to be (e.g., GPT-3). However, what determines whether an artificial agent is truly intelligent? Most wouldn’t go so far as to call modern deep learning models intelligent, but why not? Recent language models are able to produce text which is, at times, indistinguishable from human prose. Perhaps a model only capable of generating text is not sufficient to meet our criteria for an intelligent agent, but what if that agent could emulate other elements of human behavior? Would it then meet the requirements for intelligence? That is, if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. Or is there something fundamental about human intelligence which simply cannot be modelled through any form of computation? From the Turing test to the scaling hypothesis, these tough questions have been discussed for decades in the AI community.

In this article, I will introduce several fundamental concepts in the philosophy of AI, namely the Turing test and the Chinese room argument, which aim to provide insight into these questions. Both of these thought experiments provide us with a way to think about intelligence in artificial agents. As a consequence, they are naturally built upon several fundamental assumptions about how intelligence and consciousness emerge from the human mind/brain. Central to the Turing test, for example, is the idea that a computational model that behaves indistinguishably from a human meets the criteria of an intelligent machine; i.e., it is intelligent in the same way that a human is intelligent. John Searle uses the Chinese room argument to criticize this line of reasoning, and in doing so makes his own assertions about the nature of intelligence. With that in mind, the aim of this article is twofold:

  1. To introduce you, the reader, to several fundamental concepts in AI which are vital to understanding the field
  2. To encourage reflection not only on the intelligence of artificial agents, but also of your own intelligence and consciousness. What assumptions do you make about the relationship between your own body and mind?

The Turing Test

Imagine you are currently engaging in two conversations on an online messaging system. You have been notified previously that one of your conversation partners is a computer and that the other is human. Your task is, through text-based conversation (i.e., passing of written messages), to identify which conversation partner is a computer and which one is a person. If you can perform this task consistently and successfully, then the computer does not pass the Turing test. Conversely, a computer passes the Turing test if it can consistently trick you into thinking it is human.

Simple outline of the Turing test. Both A and B are trying to convince C that they are human. If C cannot reasonably distinguish between the two, then we can say that A has passed the Turing Test. Image source: https://en.wikipedia.org/wiki/Turing_test#cite_note-1

Although the original test described by Alan Turing in 1950 [4] looks different from this standard interpretation, the principle is the same. A computer that is capable of communicating with a person in a manner indistinguishable from person-to-person communication passes the Turing test. What are the implications of this? The fundamental philosophical presupposition underlying the Turing test is simply: if a computer can behave like a human, then it can think like a human. Hence, if a computer can pass the Turing test, then this indicates the existence of an intelligent machine.

Criticisms

In the age of the internet, potential fallacies in the Turing test can be found more easily than in the 20th century. Cleverly written rules and human irrationality have convinced many people that the chatbots they are communicating with online are real people. Even with chatbots that have no ability to learn and are clearly not what we would consider “intelligent”, it is surprisingly easy to trick people into believing that their conversation partner is real. As early as 1966, the program ELIZA [1] was able to trick many people into believing it was human. Naturally, it’s easy to criticize chatbots and programs like these. It’s fair to say that, given sufficiently rigorous evaluation, most people would be able to tell that the program they’re speaking to is not human.
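To make this concrete, here is a minimal sketch of how an ELIZA-style chatbot works: hand-written pattern-matching rules that reflect the user’s own words back as questions. The specific rules and phrasings below are illustrative assumptions, not Weizenbaum’s actual script, but the mechanism is the same: there is no learning and no understanding, only pattern matching.

```python
import re

# Hypothetical ELIZA-style rules: a regex pattern paired with a response
# template that reuses the captured text from the user's message.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "How long have you felt {0}?"),
    (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, message, re.IGNORECASE)
        if match:
            # Echo the user's own words back inside a canned question.
            return template.format(*match.groups())
    # A vague fallback keeps the conversation moving when nothing matches.
    return "Please, go on."

print(respond("I am tired of chatbots"))  # Why do you say you are tired of chatbots?
print(respond("What is intelligence?"))   # Please, go on.
```

Despite its simplicity, this reflect-and-deflect trick is surprisingly convincing in short conversations, which is precisely why it complicates the Turing test as a measure of intelligence.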

A more damning criticism of the Turing test, on the other hand, is the Chinese Room argument.

The Chinese Room Argument

Thus far, we have operated under the assumption that a computer that can behave like a human meets the requirements of an intelligent machine. John Searle disputes this point and asserts that a computer cannot be intelligent, or at least cannot be proven to be intelligent. In 1980, he presented the Chinese room argument to illustrate his point [5].

Describing the Chinese Room

Imagine you are in a room containing a set of instructions for holding a conversation in Chinese, along with plenty of material to help you follow the instructions (e.g., paper, pens, etc.). Through a slot in the door, you can receive a note containing some Chinese text. Your task is to use the provided instructions to respond in perfect Chinese. So long as the instructions are of sufficiently high quality, the note you hand back through the door contains text equivalent to what one might expect a native Chinese speaker to write. The person outside might well assume they have been interacting with someone who speaks Chinese.

However, it is easy to see that, although you are producing perfect Chinese text, you do not truly understand what you are writing. Assuming you are not a native Chinese speaker, you do not understand any of the conversation taking place. Now envision a computer that can perform this task. A Chinese speaker can type in any sentence and receive a logical, coherent response in Chinese. If this computer were to be used in a Turing test, it would no doubt pass. However, can we say that this computer is intelligent? That it understands Chinese simply because it can perfectly produce it when requested?
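The room’s rule book can be caricatured as a program in a few lines. The toy sketch below reduces it to a lookup table; the phrases and their translations are illustrative assumptions, and a real rule book would of course be vastly larger. The point is that the program manipulates symbols it attaches no meaning to, which is exactly Searle’s objection.

```python
# A toy "Chinese room": the rule book is just a table mapping input
# symbols to output symbols. The program never interprets the symbols;
# it only follows the instructions mechanically.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(note: str) -> str:
    # Look the note up in the rule book; fall back to a stock reply
    # ("Please say that again.") when no rule applies.
    return RULE_BOOK.get(note, "请再说一遍。")
```

From outside the room, `chinese_room` is indistinguishable from a (very limited) Chinese speaker, yet nothing in the table or the lookup could be said to understand Chinese.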

To John Searle, this thought experiment shows that it is not possible for a computer to be conscious or to have a true understanding of what the program it is running is producing.

Strong and Weak AI

The two positions underlying the Turing test and the Chinese room argument are often framed as the philosophical positions of Strong and Weak AI. The modern conceptions of strong and weak AI refer to the generalizable competence of artificial intelligence: a strong AI can generalize and perform well on many different tasks, while a weak AI is only effective for specific tasks. The original philosophical definitions provided by Searle, however, are slightly different:

  • Strong AI: Given the previous thought experiment, the machine that truly understands Chinese is a strong AI. In a strong AI, the model of the mind is truly a mind.
  • Weak AI: The model of the mind is merely a simulation of the mind, the machine itself has no consciousness or understanding of its own behavior.

Searle believes that Strong AI cannot exist, and that there is something fundamental about the mechanisms of the human brain that cannot be simulated through computation alone. It is worth noting that this does not imply that there is some metaphysical element to human intelligence that cannot be modelled in any way, simply that it cannot be modelled through any form of computational model.

Criticisms

It seems reasonable to assume that an AI could pass the Turing test without necessarily understanding what it is doing, i.e., that it is possible to create a sufficiently powerful simulation of the mind. However, the claim that it is impossible to build an AI that can, in itself, be conscious is more contentious. We refer to [2] for a deeper discussion of these concepts. At the heart of Searle’s argument lies the assumption that there is something unique about the way the human brain works that cannot be replicated through any computational model. There is not necessarily any strong evidence that this is the case, and this position stands at odds with philosophies of mind such as computationalism, which argues that the mind itself is an information-processing system; see [3] for further discussion.

In the interest of keeping the post a reasonable length I won’t discuss Searle’s response to these criticisms, but I recommend reading his 1990 article, “Is the Brain’s Mind a Computer Program?” [7], to gain a deeper understanding of his position.

Conclusion

The Turing test is a way to test whether a machine is intelligent, based on the premise that if a machine’s behavior is indistinguishable from a human’s, then it can be considered intelligent. John Searle challenges this test with the Chinese Room argument, arguing that a machine exhibiting intelligent behavior does not necessarily constitute an intelligent machine, as a sufficiently complete set of instructions could allow anyone to replicate intelligent behavior. To move past this argument, we need to re-evaluate our understanding of intelligence, consciousness, and the mind. If you are a computationalist, you may stick with the Turing test as a good test for artificial intelligence. If you believe in a fundamental separation between body and mind, then you may conclude that strong artificial intelligence is simply not possible, as a mind cannot be created from the physical world.

The question we started this blog post with was “What is Artificial Intelligence?”, which we refined somewhat into “What determines whether an artificial agent is truly intelligent?”. Unfortunately, we did not get all that far in answering these questions. Concepts like intelligence and consciousness are so ambiguous that they don’t lend themselves to scientific, logical conclusions. It is impossible to prove whether an artificial intelligence is truly intelligent unless we make very strong and contentious assumptions about these concepts.

The way forward, perhaps, is to provide clearer, scientifically grounded theories of consciousness and intelligence. If you’re interested in that, a good start would be to read Daniel Dennett’s book, Consciousness Explained [6]. However, as is often the nature of the relationship between science and philosophy, the closer we move towards observable phenomena, the further we move away from the underlying concept we are trying to explain and understand. Perhaps a superintelligent AI will emerge in a few hundred years and solve some of these problems for us. Until then, you can draw your own conclusions about AI, consciousness, and the nature of intelligence. And let us know what you come up with, we’re eager to hear your thoughts :)

Author

Nathan Bosch is the Head of Education at the KTH AI Society, MSc student in Machine Learning at the KTH Royal Institute of Technology, and R&D Intern at Ericsson. You can reach him on LinkedIn or by email at nathan@kthais.com

References

[1] Weizenbaum, J. (1966). ELIZA — a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.

[2] https://plato.stanford.edu/entries/chinese-room/

[3] https://plato.stanford.edu/entries/computational-mind

[4] Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

[5] Searle, J. R. (1980). Minds, brains, and programs. Behavioral and brain sciences, 3(3), 417–424.

[6] Dennett, D. C. (1993). Consciousness explained. Penguin UK.

[7] Searle, J. R. (1990). Is the brain’s mind a computer program?. Scientific American, 262(1), 25–31.
