The Chinese Room Thought Experiment — An Existential Philosophical Problem

Zoe Eng
Published in Words for Thought
Aug 21, 2022

The rise of artificial intelligence has brought with it everlasting debates and questions. We as humans like to know exactly where the line is drawn: when can a computer be considered intelligent?

The Chinese room thought experiment serves as one of the best-known counters to the idea of genuine artificial intelligence. Created by John Searle, a philosopher at UC Berkeley, a version of it goes as follows:

Say that a computer was tasked with convincing a native Chinese speaker that it, too, was fluent. To do so, it was given a complete program of instructions for interpreting messages and formulating replies. Once the native speaker slipped a piece of paper under the door, the computer would look up each character in its program to decode the message and reverse the process to reply. Say that, eventually, the computer was able to convince the native speaker 100% of the time. In this case, would the computer know Chinese?
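The program in the room can be pictured as nothing more than a lookup table: symbols in, symbols out, with no comprehension in between. Here is a minimal sketch of that idea; the messages, replies, and the `RULE_BOOK` name are hypothetical placeholders, not part of Searle’s original formulation.

```python
# A toy "Chinese room": a rule table maps each incoming message to a
# scripted reply. The program only matches symbols; it attaches no
# meaning to them. All entries here are illustrative placeholders.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def room_reply(message: str) -> str:
    """Look up the incoming symbols and return the scripted reply."""
    # Fall back to "Please say that again." for unknown inputs.
    return RULE_BOOK.get(message, "请再说一遍。")

print(room_reply("你好吗？"))  # prints "我很好，谢谢。"
```

A fluent speaker on the other side of the door might be satisfied with these replies, yet nothing in the code understands a single character — which is exactly the intuition the thought experiment trades on.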

Take some time to think about it. Like many of these thought experiments, there is no right answer. It could be said that the computer doesn’t know Chinese, since it is only following its programming. It could also be said that, really, communication is just processing the information given and formulating a reply. Since the speaker is convinced 100% of the time, does it really matter? From the other side of the room, the native speaker is certain they are speaking to another person; only once you open the door does the perspective shift. Furthermore, perhaps there is a difference between being able to “communicate” in a language and understanding a language. Since the computer has to look up each message, it cannot really “think” in Chinese and therefore does not know the language.

Searle draws a distinction between “weak” and “strong” AI. A weak AI is one that has no conscious thought: it is simply a computer carrying out what it was programmed to do, and it only simulates intelligence. A strong AI, on the other hand, actually has the ability to think and understand. With the Chinese room thought experiment, it seems Searle considers the computer an example of a weak AI.

The thought experiment can get more complicated, though. Say that instead of a computer receiving inputs and giving outputs, there is a person doing so. This person has the translation books in front of them and is also able to convince the native Chinese speaker 100% of the time. Do they know Chinese?

Now this one may seem easy. Clearly the person does not know Chinese. It’s akin to traveling to another country with a pocket dictionary and stuttering through conversations. But it does get deeper.

Say that the person reading the books is able to completely memorize them. Now, when they receive the Chinese input, they search through their mental library to translate and write a response. Again, the native speaker is always convinced. Does the person know Chinese?

The question not only challenges the ideas of artificial intelligence and computer programming, but also what we consider fluency in a language. At what point is a person or computer truly able to “know” a language? Perhaps these questions, like the Chinese room experiment, will stay unanswered, at least for the time being.

Sources:

https://iep.utm.edu/chinese-room-argument/
