Well, I thought it was interesting: AI and The Chinese Room.

Spencer Gall
5 min read · Sep 14, 2023


Image generated using Nightcafe.studio with the prompt "The Chinese Room Argument." Well, it's a Chinese room, I guess, but it almost seems a little offensive.

I think I should preface this article by pointing out that I am no AI expert, and I have no degree in computer science.
Neither am I a philosopher; I have not spent years pondering the nature of consciousness and what it means to be truly intelligent. I am just some guy who happened to remember something that I thought was interesting.

It’s been quite the year so far

2023 has been a year filled with articles written about AI; we have finally reached a tipping point where widely available AI tools are capable of feats that some thought would forever remain beyond the reach of mere machines.
No longer is AI a term seen only in sci-fi; it is an everyday topic of conversation for many people in a wide range of fields, from education to medicine to entertainment. AI tools can help us edit video footage quickly, enhance older material, and write short essays (of variable quality), and they have opened the way to a variety of new methods for analyzing and manipulating large data sets.

With all of the breathless coverage on AI I have been reading, there has been a quiet, nagging sensation in the back of my mind, some partially forgotten memory that was trying to surface. Try as I might, I could never quite catch hold of the elusive memory, and it has been haunting me ever since.
Finally, the memory surfaced, triggered, of all things, by an article about the beleaguered video game “Vampire: The Masquerade — Bloodlines 2” and the fact that its development has been handed from Hardsuit Labs to The Chinese Room.

“The Chinese Room!”
There it is; at long last, I can banish this ghost from the back of my mind. The memory that had been hiding from me all this time was a thought experiment by the philosopher John Searle about artificial intelligence, consciousness, and understanding. The Stanford Encyclopedia of Philosophy has a good analysis and breakdown of the idea here, should you wish to dig deeper.

So what is the Chinese Room Argument, and why should I care?

We, as a species, seem to be particularly prone to exaggeration, misunderstanding, and catastrophic panic. One need look no further back than the 2020 toilet paper shortages that many areas experienced; a respiratory virus was spreading worldwide, and people decided that the best response was to panic-buy massive quantities of toilet paper and hoard it or attempt to resell it for profit.
Is it any wonder that there has been so much concern around the new AI tools we have developed? Is it surprising that some people have become convinced that AI is now sentient and only a short jump away from trying to replace us all?

Given our propensity for panic and the fact that AI is capable of some rather fascinating things, I am of the opinion that clear and careful thinking on the matter is more important than ever. In particular, I think that the people who have become convinced of AI’s sentience are fooling themselves and seeing something that isn’t there. While I am not a computer scientist, what AI can currently do still seems a far cry from actual intelligence; it is impressive, yes, but consciousness is not yet present.

The Chinese Room Argument directly addresses a major factor in how we can so easily fool ourselves into thinking that a computer program is intelligent like we are. Searle’s thought experiment is described as follows in the Stanford Encyclopedia of Philosophy:

“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols, which, unknown to the person in the room, are questions in Chinese (the input). Imagine that by following the instructions in the program, the man in the room is able to pass out Chinese symbols, which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese, but he does not understand a word of Chinese.”
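
To make the quoted scenario concrete, here is a deliberately crude sketch of the room as a lookup table, written in Python. The Chinese phrases and canned replies are made-up placeholders of my own, not part of Searle's example; the point is only that the procedure can produce sensible-looking answers while understanding nothing.

```python
# A caricature of Searle's room: the "program" is a rule book mapping
# incoming symbol strings to outgoing symbol strings. The operator only
# needs to match symbols and copy out the prescribed response.
# (These phrases are invented placeholders, not a real dialogue system.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely today."
}

def chinese_room(symbols_in: str) -> str:
    """Mechanically follow the rule book; no step requires understanding."""
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # Correct answer, zero comprehension.
```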

Like the English speaker in the Chinese Room, our computers do not actually understand the tasks they are doing; they do not learn information and internalize it the way we do. Rather, they follow a set of instructions that allows them to manipulate symbols (words, numbers, etc.) to generate the “correct output” that we are looking for. ChatGPT, for instance, does not search the internet when you ask it something; it was trained on an enormous amount of human-written text, and from that training it learned which words are most statistically likely to go together. It writes an essay one word at a time, not knowing where it is trying to go with the story, but instead simply calculating what is most likely to come next.
A lot of people have written about this feature of modern chatbots.
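
For the curious, here is a minimal sketch of that one-word-at-a-time idea: a toy "bigram" model that counts which word follows which in a single training sentence, then generates text by repeatedly sampling a statistically likely next word. Real systems like ChatGPT use enormous neural networks trained on vast amounts of text, but the generation loop is conceptually the same.

```python
import random
from collections import defaultdict

# Toy "language model": record which words follow which in the training
# text, then generate by sampling a likely next word, one word at a time.
training_text = (
    "the room contains boxes of symbols and the room contains a book "
    "of instructions and the man in the room follows the instructions"
)

counts = defaultdict(list)  # word -> list of observed next words
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Emit one word at a time, with no plan beyond the next word."""
    out = [start]
    for _ in range(length):
        candidates = counts.get(out[-1])
        if not candidates:
            break  # dead end: the last word was never followed by anything
        # Duplicates in the list make frequent successors more likely.
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The output is locally plausible and globally aimless, which is rather the point: at no step does the program know, or need to know, what the words mean.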

The obvious difference between a human in the Chinese Room and a computer performing a similar task is that computers have the kind of processing power that would make even a genius blush. Computers are simply so fast at these tasks that they can fool us into thinking they are far more intelligent and capable than they really are.
This is why a modern AI may be able to pass the Turing test: it may fool a human into thinking it is another human, but that still does not represent actual intelligence.

AI is not currently capable of having unique, original thoughts; it is not able to solve novel problems or create new solutions to old problems. Modern AI is still entirely constrained by what its creators have done and what we can train it to do. It can only reflect what we have already accomplished, for better or worse.

Make no mistake: AI is capable of some truly incredible things today, and it is entirely possible that one day we will successfully build a machine that is just as intelligent as humans, or more so. What we have today, however, is still a far cry from such a thing. Modern AI systems are still nothing more than extremely fancy programs that perform specific tasks. Sometimes they perform those tasks very well, other times very poorly, and yet other times the program tries to convince you to leave your wife, which is certainly odd but hardly a sign of an evil super-intelligence.

Philosophers come up with some really strange and interesting ways of thinking about things. These thought experiments often help us to look at and think about problems in new ways, something that AI cannot yet manage.

I don’t know about you, but I find these kinds of things very interesting.


Spencer Gall

A Canadian medical graduate looking to educate, tell stories, and figure out his life. Not necessarily in that order.