CAN A MACHINE THINK?

Better (not) ask it!

Written by Caio Dallaqua and Sofia Meirelles

Whether from fear or amazement, Artificial Intelligence is a subject that has captured people’s imagination in the 21st century. Its everyday applications show up in route finding, language translation, virtual assistants, autonomous vehicles, and significant advances in medicine. Yet it seems the (speculative) fear of machines taking control overshadows its many applications. So we wonder: is this apprehension justified?

The debate on this issue is a long one. For Marvin Minsky, one of the central names in the field of Artificial Intelligence (AI), it’s only a matter of time before machines exceed human capabilities. Minsky belongs to the group of researchers who believe in the possibility of machines (someday) being conscious. Generally speaking, this is what we call Strong AI. On the other hand, people such as the philosopher John Searle [1] argue that when a computer performs a task that humans also perform, it does so in a totally different way, without understanding what it is doing.

In this view, even if machines perform tasks as well as (or even better than) humans, that does not mean they are aware of what they are doing.

Nice, but what does it mean to “be aware of what they are doing”? To explain this, we will present a thought experiment known as the Chinese Room, which opposes the Strong AI thesis: the claim that machines are capable of understanding data and responding to it accordingly. It is fundamental to see here that this understanding is not limited to the mere reproduction of human behavior, but involves having mental states like ours, one might say, a consciousness like ours. According to biological naturalism, these mental states include beliefs, memories, desires, free will, intentions to act, and so on.

Although this criticism is directed at Strong AI, it also reaches philosophical views such as functionalism [2], whose slogan is:

software is to hardware as mind is to brain.

The Chinese Room argument goes against the software and hardware analogy as an explanation of the mind-brain relationship. Here is the argument: imagine Julie in a room. She knows nothing of the Chinese language, but she understands English. The room has two openings: through one she receives papers written in Chinese (input), and through the other she delivers papers (output), also in Chinese, in response to what was received. Inside the room, Julie has access to a book full of rules, written in English, about what to do with incoming papers, such as: “A paper containing X must be answered with a paper containing Y”, where X and Y are Chinese messages.

Chen is outside the room and interacts with Julie through the openings, having no idea who is inside. He puts in a message in Chinese and receives another as output. He keeps doing that, enjoying the chat. As Julie keeps answering properly, Chen becomes increasingly likely to think that the person inside the room is a Chinese speaker. Simply put, Julie is deceiving him, because it only looks like she knows Chinese. Although correct, her answers are purely formal: she produces them by manipulating formal symbols (syntax), not by grasping the Chinese meaning (semantic content).

Authorial drawing by Sofia

So the situation above tells us: well-defined rules relating input to output are enough for an outside observer, such as our dear Chen, to think that a Chinese speaker is inside the room. Well, computers are very good at “given X, return Y” association rules.
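To make that point concrete, here is a minimal sketch of Julie’s rule book as a lookup table (our own toy illustration; the messages and rules are invented for this post, not part of Searle’s argument). The program answers “correctly” while manipulating nothing but symbols:

```python
# A toy "Chinese Room": pure symbol manipulation (syntax), no meaning (semantics).
# The rule book below is invented for illustration only.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "The weather is nice."
}

def answer(incoming_paper: str) -> str:
    """Return whatever paper the rule book prescribes for the incoming symbols.

    Nothing here "understands" Chinese: the function only matches shapes
    (dictionary keys) and hands back the associated shapes (values).
    """
    return RULE_BOOK.get(incoming_paper, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(answer("你好吗？"))  # looks like a fluent reply, but it is just a lookup
```

From Chen’s side of the wall, a reply produced this way is indistinguishable from one written by a fluent speaker.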

But what if Julie isn’t in the room… instead, there’s a robot!

Then, just as Julie had no idea what she was doing, a machine would need no understanding of Chinese at all, even if it were perfectly simulating a native speaker.

Authorial drawing by Sofia

What can we conclude so far? Remember what we presented as Strong AI: it is possible for a computer to have a genuine understanding of information if it is properly programmed for it. That is to say, the computer would not only reproduce human behavior, but would be aware of what it is doing, understanding not only the syntax (the Chinese symbols) but also the semantics (the meaning of those forms). However, what we see in the experiment does not meet these conditions: Julie does not understand the semantics, and answering the questions correctly does not make her know Chinese. The analogy puts Julie doing the work of the computer program, the manipulation of Chinese symbols, and yet she does not understand Chinese. So a computer doing the exact same task naturally does not understand either; it lacks comprehension, however correctly it handles the formal symbols.

What exactly supports this argument? What are its premises? First, computer programs are purely syntactic: they are defined by a formal structure of instructions about what to do with the data they receive. Second, unlike computer programs, the human mind has semantic content. Third, syntax by itself does not imply semantics. From these premises it follows that programs are not minds, but how so?

Now think with us, but by yourself: if both Julie and the machine rely on the same set of rules, Chen won’t notice the difference, right? Great. Let’s tell Chen that there is a human being inside the room when, in fact, there is only a machine. This trick leads Chen to think, based on the chat, that there is a conscious human inside the room (we’re fooling him!). Nevertheless, as we saw earlier, the machine needs no understanding to perform this task, and that undermines Turing’s test.

For those unfamiliar with the Turing test, it is an experiment proposed last century by Alan Turing. It states that if a computer impersonates a human being, behaving indistinguishably from one, then it must be intelligent. Thus, if a computer, when interrogated by human beings, is not recognized as a machine but rather as “one of us”, it passes the test and, according to Turing’s criterion, it is intelligent. For this to be validated, statistical confirmation is needed: it is not enough for one or two humans to interrogate the computer; there must be enough interrogators to justify a generalization.
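As a rough sketch of that statistical reading (our own simplification; the verdicts, the sample size, and the 50% threshold are illustrative assumptions, not part of Turing’s proposal), one could tally how often interrogators mistake the machine for a human:

```python
# A toy tally for the statistical reading of the Turing test.
# The 50% threshold and the example verdicts are assumptions for illustration.

def passes_turing_test(verdicts, threshold=0.5):
    """Return True if enough interrogators judged the machine to be human.

    Each entry in `verdicts` is one interrogator's judgment:
    True  -> "I was talking to a human"
    False -> "I was talking to a machine"
    """
    if not verdicts:
        return False
    fooled_rate = sum(verdicts) / len(verdicts)
    return fooled_rate >= threshold

# Example: 7 out of 10 interrogators were fooled by the machine.
print(passes_turing_test([True] * 7 + [False] * 3))  # True
```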

Turing’s test has a very strong presupposition: that behaving like a human implies having a mind

which is a behaviorist position that, according to the Chinese Room argument, confuses simulation with duplication [3]. In our thought experiment we have only a simulation: Julie simulates having the competence to speak Chinese, but does not duplicate it, since she does not actually have that competence. The claim that genuine understanding is being generated is therefore invalidated; a simulation of understanding does not duplicate, or actually create, genuine understanding. Good examples of this occur in computer simulations: simulating earthquakes on a computer does not cause a real earthquake, just as (believe it or not) simulating the beginning of the universe does not give rise to a new universe.

Like every argument that rests on philosophical foundations, the Chinese Room faces disagreement; it has not escaped the rule! As reasonable as the argument may seem, it is important to keep in mind that it is not a settled fact. Many philosophers and scientists keep working on the subject. The discussion has strengthened and sparked a range of debates about what consciousness is after all, how we gain knowledge, and what the relationship between mind and body is, among other curious issues in a scenario much larger than the question of whether robots can reach a state of consciousness similar to ours.

Despite all this, several questions remain. Is a machine rebellion possible? Or is what awaits us a time when we will live better lives thanks to AI? Truth be told, we don’t know what the future holds in this matter. But one thing already seems rather certain:

“If knowledge can create problems, it is not through ignorance that we can solve them.”

— Isaac Asimov

NOTES

[1] We do not intend to promote his image here. He is accused of sexual harassment and has been stripped of his emeritus status in Berkeley’s Department of Philosophy.

[2] In this view, what counts are inputs and outputs, regardless of whether the system is biological (humans) or silicon (robots). Suppose that you and a machine always receive the same input data (which can vary from case to case, but what arrives at you is always the same as what arrives at the machine at that moment). For advocates of this position, if the machine always provides the same output as you, then that machine would have some level of consciousness similar to yours.

[3] This would be an ontological implication, in the sense that one thing affects the existence of another; that is, behavior, understood as causal question-and-answer relationships and the functional role of responding appropriately, would imply the existence of the mental states of human consciousness.
