The Quest for Manufactured Minds

Can Artificial Intelligence Truly Exist?

Alexander Adam Laurence
Popular (Neuro)Science
5 min read · Oct 31, 2013


Suppose that a library of the universe exists, one that contains an accurate historical account of all matter that ever was. Where would one find any mention of intelligence, reason, or thought? How can these conscious states exist in a universe that is nothing but material?

Many have pondered this question for centuries, long before our modern understanding of matter. If we consider the brain as a causal machine, could one say that it cannot possibly contain consciousness? Be it artificial or natural, the problem persists: can machines ever possess true artificial intelligence? According to John Searle, the answer is a resounding ‘no’. Searle claims the reason lies within a famous thought experiment he devised himself, ‘The Chinese Room Experiment’.

Imagine a person, call them ‘person A’, locked inside a small room with a single door. Suppose that notes can be slipped beneath the door. Oddly enough, all incoming notes to person A are written only in ‘Chinese’ (I understand that ‘Chinese’ is not a single language, but bear with me), because the person outside (person B) can only speak ‘Chinese’. To make matters worse, person A can only speak English. Luckily, within the room lies a book containing a set of instructions on how to convert Chinese into English (and vice versa). This allows person A to communicate with person B without knowing any ‘Chinese’ at all, which would lead person B to conclude, falsely, that person A is a fluent ‘Chinese’ speaker. We know that is not true: person A has no real understanding of the language, so we call this ‘simulated intelligence’. John Searle calls the first position ‘strong AI’ (person A truly understands Chinese) and the second ‘weak AI’ (person A is only simulating an understanding of Chinese). Here, an AI is an artificial system that can successfully interpret, compute, and manipulate an input (the notes received in Chinese) into an output (the notes written back in Chinese).
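To make this concrete, here is a minimal sketch of person A’s situation viewed as a program. The phrases and the rule book below are invented purely for illustration; the point is only that symbols come in, a table is consulted, and symbols go out, with no understanding anywhere in the process.

```python
# A toy "Chinese room": replies are produced by table lookup alone.
# Nothing in this program understands what the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def person_a(note: str) -> str:
    """Follow the book's instructions; understand nothing."""
    return RULE_BOOK.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

print(person_a("你好吗？"))  # Person B receives a fluent-looking reply and is fooled.
```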

The raison d’être of this thought experiment is to analyse the room as a person plus a computer: the book is the computer, and its instructions are the AI program. Searle argues that, by this argument for ‘simulated intelligence’, ‘strong AI’ can never exist. Furthermore, if we expand upon Searle’s thought experiment and understand the room as a causal machine (i.e. a human brain) and its walls as the laws of physics, then, by Searle’s argument, consciousness cannot exist either. Only the illusion of consciousness is created, an illusion so convincing that it even fools itself. This ‘cold consciousness’ is a consequence of the simulated-intelligence argument, and it is exactly the implication of Searle’s (and his proponents’) position if we extrapolate the Chinese Room Experiment.

But to bring ourselves back to the original idea: I’m sure some readers have thought at one point, ‘If I ask a robot with AI what 1+1 is, does it really know what 1+1 is, or is it merely simulating an understanding of maths?’ And so: is the robot truly intelligent? Does it possess strong AI?

Regarding the implications for consciousness: three centuries ago, the German mathematician Gottfried Leibniz claimed that if we were to observe the human brain and explore it as if it were a mill we could walk through, we would find nothing that suggests any evidence of consciousness. Leibniz claimed that whatever we observed would be purely physical; it is only the effects of these physical parts that create the perception of consciousness. So, in the human brain, any conscious state such as being self-aware would come down to physical processes such as the firing of neural impulses along axons. This idea of Leibniz’s was a foundation for Searle’s Chinese Room Experiment.

However, Alan Turing held a view contrary to Searle’s. Turing claimed that machines would one day match, or even exceed, human understanding. In 1948, Turing created a ‘paper machine’ capable of playing chess. The ‘paper machine’ is simply a piece of paper with specific instructions for specific situations, and it was ‘intelligent’ enough that the person executing it needn’t even know how to play chess. Two years later, Turing devised an ingenious test, the ‘Turing Test’: if a computer can successfully convince a human that it is another human in an ‘online’ chat, then it must be considered ‘intelligent’. Searle, however, argued that passing such a test merely exemplifies what is known as ‘weak AI’ (simulated intelligence) and nothing more.
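A rough sketch of the ‘paper machine’ idea looks like this (the positions and moves below are illustrative, not Turing’s actual rules): someone who has never played chess can still produce reasonable moves simply by following written instructions.

```python
# A toy "paper machine": for each recognised situation, the sheet of
# instructions prescribes a move. The person executing it need not know chess.
PAPER_MACHINE = {
    "start":           "e2e4",  # open with the king's pawn
    "start e2e4 e7e5": "g1f3",  # develop the king's knight
    "start e2e4 c7c5": "g1f3",  # answer the Sicilian the same way
}

def next_move(history: str) -> str:
    """Look the position up on the paper; fall back to a default if it isn't listed."""
    return PAPER_MACHINE.get(history, "d2d4")

print(next_move("start e2e4 e7e5"))  # -> "g1f3", chosen with zero chess knowledge
```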

The problem I have with Searle’s argument is that there are only two possible outcomes: either the room (the sum of all its components), person A included, can speak ‘Chinese’, or it cannot. The experiment makes no effort to establish which is the case. Another problem is that Searle largely ignores the gestalt approach: he claims that if person A cannot speak ‘Chinese’, then the entire room (the sum of its components) cannot speak ‘Chinese’ either. To me, this is a logical fallacy. If I were to inspect a single neurone from my brain, I would not find or observe any conscious states, or any intelligence for that matter. Yet when it is organised with all my other neurones, cells, and everything else my body is composed of, my consciousness and ‘intelligence’ can be observed. This holistic view was largely ignored by Searle.

Finally, Searle and his proponents claim that even if, in the distant future, we were to create a machine that exactly reproduced every neuron, cell, and other bodily component of a human, this too would be an example of ‘weak AI’. In my view, this claim is in danger of being ill-judged, since cognitive science (and related fields) has not advanced far enough even to define cognition, or to agree on what constitutes “intelligence”. Nevertheless, Searle’s argument is elegant and still stands unrefuted more than three decades on. I’m sure cognitive science will spend much of the future attempting to refute Searle. Whether this will be a wasted effort (if Searle and Leibniz were correct) remains to be seen.
