Siri, are you there?

This is part 1 of a two-part parallel on the topic of Consciousness. The posts can be read in any order. Click here for part 2.

The question of whether a computer can be sentient is a thought-provoking one, and it has probably existed for as long as computers themselves. But how do you go about arguing that a computer can or cannot be conscious? You can start by deciding whether hardware has the potential to exhibit consciousness.

In 1980, the American philosopher John Searle published “Minds, Brains, and Programs” in Behavioral and Brain Sciences. In it, Searle presented the Chinese Room, an argument that served as the backbone of his belief that computers could never develop genuine consciousness.

The Chinese Room:

http://philosophyisawesome.weebly.com/uploads/2/5/0/2/25029532/3222576_orig.jpg

Imagine there exists a computer program that communicates in Chinese. The program communicates so well that even a native Chinese speaker would be convinced the computer understands Chinese. Now, imagine John is locked in a room with three things: paper, pencils, and a book containing a human-readable version of this program. On the walls are a slot through which John receives documents (input) and a slot through which he returns responses (output).

When a Chinese text is pushed through the input slot, John follows the program instructions in the book to create a response, which he then pushes out through the slot on the other wall. Based on the resulting output, any Chinese speaker would believe that whoever is in the room speaks perfect Chinese, when in reality John has no idea how to speak Chinese and is just following the book's instructions.
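The setup above can be sketched as a tiny program. This is a hypothetical toy, not anything from Searle's paper: the rulebook is reduced to a two-entry lookup table, and the `respond` function plays the role of John, matching symbols and copying back answers with no understanding of what they mean.

```python
# A toy "Chinese Room": a rule-following responder with no understanding
# of the symbols it manipulates. The RULEBOOK is a hypothetical stand-in
# for the instruction book; real rules would be vastly more complex.
RULEBOOK = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗？": "会。",   # "do you speak Chinese?" -> "yes."
}

def respond(text: str) -> str:
    """Do what John does: match the input symbols against the book's
    rules and copy out the prescribed answer, understanding nothing."""
    return RULEBOOK.get(text, "？")  # fallback when no rule matches

print(respond("你好"))  # the room produces fluent output either way
```

The point of the sketch is that nothing in `respond` knows Chinese; the behavior comes entirely from the rules, which is exactly the intuition Searle's argument leans on.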

To summarize, Searle leverages the Chinese Room to draw a parallel between knowing Chinese and being conscious. Using this analogy, he argues that a computer could never be sentient: even if the computer seemed sentient, it would only be executing a well-written program, with no real awareness of what it is doing, effectively emulating consciousness. Since its publication, the argument has continued to stir much debate.

4 essential flaws in Searle’s argument:

  1. Consciousness is on a scale
  2. Consciousness requires a development phase
  3. Emergent properties have required components
  4. The Chinese Room analogy misrepresents the problem

Consciousness is on a scale

The concept of consciousness is relatively easy to grasp but very difficult to define. Take animals, for example. Your dog is alive, it knows it needs to eat and drink, its actions are semi-predictable, and overall anyone could make a good argument that the dog is in fact conscious. But to argue that a dog and a human have the same level of consciousness would be foolish. After all, my dog is currently lying in its cage waiting for its next chance to poop in the house, while you and I sit here contemplating whether consciousness is a trait applicable to computers. The difference is real, and it is necessary to understand that consciousness exists on a sliding scale.

Consciousness requires a development phase

It is commonly believed that babies are not born conscious, but rather develop consciousness over time. This distinction is very important. To be born with consciousness would mean that consciousness is an emergent property of the brain's neuronal structure alone. That would mean a brain replicated in a lab, without a body, would naturally be conscious (assuming it was living). We don't know for a fact that this is not the case (humans have yet to perfectly reproduce a brain), but it seems very unlikely.

However, we do know that humans are perfectly capable of being conscious even if some of the brain's basic inputs, like sight or sound, are removed. This is because the brain is able to rewire its own structure to better utilize healthy, unused portions of itself. By doing so, the brain gains a better understanding of the remaining inputs accessible to it.

https://askabiologist.asu.edu/sites/default/files/resources/articles/nervous_journey/brain-regions-areas.gif

This exact phenomenon has been observed in blind people. One study showed the occipital lobe, the part of the brain dedicated to vision, adapting to respond to sound input instead of visual input. As a result, the brain gains a better understanding of the remaining inputs (in this case touch, sound, taste, and smell), resulting in a heightened sense of awareness and a higher level of consciousness. Helen Keller, who lost both sight and hearing in infancy, was nevertheless able to develop and achieve a high level of consciousness. However, Keller's ceiling on consciousness was arguably lower, given her almost non-existent awareness of both sound and sight. Take away enough of our senses, and the brain may not develop consciousness at all. All of this argues that consciousness is an emergent property of the brain system, which comprises the brain, its inputs, and the passage of time required for development.

Emergent properties have required components

By definition, an emergent property is a property of a system that none of its components has individually. If consciousness is truly an emergent property, it cannot exist without the integral components of its system.

In John's analogy above, he proclaims that regardless of the room's perfect Chinese output, he, inside the room, had no true understanding of the language. This is as silly as my stating: my brain cannot speak without my mouth, therefore my brain doesn't really understand how to speak. In reality, humans can speak because we possess both a brain and a mouth; no single part is solely responsible for the ability to speak. We can infer that speaking is an emergent property of having both a brain and a mouth. Remove either from the system, and the system will no longer function. By ignoring this principle, John was able to misrepresent the problem, turning it into a straw-man argument.

The Chinese room analogy misrepresents the problem

The analogy is constructed with the following analogs:

  • Computer case -> The room
  • CPU which runs the program -> John reading the book
  • Program -> The instruction book used to decipher Chinese

By setting up the analogy this way, he is able to ask the simple question “Do I know Chinese?”, to which the answer is an obvious no. Using this conclusion, he then asserts that it is the logical equivalent of the computer not being sentient.

The problem is that the analogy is not built properly.

The Chinese room, with John and the book inside, is able to “understand” Chinese because John can read the book, decipher the text, and output the result through the slot. If John and the book are not both inside the Chinese room, then the room no longer “understands” Chinese. This means that understanding Chinese is an emergent property of the Chinese room system, and that is where John made his mistake.

By singling himself out and asking “Do I know Chinese?” he separated himself from the system and thus destroyed his argument in the process. What the Chinese room argument proves is not that computers cannot have consciousness, but that there is not going to be one single component of hardware which is solely responsible for the consciousness a computer exhibits. Rather, consciousness will be an emergent property as a result of a collection of integral computer components.

Currently, research is being done into neuromorphic architectures: new computer chips that mimic brain functionality to a limited extent. The hope is that these chips will be the harbinger of true non-biological consciousness. Whether this approach will succeed is still unknown, but it remains an active topic of research among academics in the field of Artificial Intelligence.

All of this being said, the original question still remains: can consciousness exist as a result of non-biological processes?

For now, I'll leave that opinion up to you.

“The difference between me and my computer is that I don't have spell check”

This post was inspired by the Google Talks “Consciousness in Artificial Intelligence”.

Props to my editor Nathan Pilcowitz and Michelle Gooel