Chinese Room Argument: A Robot Cannot Feel Pain!

Devansh Mittal · Oct 5, 2019

Introduction

Building a human-like machine has always been the aim of Artificial Intelligence, an aim in which it has partially succeeded and in which it claims further progress will come. Whether we can build a machine that is human-like in behavior or performance is not in question. The problem arises when things like intentionality, feelings, emotions, understanding, and meaning come into the picture. From behavior alone, one cannot tell whether a machine actually feels emotions or understands the meaning of the words, statements, and symbols it is computing over, regardless of how convincingly it behaves as if it does.

Even if we want to talk about machines having feelings, emotions, understanding, pain, and so on, there exists no formal definition of these phenomena. This makes it difficult to discuss them in relation to machines and computational models.

In this essay I will discuss the "intentional" and "feeling-related" aspects of machines. I will not pretend to be neutral: I will defend the view that a computational model based on computation over any kind of representation can never have or realize intentional phenomena, qualia, feelings, pain, and the like. Such machines are impossible not just in practice but in principle.

I will then go further and survey theories proposed to explain how intentional phenomena, subjective experiences, qualia, and feeling-related aspects arise in human beings. Here I will refer to the "hard" and "easy" problems of consciousness, and argue that the various efforts of both Strong and Weak AI address only the "easy problems" of consciousness, while the "hard problem" remains untouched.

Computation and Pain

John Searle’s Chinese Room Argument

With the help of the Chinese Room Argument it can be shown that computation over any kind of representation is insufficient to realize intentionality, feelings, emotions, pain, and so on. Computation over representation is considered a promising theory of mind and is sometimes referred to as the "Computational Theory of Mind". In 1980, John Searle published "Minds, Brains and Programs" in the journal The Behavioral and Brain Sciences. In this article, Searle sets out the Chinese Room Argument.

The heart of the argument is an imagined human simulation of a computer, similar to Turing’s Paper Machine. The human in the Chinese Room follows instructions in English for manipulating Chinese symbols, whereas a computer “follows” a program written in a programming language. The human produces the appearance of understanding Chinese by following the symbol-manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does — manipulate symbols on the basis of their syntax alone — no computer, merely by following a program, comes to genuinely understand Chinese. If the argument is hard to grasp with the phenomenon of “understanding”, consider “pain” instead: there is no way for the above set-up, a human being with a rule book, to realize pain. And if this set-up cannot realize a subjective experience like pain, then no computational model that manipulates representations can realize any subjective experience. Thus, Strong AI is false.

The Chinese Room Argument can be understood pictorially in the following chart.

We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.

  • If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
  • I could run a program for Chinese without thereby coming to understand Chinese.
  • Therefore Strong AI is false.

The second premise is supported by the Chinese Room thought experiment. The conclusion of this narrow argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation).
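For readers who want the logical form spelled out, here is a minimal sketch in Lean checking that the narrow argument is valid: granting both premises, "Strong AI is false" follows. The names StrongAI, RunsProgram, and Understands are hypothetical labels introduced only for illustration, not anything from Searle's paper.

```lean
-- Minimal sketch of the narrow argument's logical form (hypothetical names).
-- Premise 1: if Strong AI is true, any system running the program understands Chinese.
-- Premise 2: the person in the room runs the program yet does not understand Chinese.
-- Conclusion: Strong AI is false.
example (System : Type) (me : System)
    (StrongAI : Prop)
    (RunsProgram Understands : System → Prop)
    (h1 : StrongAI → ∀ s : System, RunsProgram s → Understands s)
    (h2 : RunsProgram me ∧ ¬ Understands me) :
    ¬ StrongAI :=
  fun hStrongAI => h2.2 (h1 hStrongAI me h2.1)
```

The validity of the form, of course, is not where the controversy lies; the debate is over the truth of the premises, especially the second.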

The Chinese Room Argument was originally given to show that computation over any kind of representation lacks understanding. The same argument can also be used to show that while the human in the Chinese Room is manipulating symbols, there is no possibility of him experiencing any kind of "understanding" or "pain" in that task; in other words, "there is nobody to feel pain" in the system, so there is no pain.

Simple Explanation of “Chinese Room Argument”

The Chinese Room Argument primarily says that any computational model based on representation is "in principle" incapable of producing any human intentional phenomena or subjective, first-person experiences.

Searle first asks us to understand the nature of "computation". He says that a computation is nothing more than the combination of a "Rule Book" and an "Agent" who manipulates the input on the basis of that rule book. Pictorially, a computation is nothing more than what is shown in the following diagram.

After establishing this picture of computation, Searle asks the reader: where in the above setup is there any possibility of realizing human intentional phenomena or subjective experiences such as pain, qualia, emotions, or any kind of sensation?

Since there is no such possibility in this setup, Searle argues that computation over representation cannot "in principle" realize any human intentional phenomena or subjective experiences.
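To make the "Rule Book plus Agent" picture concrete, here is a minimal Python sketch. Everything in it (the tiny rule table, the names RULE_BOOK and agent) is hypothetical and exists only to illustrate the structure of the setup, not any actual system.

```python
# A minimal, hypothetical sketch of Searle's picture of computation:
# an agent mechanically applying a rule book to input symbols.

# The "rule book": input symbol strings mapped to output symbol strings.
# The entries are made up purely for illustration.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",
    "你叫什么名字": "我叫小明",
}

def agent(input_symbols: str) -> str:
    """Follow the rule book mechanically: match the shape of the input,
    return the prescribed output shape. Nothing here represents meaning,
    understanding, or pain; it is symbol matching and nothing more."""
    return RULE_BOOK.get(input_symbols, "对不起，我不明白")

if __name__ == "__main__":
    # To an outside observer the exchange may look like fluent Chinese,
    # yet the procedure is pure syntax.
    print(agent("你好吗"))
```

This is exactly Searle's point: nothing in the lookup, however large or sophisticated the rule book becomes, is a plausible site for understanding or for the experience of pain.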

Video Explanation of “Chinese Room Argument”

First video: https://www.youtube.com/watch?v=TryOC83PH1g

Second video: https://www.youtube.com/watch?v=_yJY2POA5E8

Further readings on the same

At this point one may also like to read one of my other posts on the same issue, for greater understanding.
Can a robot feel pain? — http://devanshmittal.wordpress.com/2010/02/09/can-a-robot-feel-pain/

One may also like to read the original paper by John Searle on the Chinese Room Argument: "Minds, Brains and Programs".

David Chalmers and the Hard Problem of Consciousness

When you look at this page, there is a whir of processing: photons strike your retina, electrical signals are passed up your optic nerve and between different areas of your brain, and eventually you might respond with a smile, a perplexed frown or a remark. But there is also a subjective aspect. When you look at the page, you are conscious of it, directly experiencing the images and words as part of your private, mental life. You have vivid impressions of colored flowers and vibrant sky. At the same time, you may be feeling some emotions and forming some thoughts. Together such experiences make up consciousness: the subjective, inner life of the mind.

The Hard Problem

Researchers use the word “consciousness” in many different ways. To clarify the issues, we first have to separate the problems that are often clustered together under the name. For this purpose, I find it useful to distinguish between the “easy problems” and the “hard problem” of consciousness. The easy problems are by no means trivial — they are actually as challenging as most in psychology and biology — but it is with the hard problem that the central mystery lies.

The easy problems of consciousness include the following: How can a human subject discriminate sensory stimuli and react to them appropriately? How does the brain integrate information from many different sources and use this information to control behavior? How is it that subjects can verbalize their internal states? Although all these questions are associated with consciousness, they all concern the objective mechanisms of the cognitive system. Consequently, we have every reason to expect that continued work in cognitive psychology and neuroscience will answer them.
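As a rough illustration of why these count as "easy" problems, here is a toy Python sketch (all names and thresholds are made up) of a system that discriminates a stimulus, integrates it with an internal state, and verbalizes a report. It is entirely a matter of objective mechanism, and it says nothing about whether anything is experienced.

```python
# Toy, hypothetical illustration of "easy problem" capacities:
# stimulus discrimination, integration with internal state, verbal report.

def discriminate(wavelength_nm: float) -> str:
    """Crude stimulus discrimination by wavelength band (made-up cut-offs)."""
    if wavelength_nm < 490:
        return "blue"
    if wavelength_nm < 580:
        return "green"
    return "red"

def verbal_report(wavelength_nm: float, internal_state: str) -> str:
    """Integrate the discriminated stimulus with a stored internal state
    and produce a verbal report about it."""
    color = discriminate(wavelength_nm)
    return f"I am seeing {color} and my current state is {internal_state}."

print(verbal_report(620.0, "alert"))  # produces a report; experiences nothing
```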

The hard problem, in contrast, is the question of how physical processes in the brain give rise to subjective experience. This puzzle involves the inner aspect of thought and perception: the way things feel for the subject. When we see, for example, we experience visual sensations, such as that of vivid blue. Or think of the ineffable sound of a distant oboe, the agony of an intense pain, the sparkle of happiness or the meditative quality of a moment lost in thought. All are part of what I am calling consciousness. It is these phenomena that pose the real mystery of the mind.

Knowledge Argument

To illustrate the distinction, consider a thought experiment called “The Knowledge Argument” devised by the Australian philosopher Frank Jackson.

According to the knowledge argument, there are facts about consciousness that are not deducible from physical facts. Someone could know all the physical facts, be a perfect reasoner, and still be unable to know all the facts about consciousness on that basis.

Frank Jackson’s canonical version of the argument provides a vivid illustration. On this version, Mary is a neuroscientist who knows everything there is to know about the physical processes relevant to color vision. But Mary has been brought up in a black-and-white room (on an alternative version, she is colorblind) and has never experienced red. Despite all her knowledge, it seems that there is something very important about color vision that Mary does not know: she does not know what it is like to see red. Even complete physical knowledge and unrestricted powers of deduction do not enable her to know this. Later, if she comes to experience red for the first time, she will learn a new fact of which she was previously ignorant: she will learn what it is like to see red.

Let me try to explain the argument again in different words.

Suppose that Mary, a neuroscientist in the 23rd century, is the world’s leading expert on the brain processes responsible for color vision. But Mary has lived her whole life in a black-and-white room and has never seen any other colors. She knows everything there is to know about physical processes in the brain — its biology, structure and function. This understanding enables her to grasp everything there is to know about the easy problems: how the brain discriminates stimuli, integrates information and produces verbal reports. From her knowledge of color vision, she knows the way color names correspond with wavelengths on the light spectrum. But there is still something crucial about color vision that Mary does not know: what it is like to experience a color such as red. It follows that there are facts about conscious experience that cannot be deduced from physical facts about the functioning of the brain.

Jackson’s version of the argument can be put as follows (here the premises concern Mary’s knowledge when she has not yet experienced red):

(1) Mary knows all the physical facts.

(2) Mary does not know all the facts.

— — — — — — — — — — — — — — — -

(3) The physical facts do not exhaust all the facts.

The "Knowledge Argument" has the following very important implications:

  1. Human subjective experiences are not illusory phenomena; they are as real as anything else.
  2. Human subjective experiences "in principle" cannot be captured in structural, functional, procedural, or material information, even if that information is given in the highest possible detail.
  3. Human subjective experiences "in principle" cannot be reduced to structural, functional, procedural, or material information, even if that information is given in the highest possible detail. This also implies that all reductionist explanations of consciousness are false!

One can put the knowledge argument more generally:

(1) There are truths about consciousness that are not deducible from physical truths.

(2) If there are truths about consciousness that are not deducible from physical truths, then materialism is false.

— — — — — — — — — — — — — — — — —

(3) Materialism is false.
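The logical form of this generalized argument is a simple modus ponens, as the following minimal Lean sketch checks (the proposition names are hypothetical labels for the premises above); any resistance to the conclusion must therefore be directed at one of the premises.

```lean
-- Minimal sketch of the generalized knowledge argument's form (hypothetical names).
-- Premise (1): there are truths about consciousness not deducible from physical truths.
-- Premise (2): if so, materialism is false.
-- Conclusion (3): materialism is false.
example (NonDeducibleTruths Materialism : Prop)
    (h1 : NonDeducibleTruths)
    (h2 : NonDeducibleTruths → ¬ Materialism) :
    ¬ Materialism :=
  h2 h1
```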

Indeed, nobody knows why these physical processes are accompanied by conscious experience at all. Why is it that when our brains process light of a certain wavelength, we have an experience of deep purple? Why do we have any experience at all? Could not an unconscious automaton have performed the same tasks just as well? These are questions that we would like a theory of consciousness to answer.

One should definitely watch the following TED Talk by David Chalmers in order to understand the Hard Problem of Consciousness.

For further research on the topic, the following resource by David J. Chalmers is a must-read. It surveys various issues in the mind-body problem, concludes that the "Hard Problem of Consciousness" is still unsolved, and points towards the possibility that "consciousness" may be an ontologically distinct entity.

Consciousness and Its Place in Nature — David J Chalmers

Conclusion

So we see that there are certain problems with the computational theory of mind:

  1. Problem of meaning/semantics: syntax alone cannot give rise to semantics. The Chinese Room Argument demonstrates this.
  2. Problem of intentionality: how can syntax be "about" anything? The Chinese Room Argument applies here as well.
  3. Problem of consciousness: as Chalmers argues, the computational theory of mind can at best address the easy problems, while the hard problem persists.
  4. Human subjective experiences are not illusory phenomena; they are as real as anything else.
  5. Human subjective experiences "in principle" cannot be captured in structural, functional, procedural, or material information, even if that information is given in the highest possible detail.
  6. Human subjective experiences "in principle" cannot be reduced to structural, functional, procedural, or material information, even if that information is given in the highest possible detail. This also implies that all reductionist explanations of consciousness are false!

At least in the case of a computational model based on computation over a representation, one can see that intentional and feeling-related aspects are not possible. The Chinese Room and other similar arguments show that intentionality, qualia, and feeling-related aspects are not realizable in a computational model.

After showing the limitations of the computational model, I discussed the research done so far on explaining how intentional and feeling-related aspects arise in human beings, and in particular the "easy" and "hard" problems of consciousness. Most efforts in AI (both weak and strong) are trying to solve the "easy problems" of consciousness; the "hard problem", as I have shown, remains untouched and unexplained.

In conclusion, there is so far no research, argument, or proof strong enough to show the existence of intentional phenomena or "feeling-related" aspects like "pain" in machines. Arguments from "computation over representation" have already lost the game; arguments from "structure" (such as the principle of organizational invariance) are far from being accepted.
