Machines That Feel

Max Hudson
7 min read · Dec 8, 2016


Qualia Zombies

A woman walked into a bar. She sat down on a stool and talked to the man next to her. Throughout the conversation she maintained eye contact and occasionally smiled and nodded. After 10 minutes had passed, she got up and left the bar. She walked to her car and got in.

Consider the possibility that over those 10 minutes, she did not feel anything. She didn’t feel the brass door handle when she walked in, or the wood stool when she sat down. She didn’t smell the onions on the man’s breath. She didn’t feel excited to hear about his day, despite appearing so. She wasn’t aware that she was in a bar, or even alive.

Qualia Zombie — a linguistic construct that is physically and functionally identical to a human but doesn’t subjectively experience sensations like pain or the color red.

This is an impossible scenario to imagine in detail. If she didn’t feel anything, she felt nothing. Being her would not be like anything you have experienced. It would be like nothing. Your imagination is only capable of projecting pieces of what you know onto a scenario. It cannot synthesize entirely new experiences — only combine or twist prior ones. Imagining a qualia zombie is like imagining nothing exists. Can you conceptualize a reality without space and time?

Furthermore, it is highly unlikely that this is a functional possibility. Consider somewhat related cases in humans.

There are numerous cases of people who have no pain sensation in their hands. They don’t feel anything besides pressure when someone pricks their finger. However, this missing sensation is always linked to a physical cause: their nervous system is damaged, or their brain doesn’t properly read nervous system signals. Their mental symptoms directly correlate with physical impediments. In other words, cases of people having physically functional hands, nervous systems, and brains, but reporting no related subjective experience, are few and far between.

Additionally, consider split-brain patients — patients who have had the fibers connecting the two hemispheres of their brain severed, often to treat epilepsy. People who have had this treatment have trouble verbalizing what they see in their left visual field. Information from the left visual field goes to the right hemisphere. However, language is generally handled by the left hemisphere, which no longer receives that information, so they can’t verbalize what they see even though they can react to it in other ways. For example, a split-brain patient can read the word DOG on a screen and draw a dog, even though they can’t say the word dog. The subject still experiences what is on the screen; they just have a limited ability to report that experience.

Examples of missing subjective experiences are consistently linked to missing physical functionality, and claims that people can imagine qualia zombies are incoherent. Based on this discussion, I’m going to operate on the assumption that qualia zombies are a non-issue.

If zombies are a conceptual possibility for you, you ought to strongly consider solipsism as your next philosophy. If zombies can exist, they probably do, and you have no way of knowing it.

The Chinese Room

John Searle made a name for himself by asking the following question in 1980 (paraphrased here): if an English-speaking man sits in a room, is given cards with Chinese characters on them, and is given instructions in English on how to reorder the cards to make a coherent sentence, does the man understand the final sentence? In other words, he is asking whether a purely instruction-following computer can deduce the meaning of, or understand, symbols it has no internal translation of. He intends for you to intuit the answer, “No.”

This is a dirty trick. Of course the man does not understand the final Chinese message. Showing that a human in a room cannot spontaneously translate a small number of symbols does not prove that a computer cannot understand anything. It is as bad as, if not worse than, saying that if you don’t understand calculus after seeing the solution to one calculus problem, you can’t understand anything.

The first problem here is the general scenario. No one who is trying to learn Chinese does it by being handed a bunch of symbols. No one is able to build a productive internal mapping between Chinese symbols and concepts without huge amounts of data or some supervision. We have to be told they’re related or be given a large number of examples of each symbol, and we generally start with the basics and move up from there. A much more useful question to ask is: “Can a computer theoretically show understanding similarly to, or as well as, a human when given the same lesson?”

The second problem here is how we talk about semantics, the meaning behind a word. How well one understands something ought to be measured on a spectrum, from zero to infinity. A relatively high level of understanding simply requires many connections between the concept being understood and other concepts or information. For example, to understand the semantic meaning of the word (symbol) dog, you need to be aware of images of dogs and relate dog to concepts of other animals. If you can answer the question “What is a dog?” with “A dog is like these other things (animals) and looks like this: (insert image of dog here),” I grant you the status of understanding dogs, at least to some extent. Obviously, associating more information, such as how something works or behaves, is useful for understanding.

The third problem here is the conception of what a computer is capable of and how it works. Granted, Searle proposed his thought experiment in 1980, but it’s still widely accepted as catastrophic for computational theories of mind. Enter machine learning. The computer program that The Chinese Room describes is not much more complex than a simple calculator; it is not a generalized learning machine. Computers are now capable of understanding the concept of a dog, to the extent described above, using networks of simplified artificial neurons called neural nets.
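
To make that concrete, here is a minimal sketch of the kind of learning involved. It is my own illustration, not Searle’s program or any particular library: a single artificial neuron trained by gradient descent to associate invented, hand-picked features with the label dog. Real neural nets learn similar associations from raw pixels at much larger scale.

```python
# A minimal sketch, invented for illustration: a single logistic neuron
# trained by gradient descent to associate toy features with the label "dog".
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: each row is [has_fur, has_four_legs, barks]; label 1 = dog.
X = np.array([
    [1, 1, 1],   # dog
    [1, 1, 0],   # cat-like: fur and four legs, no bark
    [0, 0, 0],   # chair
    [1, 1, 1],   # dog
    [0, 1, 0],   # lizard-ish
], dtype=float)
y = np.array([1, 0, 0, 1, 0], dtype=float)

w = rng.normal(scale=0.1, size=3)  # connection strengths, learned below
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on cross-entropy loss: connections that help predict
# "dog" are strengthened, ones that don't are weakened.
for _ in range(2000):
    p = sigmoid(X @ w + b)               # predicted probability of "dog"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print(sigmoid(np.array([1, 1, 1]) @ w + b))  # furry, four legs, barks -> near 1
print(sigmoid(np.array([0, 0, 0]) @ w + b))  # none of those -> near 0
```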

A concept is a relative construct that can be made up of images, words or symbols, sub-concepts, and other classes of information. The more information you have about the construct, the closer you are to understanding it. A computer is capable of both representing and generating such a concept graph; therefore, it is capable of understanding under the explanation provided above.
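
As a rough illustration of what a concept graph could look like in code, here is a small sketch. The class, the relation names, and the crude “understanding score” (just a count of associations) are assumptions made up for this example, not an established knowledge-representation format.

```python
# A minimal sketch of the "concept graph" idea above; everything here is
# invented for illustration, not a real knowledge-representation library.
from collections import defaultdict

class ConceptGraph:
    def __init__(self):
        # concept -> list of (relation, associated thing)
        self.edges = defaultdict(list)

    def associate(self, concept, relation, other):
        """Record one piece of information linked to a concept."""
        self.edges[concept].append((relation, other))

    def understanding_score(self, concept):
        """Crude proxy for understanding: how many associations exist."""
        return len(self.edges[concept])

    def describe(self, concept):
        """Answer 'what is X?' by listing what X is connected to."""
        return [f"{concept} --{rel}--> {other}" for rel, other in self.edges[concept]]

g = ConceptGraph()
g.associate("dog", "is_a", "animal")
g.associate("dog", "similar_to", "wolf")
g.associate("dog", "looks_like", "images of dogs")
g.associate("dog", "behavior", "barks")

print(g.understanding_score("dog"))   # 4
print("\n".join(g.describe("dog")))   # dog --is_a--> animal, and so on
```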

Take a baby and a computer that is built to learn. Show each of them many images of dogs. Show each the word dog repeatedly. They will both build associations between the two in similar ways. The main difference will be the materials each is made of, not how the two understand what the symbol dog means.

Machines That Feel

My position is simple. Consciousness in computers ought to be held to the same level of scrutiny as consciousness in the other beings of our universe. For now, we’ll discuss this under the assumption that it’s possible to be physically functional yet unconscious, but recall my assertion that qualia zombies are unlikely at best.

Our current standard for consciousness is human-like behavior. If it acts like me, it’s conscious. If it acts like me, it ought to be treated like me. We certainly apply this to non-human animals in principle (though rarely in practice). If it can behave as if it can suffer, we should treat it like it can suffer.

Most people are guilty of two fallacies: “If it’s not like me, it’s not conscious,” and, “If it’s like me, it’s conscious.” We are guilty of these for a good reason, though. If we were each skeptical of the next person’s consciousness, we wouldn’t be very productive. And we don’t really have a better option, since subjective experiences are not externally observable.

There is one metric that is likely to be more reliable than the rest at discriminating between the conscious and the unconscious, though: verbal reporting. If you ask me if I’m in pain and I say “Yes,” that is a better metric of my conscious state than my behavior (unless I’m lying for some reason). The answer to the question “Are computers conscious?” does lie in the Turing Test, despite what Searle would say. The Turing Test is the standard we hold humans to. It is the standard we ought to hold machines to.

The challenge here is building a computer that doesn’t just answer “Yes” based on deterministic rules when asked “Are you happy?” Rather, it should learn what the concept of happy is from others and compare itself to that concept, just as we do. It is a monumental challenge, but surely one we are up to, if you give computer scientists any credit for their work so far.
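
To illustrate the difference in miniature, here is a toy sketch contrasting a hard-coded answer with one derived by comparing an internal state to a concept of happy learned from observed examples. The state vectors, the prototype averaging, and the similarity threshold are all invented for the illustration; nothing here is a claim about how such a system would actually be built.

```python
# A toy contrast, invented for illustration only: a hard-coded "Yes" versus
# an answer produced by comparing the machine's own state to a concept of
# "happy" learned from observed examples. All numbers are made up.
import numpy as np

def hard_coded_answer(question):
    # The kind of deterministic rule the paragraph above warns against.
    return "Yes" if question == "Are you happy?" else "I don't know"

# "Learn" the concept happy from others: average several observed example states.
observed_happy_states = np.array([
    [0.9, 0.1, 0.8],
    [0.8, 0.2, 0.7],
    [0.95, 0.05, 0.9],
])
happy_prototype = observed_happy_states.mean(axis=0)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def learned_answer(internal_state, threshold=0.95):
    # Compare the machine's current internal state to the learned concept.
    similar = cosine_similarity(internal_state, happy_prototype) >= threshold
    return "Yes" if similar else "No"

print(hard_coded_answer("Are you happy?"))           # always "Yes"
print(learned_answer(np.array([0.85, 0.15, 0.75])))  # close to the prototype -> "Yes"
print(learned_answer(np.array([0.1, 0.9, 0.2])))     # far from it -> "No"
```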

Consider the following questions — they aim to illustrate my points about understanding, the true subjectivity of subjective experiences, and consciousness:

  • What comes to mind when you think of the word dog? How about big or small, or heavy?
  • What does seeing the color red feel like to you? What does warmth or coldness, or itchiness feel like to you?
  • What is the difference between looking at your hand and imagining a hand with your eyes closed?
  • Does pain really feel objectively bad to you when you pay close attention to it or is it mostly just an intense sensation and hard not to focus on?
  • When you want dessert, do you choose to want dessert, or do you simply observe you have that desire? When you see there are no Oreos left and you get upset, do you choose to get upset, or do you simply find yourself upset? Are you, your conscious self, able to do anything other than observe an internal state or process?

There will always be some doubt about the consciousness of computers. There will always be some doubt about the consciousness of others. There may even be doubt about the reality of your own consciousness (see Dan Dennett). Doubt is a healthy practice. It leads to discovery. Fortunately for both philosophers and computer scientists, there is much to be doubted and much to be discovered.

