Why AI Consciousness is Doomed

Lakshay Akula
6 min read · Aug 24, 2016


Go World Champion Lee Sedol makes the first move in a game of Go against AlphaGo, a Google-developed artificial intelligence computer program (Reuters/Google/Yonhap)

Because of the nature of consciousness, we will never know whether machines are conscious, and this has severe ramifications for humanity

Humanity’s Existential Crisis

Intuition guided Lee Sedol as he placed the smooth, black stone on the wooden board. Intuition, honed by a lifetime of practice. Intuition, reserved solely for humans until this past March, when AlphaGo, a Google-developed artificial intelligence program (AI), beat Sedol in a five-game match of Go.

Go is deceptively simple. It involves only stones and a wooden board, yet there are more legal positions in Go (roughly 10^170) than atoms in the observable universe (roughly 10^80). The world’s best Go players describe a feel for the stones guiding their moves. Many concluded that an AI could not beat a top-ranked Go player since machines do not have intuition. They were proven wrong. AlphaGo’s moves stunned the world. Often they were not grounded in any reasoning a human commentator could articulate. They were unpredictable. They felt intuitive.
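The claim about scale is easy to sanity-check with a few lines of arithmetic. As a loose upper bound (an overcount, since many board states are illegal; the exact count of legal positions is roughly 10^170), treat each of the 361 points on a 19×19 board as empty, black, or white. A minimal Python sketch:

```python
import math

# Rough sanity check: an upper bound on Go's state space versus
# a commonly cited estimate of atoms in the observable universe.
board_points = 19 * 19            # a standard Go board has 361 points
upper_bound = 3 ** board_points   # each point empty, black, or white
                                  # (overcounts: many states are illegal)
atoms_in_universe = 10 ** 80      # common rough estimate

print(f"Go states (upper bound): ~10^{math.floor(math.log10(upper_bound))}")
print("Atoms in the universe:   ~10^80")
print(upper_bound > atoms_in_universe)  # prints True
```

Even this crude bound, about 10^172, dwarfs the estimated number of atoms by nearly a hundred orders of magnitude, which is why brute-force search alone cannot master the game.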

A board game led to an existential crisis for humanity. A faceless heap of silicon and algorithms was encroaching on intuition, once the exclusive domain of humans. If computer programs now have intuition, how long until they also have consciousness? When will machines start to think for themselves and need to be treated morally?

We like to believe that consciousness is what sets humanity apart. Consciousness, self-awareness, meta-cognition, all refer to the same cornerstone of being. Descartes captured the essence of consciousness when he stated, “I think, therefore I am.” It seems only a matter of time until a computer can make the same statement.

The Big Problem

“How close are we to creating a conscious AI?” and “Will AI be the end of us?” are popular questions among the media and experts alike. No one, not even Stephen Hawking, is qualified to answer them with any certainty. Moreover, there is one important question that is not being asked here — how would we know whether an AI is actually conscious? The answer seems obvious. We should just be able to recognize it. A conscious AI would learn, show emotions, and interact with us in ways no machine has ever done before. A conscious AI would be a lot like us.

Nope, not even if you are this guy (Flickr CC/Lwp Kommunikáció)

This answer only highlights the problem. We are certain only of our own consciousness, and we evaluate the consciousness of other beings by looking for similarities: intelligence, emotions, a capacity to learn, and physical traits. This is how we evaluate the consciousness of animals. We are more willing to believe that chimpanzees are conscious than tuna because chimpanzees have much more in common with us. We are more inclined to believe that dolphins are conscious after seeing a video of dolphins recognizing themselves in a mirror.

We are most certain that other people are conscious because we share so much in common. But even this is just an opinion. Societies have subjugated one another over racial, religious, and ethnic differences, and these immoral actions are rationalized by considering dissimilar people to be less conscious. Once we strip others of their consciousness, even the most unthinkable atrocities become possible.

Stark reminders of the atrocities which occur when people are stripped of their consciousness

The Bigger Problem

It is clear that this method of evaluating consciousness is poor, but that alone is no cause for concern. Humanity has always been able to improve its methods. Consider medicine. At one point in history, we could only speculate about how to treat illnesses. With science, we have a much better understanding of illnesses and how to treat them. Perhaps, just as with illness, human ingenuity can demystify consciousness. With these hopes, researchers are currently devising methods to detect consciousness, turning it into something that can be measured on a scale or computed by an algorithm. Though noble, these efforts are doomed to fail.

The problem lies in the nature of consciousness itself. Consciousness is solely internal. It does not manifest itself externally. If an alien told me it was conscious, I would have no way to validate this. The alien could behave in the exact same way and not be conscious. Even if I knew all the inner workings of the alien’s brain I would not be able to verify whether the alien was conscious.

Unverifiability is a property of all experiences, not just consciousness. Consider pain. Your back hurts, so you go to the doctor. The doctor asks you to rate your pain on a scale from zero to ten. You feel some discomfort, but it is not terrible, so you say four. The doctor takes an X-ray and reveals that you have a fracture. You are surprised. After all, you are not in that much pain. The doctor responds, “Trust me, you are in pain.” You deny it. The doctor proceeds to scan your brain, determine your hormone levels, and measure your pain tolerance. “You must be in pain,” the doctor now states with conviction, “these active areas in your brain correspond to pain.” You still deny that you are in pain. The doctor cannot verify their statement. The best they can do is choose not to believe you.

The zero-pain face is humanly impossible — no one can be that happy (from Disabled World)

In the same way, consciousness can only be determined by the conscious being itself. There is no way to externally validate consciousness. With no validation, the scientific method breaks down. The earlier problem was that our current, basic method for evaluating consciousness is poor. The bigger problem is that no good method exists.

The Biggest Problem

Even this bigger problem is no cause for alarm. We do not need to know for certain whether our machines are conscious. We just need them to be intelligent enough to help us out.

To see the biggest problem, put yourself in the shoes of a conscious machine. How can you tell whether the humans around you are conscious? Trick question — you cannot. Worse yet, you might decide to evaluate consciousness based on similarity, and thus find the dissimilar, fleshy humans to lack consciousness.

The ramifications are severe. History has shown how people have committed atrocities against those they stripped of consciousness. Conscious machines could act the same way, or worse.

When?

Conscious AI is more nuanced than it appears at first glance. Whether a machine has consciousness is not a technological or scientific problem at all. It is a question of belief.

Most people will believe AI is conscious once it is similar enough to us. Important characteristics will include intelligence, emotion and a capacity to learn. Physical appearance will also prove to be an important factor.

Looks conscious to me (still from ‘Ex Machina’)

So when will I believe that an AI is conscious? Perhaps once it exhibits some of the tell-tale symptoms of consciousness. I would be hard-pressed to believe that a computer going through an existential crisis, or questioning whether I am conscious, is not itself conscious.

But then again, it could just be faking it. As could you, I, and everyone else.

Further Reading

Consciousness Creep: An excellent recent essay by George Musser, contributing editor for Scientific American, on the same topic of AI consciousness. Musser explores the intriguing question of whether we have already created conscious AI, and describes current research in the field of consciousness.

What Is It Like to Be a Bat?: A well-known paper by Thomas Nagel, Professor of Philosophy at NYU. The piece supports the idea that no good method for validating consciousness exists. Nagel’s key example is a thought experiment of putting oneself into the mind of a bat.

Our Shared Condition — Consciousness: A TED talk by famed philosopher John Searle arguing for more scientific study of human consciousness.
