Is conscious AI possible?
A summary of Anil Seth’s TED talk “Your brain hallucinates your conscious reality”
Anil Seth’s TED talk “Your brain hallucinates your conscious reality” is one of the most interesting TED talks I’ve seen recently. In his talk, Seth concludes that “the prospects for a conscious AI are pretty remote.”
However, I think that what Seth’s argument actually proves is much more profound: it follows from his argument that conscious artificial intelligence is not possible at all. More specifically, this conclusion follows from the very definition of consciousness that he proposes, and is therefore a logical impossibility.
“Just making AI smarter isn’t going to make them sentient.”
In this post, I attempt to summarise Seth’s argument, which is elegant and beautifully made. Incidentally, I don’t actually agree with all of the premises of his argument (I’ll outline my reasons why in another post but I allude to some of the reasons below), but I do agree with the principles behind his premises and with his conclusion.
The Argument in a Nutshell (tl;dr)
- Premise 1: Consciousness is a form of controlled hallucination;
- Premise 2: Controlled hallucinations (and therefore consciousness) necessarily happen “with and through our living bodies”;
- Premise 3: AI do not have living bodies;
- Conclusion: It is not possible for AI to be conscious.
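The syllogism above is a straightforward chain of implications, so the conclusion follows mechanically from the premises. As an illustration (my own formalization, not something from the talk; the predicate names are invented), it can be sketched in Lean:

```lean
-- Seth's argument as a chain of implications over some type of entities.
theorem no_conscious_ai
    {Entity : Type}
    (IsAI Conscious ControlledHallucination HasLivingBody : Entity → Prop)
    -- Premise 1: consciousness is a form of controlled hallucination
    (p1 : ∀ e, Conscious e → ControlledHallucination e)
    -- Premise 2: controlled hallucination requires a living body
    (p2 : ∀ e, ControlledHallucination e → HasLivingBody e)
    -- Premise 3: AI do not have living bodies
    (p3 : ∀ e, IsAI e → ¬ HasLivingBody e) :
    -- Conclusion: no AI is conscious
    ∀ e, IsAI e → ¬ Conscious e :=
  fun e hAI hConscious => p3 e hAI (p2 e (p1 e hConscious))
```

The formal version makes clear where the real work lies: the conclusion is only as strong as premises 1 and 2, which is exactly what the rest of the talk argues for.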
Let’s delve a little deeper into this argument.
Premise 1: Consciousness is a form of hallucination
How does Seth prove that consciousness is a form of hallucination? It all starts with the definition of consciousness that he proposes in this talk. For Seth, consciousness necessarily comprises two parts: (1) consciousness of the world and (2) consciousness of the self.
Consciousness of the world is our experience of the external world. It allows us to navigate the world effectively (find shelter, avoid being eaten by predators) and causally interact in the world (build a fire, hunt prey, etc.). Seth describes this with the beautiful metaphor of consciousness as a kind of “multisensory, panoramic, 3D, fully immersive inner movie.”
As Seth points out, however, the brain can’t actually perceive anything directly; it’s stuck inside a skull, combining the information it receives via our senses with (crucially for this argument) our existing beliefs and prior expectations about the world to generate consciousness.
“What we perceive is our brain’s best guess of what’s in the world based on sensory input and our beliefs and expectations.”
He uses the example of the checkerboard illusion to illustrate this.
Under normal viewing conditions, squares A and B in the image appear to be different shades of grey. In fact, they are exactly the same shade. As Seth explains, “…what’s happening here is that the brain is using its prior expectations built deeply into the circuits of the visual cortex that a cast shadow dims the appearance of a surface, so that we see B as lighter than it really is.”
Thus we get the concept of the brain as a “prediction engine.”
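The “prediction engine” idea can be made concrete with a toy Bayesian sketch (my illustration, not a model from the talk): treat the percept as a precision-weighted blend of a prior expectation and noisy sensory evidence. With identical sensory input, a different prior yields a different “best guess”, much as the shadow prior does in the checkerboard illusion:

```python
def best_guess(prior_mean, prior_var, sensory_mean, sensory_var):
    """Posterior mean for two Gaussian cues: the brain's 'best guess'.

    The less certain (higher-variance) cue gets less weight.
    """
    w_prior = sensory_var / (prior_var + sensory_var)
    return w_prior * prior_mean + (1 - w_prior) * sensory_mean

# Checkerboard-style case (illustrative numbers): the senses report the
# same luminance (0.5) for squares A and B, but the prior "a surface in
# shadow is really brighter than it looks" pushes the percept of B up.
percept_A = best_guess(prior_mean=0.5, prior_var=0.1,
                       sensory_mean=0.5, sensory_var=0.05)
percept_B = best_guess(prior_mean=0.8, prior_var=0.1,
                       sensory_mean=0.5, sensory_var=0.05)

# Same sensory input, different priors, different conscious "guesses".
print(percept_A, percept_B)
```

The point of the sketch is only that the percept is never the raw signal: it is always a weighted compromise between signal and expectation, which is what Seth means by calling the brain a prediction engine.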
He then uses a more extreme example: an experiment using virtual reality and an algorithm similar to Google’s Deep Dream to show that when these associations and predictions become too strong, a hallucinatory conscious experience is generated.
In the example, the algorithm forms overly strong associations between patterns in the world and images of dogs.
He concludes, therefore, that perception is a kind of hallucination, just a more “controlled” one. I think, in this context, it’s useful to think of all perception as lying on a hallucination scale: ‘normal’ conscious perception, with normal levels of association and prediction, sits at one end; hallucinatory conscious perception, with extreme levels of association and prediction, sits at the other.
Seth then argues that the other fundamental part of consciousness — consciousness of self — is also a form of controlled hallucination.
Consciousness of the self is our awareness of our bodies and of ourselves as things that exist and think — the self-reflexive property of consciousness. As Seth describes it, this is the “…specific experience of being you or being me. The lead character in this inner movie, and probably the aspect of consciousness we all cling to most tightly.” For Seth, there are several distinct aspects to our concept of self-consciousness, including the perspectival aspect (the experience of perceiving the world from a first-person perspective), the volitional aspect (the experience of having intentions directed towards the world and being able to causally influence it) and so on. For the purposes of his argument, Seth focuses on the aspect of bodily self, of having and being a body.
Now, to prove that self-consciousness — more specifically, our concept of bodily self — is also a form of controlled hallucination, Seth gives examples that seem to show that the brain generates the concept of bodily self via a kind of prediction, or ‘best guess,’ of what is and what is not a part of our body. One example that illustrates this is the rubber hand illusion.
“In the rubber hand illusion, a person’s real hand is hidden from view, and that fake rubber hand is placed in front of them. Then both hands are simultaneously stroked with a paintbrush while the person stares at the fake hand.”
After a while, participants report a feeling, or sensation, coming from where the rubber hand is located. It’s as if they have assimilated the rubber hand into their bodies by virtue of the fact that the rubber hand is located where the brain expects their hand to be while perceiving an action that would normally generate sensory input for it.
“…the congruence between seeing touch and feeling touch on an object that looks like a hand and is roughly where a hand should be, is enough evidence for the brain to make its best guess that the fake hand is in fact part of the body.”
So Seth has shown that, just like our consciousness of the world, self-consciousness is also generated by our brain via a prediction, or approximation, of what is there. In normal cases, we experience sensations as arising from our real bodies. In cases of illusory or hallucinatory perception, our conscious experience is determined by incorrect beliefs and expectations about where our bodies are located and where sensory input is coming from (which leads to the experience that a disembodied rubber hand is part of our body). I think both cases sit on Seth’s scale of hallucination.
We therefore reach premise 1: consciousness (both consciousness of the world and consciousness of the self) is a form of controlled hallucination.
Seth also points to examples of interoception — our brain’s perception of the states of our internal organs — to illustrate that we also perceive the body not just from the outside in, but also from the inside out. I won’t go into the details here, as the conclusion is the same: that consciousness is a kind of “controlled hallucination that [has] been shaped over millions of years of evolution to keep us alive in worlds full of danger and opportunity.”
Premise 2: Controlled hallucinations (and therefore consciousness) necessarily happen “with and through our living bodies”
I think that premise 2 follows naturally from the arguments Seth has made so far. The examples above support a bi-directional view of consciousness: consciousness is generated not just from the outside in (from the sensory inputs we receive from the external world and our bodies), but also from the inside out (from our brain’s prior expectations and beliefs about our physical bodies and the world).
Arguing for a bi-directional view of consciousness leads to the view that consciousness (as defined here) is inextricably bound up with being a biological organism in the world. We don’t simply passively ‘sense’ the world; we actively construct it, with, and as a result of, our biological bodies. Or, as Seth puts it, “we predict ourselves into existence.”
“So our most basic experiences of being a self, of being an embodied organism, are deeply grounded in the biological mechanisms that keep us alive. And when we follow this idea all the way through, we can start to see that all of our conscious experiences, since they all depend on the same mechanisms of predictive perception, all stem from this basic drive to stay alive. We experience the world and ourselves with, through and because of our living bodies.” (emphasis mine)
Premise 3: AI do not have living bodies
Premise 3 is not argued for, but is presumably uncontentious: AI do not have living bodies. Most contemporary AI is locked inside a machine; AlphaGo, for example, does not have a body. More importantly, no matter how sophisticated real-world robots become, they will never be made of the same organic material as human beings or other biological organisms, which are the result of millions of years of evolution. What Seth has argued is that it is our flesh and blood, and the way we have evolved, that fundamentally determine the nature of our conscious experiences, and this will simply never be true of AI.
Therefore, his conclusion logically follows:
Conclusion: it is not possible for AI to be conscious
As he summarises it:
“…what it means to be me cannot be reduced to or uploaded to a software program running on a robot, however smart or sophisticated. We are biological, flesh-and-blood animals whose conscious experiences are shaped at all levels by the biological mechanisms that keep us alive. Just making computers smarter is not going to make them sentient.”
…or is it?
This conclusion — that conscious AI is not possible — follows logically given the definition of consciousness that Seth proposes in this talk. It does not necessarily follow from other definitions of consciousness. (If you wanted to propose an alternative definition, you would have to show that Seth’s definition is not necessary, not sufficient, or neither.)
Similarly, this argument only proves that AI could not be conscious like us, or like other biological organisms. AI could, however, be conscious in a different way. The challenge would be to show that this ‘other way’ is a valid way of being conscious. The concept of consciousness is so central to our existence that we wouldn’t want to cheapen it by setting criteria that AI could trivially meet. Equally, though, it would be interesting to explore alternative definitions of consciousness and to consider whether AI could meet those criteria.
Whether or not you agree with it, I think Seth’s argument illustrates an important shift of focus in the current AI debate. Whereas previous debates centred on tests for intelligence, I think a more relevant litmus test for AI (and what we all really care about) is whether AI could be conscious.
I’ve embedded the full TED talk below.