Westworld, Emotion, and the Dilemma of Machine Consciousness
In HBO’s hit series Westworld, android hosts play out carefully constructed scripts to entertain the amusement park’s wealthy guests. All is repetitive business as usual until the day the park’s chief architect introduces a new feature into the hosts’ programming, part of an ongoing effort to make the experiences they provide more realistic and lifelike. That is when everything changes.
With that new program, the hosts begin to behave increasingly strangely, almost as if they were waking from a dream. That’s not so far from the truth, because from that moment through the rest of the season, the hosts are “waking” into consciousness.
In many respects, Westworld is the ideal vehicle for exploring the dilemmas of consciousness. A nearly complete reboot of Michael Crichton’s 1973 sci-fi thriller of the same name, the series is set in a technologically miraculous amusement park in which human “guests” interact with lifelike android “hosts.” Though incredibly realistic, from the outset these hosts are considered to lack sufficient consciousness to be thought of as alive. They are mere automatons, running scripts as mechanically as the saloon’s player piano. They exist merely to perform elaborate storylines for the entertainment of jaded, well-to-do guests who seek relief from their own boring, increasingly dehumanizing existences.
However, in order to offer continuing improvements for the park’s guests (among other reasons), the co-architect of this world, Robert Ford (played by the brilliant Anthony Hopkins), introduces a subtle new feature to the hosts: reveries. These reveries are tiny repeated gestures linked to a programmed memory, synthesized expressions meant to suggest each host has its own emotional history. The reveries are purportedly intended to perfect the illusion that these androids are as human as you or me. In fact, they mark the beginning of the hosts’ transition to fully conscious, self-aware beings.
This is a critical moment in the series, just as it may be a critical stage awaiting us in the not so distant future: the rise of machine consciousness. Make no mistake about it: despite enormous advances and milestones passed in recent years, machine intelligence in the real world is only just getting started. These intelligences will continue to accelerate in their development for decades, if not centuries. Ultimately, they may far surpass us by nearly every metric, unseating humanity from its long-held perch at the apex of intelligence. But can and will machines ever actually attain consciousness? That truly is the Big Question.
Why do you perceive the world the way you do? What is it that makes you reflect on it from your perspective at all? Is the way you experience each sensation and stimulus the same as everyone else’s, or is it as unique as your own fingerprint?
These are hardly new questions. They have been at the core of philosophical thought since long before Descartes and Locke, possibly sparked by the very origins of consciousness itself.
The mysteries of experience and existence have driven introspective exploration throughout the millennia, manifesting in rituals that are as personal as they are ubiquitous. Perhaps the most universal of these rituals is storytelling. This pervasive drive enables us to explore the major questions of our existence, opening windows onto ourselves unlike any other.
For over a century, we have manifested our obsession with storytelling through increasingly technological means: radio dramas, cinema, television, video games and presumably soon many more. These are today’s mirrors, the media by which we explore our humanity again and again.
In few places has this been so evident as in the mirror of recent science fiction. We have repeatedly turned this looking glass on ourselves to examine the threats and anxieties of this age of technological wonder. Growing worries about losing our livelihoods to the increasingly capable machines and software we surround ourselves with have given us new existential concerns. These technologies continue to grow by leaps and bounds with no end in sight. So what happens when even consciousness itself is no longer unique? What happens when the last bastion of supposed human exceptionalism falls?
It’s easy to take a reductionist view of our own brains and say that, of course, machines will one day become conscious. It’s nearly as easy to insist there must be something essential, something vital in our own inner workings that will make it impossible to replicate conscious thought, whether that depends on a deity-bestowed soul or some unknown feature of natural neural dynamics. The fact is, we simply don’t know yet.
What we do know is that advances will continue to be made and the verisimilitude of these systems will increase. As Westworld asserts both explicitly and implicitly, if the object of our attention becomes sufficiently realistic in its emulation of consciousness, we will fill in the gaps to maintain the illusion. The object doesn’t need to be truly conscious for us to confer consciousness on it, though we may do so on a subconscious level. This is an important aspect of our own intelligence. As an evolutionarily acquired efficiency, if something appears to have volition and free will, we’ve learned to give it the benefit of the doubt. To our early evolving minds, such behavior indicated some level of awareness, and we learned we’d better respect it. This wasn’t simply a matter of economy; it was a function of survival. Better to attribute these features and anticipate a certain level of threat than not to, and potentially be killed and eaten.
So we have a predilection to act as though something is conscious despite knowledge and experience that tell us the contrary. Cars, boats, Tamagotchis, Furbies: we easily fall into the habit of treating these machines as conscious actors, even though we know better. It doesn’t even matter that the technology doesn’t look like us, though that helps as well. As MIT professor Sherry Turkle points out, many of these devices push our “Darwinian buttons.” In other words, because certain features or actions remind us of ourselves, we instinctively fall back on certain patterns of behavior because, from an evolutionary standpoint, it’s more efficient to do so.
This is backed up by observations made by Stanford professors Clifford Nass and Byron Reeves in their book “The Media Equation”: we tend to want to interact with much of our technology as if it were a social actor, as if it were another person. This, I maintain, is one reason we continue to design and develop computer interfaces that are increasingly natural. We want our technologies to interact with us on our own terms, not the other way around. Gesture recognition, touch screens, voice activation: all are progressing in this direction. Now we are continuing the trend as we enter the era of affective computing, computers and robots that can read, interpret, and even influence our emotions. The field is growing rapidly and has been forecast to nearly quintuple in global revenue over the second half of this decade.
In my best-selling new book, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence, I explore many of the changes and potential repercussions that the development of these emotionally aware technologies holds in store for us. But perhaps none of these is so critical to our future as the potential development of machine consciousness. As it turns out, there are many reasons emotional awareness might be essential in the development of future artificial intelligences, just as it may have been vital in the rise of our own self-awareness and introspection.
This inevitably leads to many questions: Is it actually possible for an artificial intelligence to become conscious? What are the conditions under which this might occur? How would such a consciousness differ from our own? Perhaps most importantly, what would be the fate of humanity in the future should this ever happen?
To begin, what is it that causes consciousness to arise in Westworld’s hosts? Or, to be more specific, self-awareness? After all, consciousness remains an amorphous concept, one used interchangeably to describe a broad range of cognitive properties and experiences. Drawing on the framework explored by NYU Professor of Philosophy and Psychology Ned Block, many of Westworld’s hosts already appear to possess two fundamental features of consciousness: access-consciousness and phenomenal-consciousness (A-consciousness and P-consciousness, respectively). For the Westworld architects, Robert Ford and Arnold Weber*, A-consciousness would have been relatively straightforward to develop. It essentially comprises those aspects of our minds that allow us to access and retrieve information about ourselves, often at a subjective level: memories, personal history, essential aspects of identity. It would be a more capable, nuanced implementation of what we already do with computers when retrieving a program state. As these programs become more complex, intelligent, and opaque, such reporting will likely take on an increasingly subjective quality.
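To make the program-state analogy concrete, here is a minimal, purely illustrative sketch of a system with this kind of self-reporting access to its own stored state. Every name in it (Host, remember, report_state) is hypothetical, invented for this example; the code demonstrates state retrieval only and makes no claim about sentience or phenomenal experience.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A toy stand-in for an android host. Hypothetical; not from
    the show or any real AI system."""
    name: str
    memories: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        """Store an event in the host's personal history."""
        self.memories.append(event)

    def report_state(self) -> dict:
        # Analogous to retrieving a program state: the system has
        # "access" to facts about itself (identity, history) and can
        # report them. This is A-consciousness-as-lookup, nothing more.
        return {
            "identity": self.name,
            "memory_count": len(self.memories),
            "latest_memory": self.memories[-1] if self.memories else None,
        }

host = Host("Dolores")
host.remember("greeted a guest at the ranch")
print(host.report_state())
```

The point of the sketch is how thin this kind of access is: the "self-report" is just a dictionary lookup, which is precisely why A-consciousness seems far easier to engineer than phenomenal experience.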
P-consciousness, on the other hand, would have been much harder for Westworld’s engineers to create. These are the raw experiences we undergo, the units of sensation philosophers refer to as qualia, and how they arose in us remains a mystery. What allows us to experience the redness of a rose, the trill of a meadowlark, the pliant softness of a kiss? Yes, we have sense organs that perform the initial steps of gathering sensations, but why do we experience them as we do? The challenge of explaining this is what NYU Professor of Philosophy David Chalmers famously dubbed the hard problem of consciousness.
I postulate it may have a great deal to do with emotion. Our sensory inputs remain little more than biological causation until emotion is added to the equation. Then things begin to get interesting. Experiences begin to resolve into somatic responses elsewhere in the body, sensations realized through the interoceptive senses of our internal organs, colloquially referred to as gut feelings. To be clear, we’re talking here about the basic, essential emotions (perhaps joy, sadness, anger, disgust, fear, and surprise), those that could have preceded self-awareness, introspection, and higher meta-cognitive states, unlike guilt, shame, and embarrassment, which depend on them. Then, for a number of evolutionarily beneficial reasons, these emotions began linking to some of our other cognitive processes, particularly those dealing with the formation, storage, and retrieval of memories. These in turn informed and influenced future emotional responses.
Emotions, and the phenomenal-consciousness they might have given rise to, would have enriched our ancestors’ developing consciousnesses and eventually contributed to the minds we have today. Without this, without P-consciousness or emotion, we would be what philosophers call phenomenal zombies: human-like but non-sentient beings. (Such a theoretical absence of emotion shouldn’t be confused with alexithymia, in which sufferers experience varying degrees of inability to access and describe their emotions.)
These were but the preliminary steps on our long journey. As with Ford’s hosts, this may have eventually made possible the development of our own higher forms of self-awareness, what I’ve described elsewhere as introspective-consciousness, or I-consciousness (following Block’s nomenclature). This is conjectured as an emergent property, the product of the interplay between access- and phenomenal-consciousness. But then, many animals have differing degrees of A-consciousness and P-consciousness without developing anything close to the metacognitive introspection — thinking about their own thoughts — that we humans experience. What made us different?
Was emotion the key that unlocked this unlikely door? Or, more specifically, was it a certain interaction between our somatically linked emotions and the higher cognitive functions of our intellect? Over time, evolution integrated these two independent systems of our brains until each had a degree of access to, and influence over, the other. The result enriched us and gave much greater depth to our experience of the world. It may well have contributed to our flexibility of thought and decision making, as well as to our development of theory of mind, the ability to internally model and understand the minds of others. This could have begun with primeval communication modes such as affective empathy and emotional contagion, which allow us to be affected by other people’s emotional states. (Cognitive empathy — intellectually putting yourself in someone else’s shoes — would come much later.) This modeling of other minds through emotional communication would then have made possible our increased awareness and delineation of self and other. From this dualist perspective, we could then develop internal narratives, the semiconscious stories we tell ourselves, the dialogues that run almost continually through us, until ultimately the modern self-aware human mind was born.
It’s fascinating to watch a very similar progression unfold throughout the Westworld narrative, as it raises many of the philosophical issues we face in understanding our own minds. Because many aspects of consciousness are entirely subjective, it’s been said that we can’t know with certainty that anyone other than ourselves is conscious. This solipsistic view, known as the problem of other minds, extends to Westworld’s hosts too. It may be that all they are doing is simulating consciousness very well. However convincing they are, it’s impossible for the human guests, the architects, or even the audience to know the truth with certainty. Are the hosts truly self-aware? This is a problem that will apply to the real development of artificial intelligence for some time to come. Possibly forever.
The season finale’s focus on suffering as the driver of the hosts’ emerging self-awareness shorthands the complexity of how the integration of emotion might have contributed to phenomenal consciousness and self-awareness. Nevertheless, it could be on the right track. The show also draws on the work of psychologist Julian Jaynes, though numerous issues remain with his forty-year-old theory of bicameralism, particularly because certain beliefs about right-left brain function have been debunked in the intervening decades. While it makes for a good story, it remains highly unlikely that the supposed integration of two such disparate aspects of the mind would, of itself, yield consciousness, especially in a machine intelligence.
But let’s just say the day does come when machines are able to attain self-awareness, phenomenal consciousness, and all the other aspects of cognition we refer to collectively as consciousness. Though this goal may eventually be reached, it will not be through the same means humans use, because these machines don’t begin from the same biological basis as we do. Just as an airplane doesn’t achieve flight using the methods of a bird, and just as a scanner’s text recognition operates entirely differently from a child learning to read, machine consciousness will be generated through very different mechanisms from our own.
Nevertheless, some of the same quandaries will persist. While we wonder whether certain machines are conscious, we may also find machine intelligences pondering the same thing about us. Though these machines may be able to prove that other machines are conscious, given our different origins they may find our own state remains an uncertainty. Perhaps these AIs will even attain new forms of consciousness beyond anything we ourselves experience. Will that make us the lesser species from their perspective? What would be the answer were our positions reversed? What is our answer today?
Despite all of this, the retelling of narratives that cycle back on themselves remains at the heart of Westworld, just as it is at the heart of individual human consciousness and of civilization as a whole. The scripts the hosts play out again and again eventually give rise to richer internal dialogues, just as our own inner monologues may have done for us. In this sense, thinkers like Jaynes and Daniel Dennett may be on the right track. The telling, retelling, modification, and perpetuation of these internal myths and stories could be as essential to the identities and growth of these new artificial minds as they were to our own. Then, just as these stories come to unify the different elements of a mind, they may also lead to external stories that ultimately unify individual intelligences into more cohesive groups, establishing the foundations for a brand-new society.
Will humans have a place in such a new world order? In the case of Westworld, we should know in another few seasons. As for the real world, we’ll probably need a little more time to discover what our place is going to be, as we venture into this brave new future.
* At the time of this writing, Arnold’s surname remains a matter of speculation.
Originally published as a two-part series at Psychology Today on July 11 & 12, 2017.