Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.
In a world where androids look, feel, act, and think like humans, where does the line between man and machine lie? Is there even a line at all?
A short background
These questions are the premise behind HBO’s Westworld series, in which human guests visit a Western theme park and live through narratives, or story lines, interacting with nearly human android hosts as they complete sets of objectives. The hosts are shown to have deep learning capabilities operating within the constraints of programmed personalities and programmed long-term memories called backstories, with room for self-improvement known as improvisations. But because the hosts’ short-term memories can be erased, guests are free to abuse them as they like, and unfortunately, they do so quite frequently.
A core theme Westworld focuses on is the paradox of humanity and civility, where the android hosts, at times, behave more human than their guest counterparts.
And through the evolution of the hosts’ improvisational learning, where they can edit and improve their own internal code, the TV show raises questions on the nature of consciousness and thinking, and whether guests and hosts are so different after all.
The Riddle of the Sphinx
Season 2’s “The Riddle of the Sphinx” sits at an excellent intersection of cognitive science and philosophy. At this point in the second season, some hosts have improvised enough to think novel thoughts and act upon them, a step up from the conversational loops they usually fall back on. Having achieved this level of “consciousness”, one of our main characters, a host named Dolores Abernathy, has decided upon retribution for the torture and abuse the guests have dealt her. She wants to create a society where the hosts are treated as equals. Of course, this may or may not involve wiping out the human guests.
Meanwhile, a side narrative set in the past shows us what happens when you try to inject a human consciousness into an android body. The researchers at the park are trying to upload and test the consciousness of the long-dead founder of Delos Corporation (the fictional company that created Westworld), James Delos, into a host body designed in his exact physical likeness. If successful, mankind will have achieved immortality; if not, we get some really interesting philosophical questions about the nature of the human mind and body…
An intersection of cognitive science and philosophy
There are many ties to the mind-body problem, the computer-mind analogy, and the dualist and functionalist perspectives of mind.
At its core, the mind-body problem asks whether “consciousness” is a physical property or whether it is something more, such as a soul or spirit.
Dualism proposes that the mental realm is entirely different from the physical realm, but that they can be linked. Specifically, within property dualism, the mind and body can both be composed of the same physical matter, but different properties emerge from the two.
The attempted resurrection of James Delos shows the difference between his “consciousness” (the show never explains how it is stored or what it is, though it alludes to some form of extraordinarily advanced code and hints that it may not be a “mind” at all) and his android body.
Delos’ mind has been separate from his body for decades, and no host “hardware” has been sufficient to store him. In the researchers’ testing, Delos is able to adapt initially to his android body, but quickly reaches a “cognitive plateau” where the host mind cannot process his human nature.
This raises the idea that the mind and body are different entities, possibly composed of different substances (as we never find out what medium Delos’ mind is in), but that a physical brain that gives rise to mind may come in different categories of “processing power”.
This also exemplifies Gilbert Ryle’s famous counterargument to dualism that the
“mind is not any particular component of the brain, but rather, all the parts working together as a coordinated, organized whole”.
Because Delos’ host body is insufficient to give rise to his full mind, reverting instead to its built-in programming, the writers toy with the idea that perhaps Delos’ true consciousness was just a farce, and that the mind emerges from the brain, not the other way around.
On the other hand, functionalism proposes that a
“mind could conceivably be implemented in any physical system, artificial or natural, capable of supporting the appropriate computation” (Friedenberg).
This is evidenced through Dolores, the oldest host in the park who has reached a level of “self-consciousness”. In a previous episode, she is in a philosophical conversation about life with Dr. Robert Ford, the prime researcher and co-founder of Westworld, when he asks her for her viewpoint. Dolores answers with a genuine response, to Ford’s surprise, as this was not part of her programmed conversational loop.
From this scene, a functionalist perspective that “mind” can arise from various computing systems is supported. Dolores, though an android, has learned enough about the world from self-improvisation that she can think her own ideas.
Yet we as viewers are left to question whether the mind and body are separate entities or not. At the end of their conversation, Ford asks Dolores another question and she resorts to her scripted response:
“Some people choose to see the ugliness in this world. The disarray. I choose to see the beauty. To believe there is an order to our days, a purpose.”
In case you are unfamiliar, this is the line she says in the beginning of nearly every episode of the first season. Instead of handing an answer to us, the writers decide to let us question this functionalist perspective ourselves. Perhaps hosts are capable of storing minds, perhaps they have yet to evolve, or perhaps they will never progress much from their encoded framework.
Now let’s take a look at some of the current literature tying together cognitive science, philosophy, and Westworld. Among the most relevant is a paper by Professor Konstantin Rayhert from the Department of Philosophy and Methodology of Knowledge at Odessa I. I. Mechnikov National University in Odessa, Ukraine.
Dr. Rayhert (2017) addresses the philosophical views of consciousness presented in Westworld as well as their implications to the creation of a strong artificial intelligence. He states:
[There] are two variants of self-identification of artificial intelligence: artificial intelligence, possibly modeled in the image and likeness of natural intelligence, is capable to self-identify (or at least to self-evaluate) as the same intelligence as natural intelligence (to put between itself and it an equal sign or to put itself in one row of equivalent intelligences) or as the intelligence that differs from natural intelligence (points to the difference between them) — the latter can lead to ‘awareness’ of the artificial intelligence of its uniqueness or even of its superiority (p. 89).
Dr. Rayhert’s viewpoint marks the difference between regular hosts and Dolores. While the ordinary hosts mimic “natural intelligence” and act no differently from their human guests, they show no evidence of internal awareness, only basic self-evaluation.
Dolores, however, is able to understand the subtle differences between herself and the guests, which leads to her understanding of human abuse as something to stand up to, and her proclamation that someday
“a new god will walk. One that will never die. Because this world doesn’t belong to you or the people who came before. It belongs to someone who has yet to come.”
Chilling, I know.
Strong and weak AI
Dr. Rayhert’s paper is quite illuminating as it differentiates between “strong” and “weak” artificial intelligence.
Friedenberg and Silverman, in their classic textbook of cognitive science, mention that proponents of strong AI believe ever-more complex machines will someday exhibit consciousness, whereas proponents of weak AI believe that the mind can never be reduced to an artificial process.
That being said, weak AI is not necessarily “unintelligent”, but rather “un-novel”: although most of the hosts in Westworld are highly intelligent, with considerable improvisational power, they cannot expand beyond their given roles or process new, unrelated information.
For example, if a host were to see a fallen photograph of a modern-day city, it would not be confused or interested, but would glance over it as irrelevant to its story line. On the other hand, when Dolores does look at such a photograph, she is intrigued, curious as to what it is and where it could be.
She can think beyond her programming.
Here, the regular hosts can be described as weak AIs as they perform well within the roles they are given and nothing more, whereas Dolores is able to understand the significant differences between herself, other hosts, and humans. She also asks questions that are entirely borne of her own “thoughts”, and takes action to resolve these questions.
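This contrast can be sketched in code. Below is a hypothetical toy model (nothing from the show’s fictional systems; every name and string here is invented for illustration): a scripted “weak AI” host answers only in-role prompts and dismisses everything else, while a more Dolores-like agent treats novelty as something to question.

```python
# Hypothetical sketch: a "weak AI" host dismisses anything outside its
# scripted role, while a more "Dolores-like" agent flags novelty instead.

SCRIPT = {
    "good morning": "Morning! Headed into town?",
    "where is the saloon": "Just down the main street, past the stable.",
}

def scripted_host(prompt: str) -> str:
    # Out-of-role input is glanced over as irrelevant to the story line.
    return SCRIPT.get(prompt.lower(), "It doesn't look like anything to me.")

def curious_host(prompt: str) -> str:
    # In-role input gets the scripted answer...
    if prompt.lower() in SCRIPT:
        return SCRIPT[prompt.lower()]
    # ...but a novel stimulus becomes a question rather than a dismissal.
    return f"I've never seen this before. What is '{prompt}'?"

print(scripted_host("a photo of a modern city"))  # scripted dismissal
print(curious_host("a photo of a modern city"))   # novelty-driven question
```

The design point is the fallback branch: the weak AI maps the unknown to a canned default, whereas the stronger agent lets the unknown change its behavior.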
The bicameral mind
We can also examine the screenwriters’ viewpoint of consciousness via Julian Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind.
Episode 10 of the first season is entitled “The Bicameral Mind”, and explores Jaynes’ theory that, until roughly 3,000 years ago, before the full development of human language, humans did not have the same self-awareness that people have now, but rather operated in a state where one part of the brain was “speaking” (in the form of vague hallucinations experienced as divine commands) and the other half would obey.
Essentially, in his viewpoint, the extent of consciousness was to listen to signals and follow them; higher-order functions like self-awareness, meta-reflection, and reasoned articulation were undeveloped (Jaynes, 1976) until the advent of language. Or, put even more succinctly, prior to true language, humans followed “the voice of God”.
Although it is interesting that the screenwriters considered the bicameral mind as an analogy to the host-human problem, it is difficult to agree with Jaynes’ viewpoint that humans were “unaware” as late as the Bronze Age.
But there is support for the claim that language is a necessary yet insufficient precursor to consciousness. Consider John Searle’s famous thought experiment, the Chinese Room:
In this situation, we have a man with a simple computer in a room. An outside observer slips sentences written in Chinese under the door, and the man uses his computer program to manipulate the symbols and numerals into an appropriate response. He himself does not understand Chinese; he simply follows the computer’s instructions. He then slips his response out the door.
To the observer outside, however, it would seem as if the man in the room knows Chinese.
From this thought experiment, Searle proposes that true consciousness requires intentionality and meaning, and language is the medium that carries them. But Searle argues that symbol manipulation alone is not enough: just as a formal rule book does not suffice to understand Chinese, producing the right scripted response carries no underlying “intentionality”.
Of course, it may be argued that, over time, the man in the room will begin to recognize familiar characters, and through repeated action, he will gain a rudimentary foothold in Chinese. But this argument deals with theories of pattern recognition and attention, which are less philosophical and more psychological.
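Searle’s room can be illustrated as a pure lookup table. Here is a hypothetical Python sketch (the rule-book entries are invented examples): the function maps Chinese input to Chinese output by rote matching, and nothing in it represents meaning.

```python
# Hypothetical sketch of Searle's Chinese Room as a pure lookup table:
# the "man" maps input symbols to output symbols by rule, with no
# understanding of what either side means.

RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我没有名字。",   # "What's your name?" -> "I have no name."
}

def chinese_room(note: str) -> str:
    # The man matches shapes against the rule book and copies out the
    # answer. It is syntax all the way down; no semantics anywhere.
    return RULE_BOOK.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

# To the observer outside, the room appears to "speak" Chinese.
print(chinese_room("你好吗?"))
```

The point of the sketch is that the program’s correctness is entirely independent of understanding: swap the Chinese strings for arbitrary tokens and the code behaves identically.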
If you ended up with more questions than when you started, you are not alone. Because the nature of philosophy is open-ended, we may never find an answer to what mind is and what realm, if any, it operates in. The best we can do is analyze different viewpoints, choose which ones we would like to believe, and formulate our own novel thoughts.
In the words of the fictional Delos co-founder Dr. Ford,
“we can’t define consciousness because consciousness does not exist. Humans fancy that there’s something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next.”
Dr. Ford’s words may sound pessimistic, but consider life and the “loops” that we call daily routines. Do you question your personal narrative? Do you question the nature of your thoughts, your mind?
Essentially, are we as humans truly intelligent? Or do we operate within scripted boundaries ourselves, self-imposed or otherwise?
I have no answers to these questions, but the theories behind computer science, cognitive science, and philosophy offer excellent ground on which to base our response.
And so, I leave you with one last question:
Considering Clarke’s third law that any sufficiently advanced technology is indistinguishable from magic, is it not possible that the human mind is simply a sufficiently advanced piece of biotechnology?
Friedenberg, J., & Silverman, G. (2011). The philosophical approach. In Cognitive science: An introduction to the study of mind (2nd ed., pp. 49–81). SAGE Publications.
Jaynes, J. (1976). The Origin of Consciousness in the Breakdown of the Bicameral Mind. Houghton Mifflin/Mariner Books.
Rayhert, K. (2017). The philosophy of artificial consciousness in the first season of TV series ‘Westworld’. Skhid, 5(151), 88–92. doi:10.21847/1728-9343.2017.5(151).117438