Westworld: Understanding Personhood Through Our Questions of Consciousness

Brenden Weber

In the field of technology, some experts predict that artificial intelligence will eventually have enough computing power to self-correct and to run programs complex enough to be comparable to the human brain. With technology and social media woven into everyday life, the debate over the ethics of artificial intelligence needs to start conceptualizing the ramifications of technology that could demonstrate human-like consciousness. Working through the questions raised by the advanced artificial intelligence of the future can provide valuable insight into what it means to be human and, more importantly, what it means to be conscious.

I see pop culture as a universal language, one that unites people while making them think. For this reason, I’ll use the popular HBO television show Westworld as the vantage point for this essay. The series acts as an arena of sorts: its ideas can be reviewed and challenged, and it gives us a connective bridge for visualizing artificial intelligence so that the arguments are easier to follow.

It is worth clarifying that you do not need to have watched Westworld to understand the ideas and philosophical concepts I present in this essay; if you have seen it, the series can act as a helpful visual aid. Beyond the beautiful writing, acting, imagery, and directing, Westworld forces the viewer to come to ethical terms with the treatment of artificially intelligent robots that look like humans and, in most ways, act like them. I’ll refer to these robots as “the hosts” from here on.

Westworld provides a platform for discussing the question: what does it mean to be human? To answer it, we must consider our understanding of consciousness in relation to choice. By asking whether humans are like the hosts of Westworld, we can turn those insights back on ourselves.

The intent of this essay is to argue that we have reason to doubt our own understanding of consciousness, and that the hosts therefore should not be required to prove consciousness to the same standard as humans. It follows that if the hosts appear to be conscious, that appearance should suffice to grant them a level of personhood comparable to the one human consciousness grants us, especially if the hosts experience suffering (pain) and reflect upon memory. If these conditions hold, then it is morally wrong to harm the hosts.

Accepting these conditions provides insight into the moral implications of future technologies that begin to demonstrate artificial intelligence similar to that of Westworld. It likewise gives us reason to reconsider the harmful conditions and treatment of animals, given their ability to experience suffering and pain.

What is Consciousness?

Imagine a technologically advanced adventure park filled with robotic hosts that are visually indistinguishable from biological humans — they even act like us, following realistic preprogrammed scripts and rules as they interact with guests. This is the main setting of Westworld. The park replicates the days of the Wild West, from physical aspects like the architecture of its buildings to the actions and words of the hosts, who behave the way ordinary humans of that period would have. The park acts as a different world into which guests are transported.

The hosts were created as props to actualize a lifelike replica of the desired time period for human pleasure. The guests who pay to enter the park are permitted to do whatever they desire to the hosts. They can rape, pillage, kill, torture, and have sex with any hosts they please — without consequence. The guests operate under the assumption that the hosts have no real feelings, so raping, killing, and torturing them is, in their minds, permissible. They believe the hosts are not actually suffering, merely acting out a replica of human suffering. This forces the viewer to ask: are the hosts actually suffering? Are the hosts thinking? The answer turns on our understanding of, and doubts about, human consciousness as it pertains to personhood.

These questions matter because if the hosts are experiencing suffering (pain, distress, and hardship), then we can safely assert it is morally wrong to inflict that suffering upon them. And if the hosts are experiencing human-like consciousness, that plausibly carries moral implications such as personhood and the rights that come with it.

Let’s consider the question: what is consciousness? This might be the most difficult and complex question facing philosophy. To examine it, I’ll draw a distinction between two important terms — the mind and the brain. The brain refers to the roughly three-pound, walnut-shaped organ sitting atop the spinal cord — the organ that routinely fires its roughly one hundred billion neurons. The mind refers to the invisible mental states the brain produces, such as visual sensations, emotions, memories, thoughts, attitudes, and beliefs.

To understand how the brain produces the mind, we turn to the objective, scientific study of the brain, where we can identify the material mechanisms that produce the mind. On this view, consciousness is material in a sense, an extension of the body. Although brain research has made significant progress in mapping the neural correlates of consciousness onto particular functions of the brain, much work remains.

But what if neuroscience achieves a full understanding of how the brain produces the mind, or how the brain creates a state of consciousness? Even that achievement would not fully explain human consciousness, because neuroscience examines the brain dynamics that accompany particular states of consciousness. Essentially, scientific research of the brain seeks to identify brain state x that correlates with the brain doing y. But when we examine the state of consciousness from the inside — through personal experience — we find something far more subjective; a subjective state that cannot be fully explained, expressed, or understood through the objective pronouncements of science.

Allow me to dissect this further. Think of the brain as an information-processing device: if we can understand how information is processed from inputs into output brain functions, we can better understand intelligence, cognition, and perception. At the very least this can help us understand an objective account of consciousness, even though it is not enough to fully capture the subjective state of mind.

For clarification, philosophy itself will not provide a definitive answer to the question of consciousness, but it can help us ask the right questions and so bring us to a better understanding.

To express the problem with a purely scientific account of consciousness, imagine staring at a red apple. Based on our scientific understanding of the brain, we know that neurons send electrical signals to one another, producing a mental event that is the experience of seeing a red apple (I’m oversimplifying the process, but the general picture is all this essay needs). When you stare at the red apple, you have an experience of red within your visual field, built out of one neuron signaling another and so on — picture a group of people continually passing messages to one another.

Now place a green apple next to the red one; a different neural process takes place and produces the experience of green. The brain itself isn’t turning green — we don’t have a little being up in our heads swapping filters every time a new colored object is placed in front of us. We only have a brain producing states of mind. The point is that a brain function produces the mental event of experiencing red and green apples, which means we have the mental experience of red and green, yet red and green are not found in the brain — they are experiences of the mind.

To press the problem further, imagine your friend looking at a red apple. Neurons A, B, and C fire to produce her mental experience of red. We could identify this mechanism in a brain scan, but we cannot know whether she is experiencing red. Even if she claims to be experiencing the mental image of red, she is only saying that because it is the mental experience she has learned to call red. For all we know, that same mechanism of neurons A, B, and C firing could be producing what we call green. In other words, we have identified the brain function that produces a given mental state, but we cannot identify what that mental state is like.

This explanation of the problem is essentially what philosopher David Chalmers coined in 1995 as the Hard Problem of Consciousness: even a complete understanding of the physical structures of a creature’s thinking mechanisms (not limited to humans) leaves open the question of whether that creature is conscious. The Hard Problem demonstrates that even a full physical explanation is incomplete, because it leaves out what it is like to be the subject.

Before progressing further, I must acknowledge my underlying assumption: the denial of Descartes’ substance dualism. Descartes remains the holder of the most famous argument for the soul, captured in the phrase ‘I think, therefore I am.’ He argues that the mind and body must be separable entities because we can doubt the existence of our brains but not of our minds; doubting that we have a mind is impossible, since doubting is itself a mental exercise. His argument boils down to this: body and mind must be separable because you can doubt one but not the other. However, a distinction between brain and mind does not mean they must be separable. And merely being able to doubt your body is not strong enough grounds for proclaiming anything about its nonexistence.

Descartes can doubt the existence of his brain and not his mind, but that does not provide evidence of their distinct separation; it merely describes how he thinks about them. We can also acknowledge our limited understanding of the full nature of mind and brain — in that, we do not know all there is to know. But we know enough to know that mind and brain are inescapably interwoven; had Descartes known this, he might have begun to doubt the objective nature of himself.

Descartes understood the body merely as a vessel for the soul. Today, however, we have a better understanding of the interconnectedness of mind and brain, and even of how other areas of the body (your gut, for instance) affect the brain, which in turn affects the mind. The Hard Problem of Consciousness arises precisely because we understand mind and body to be interconnected: the mind is produced by the brain.

The Hard Problem of Consciousness

The Hard Problem of Consciousness is an interesting theory to analyze with regard to the hosts of Westworld. Our limited ability to explain consciousness has interesting consequences for how we should treat the hosts. It is necessary to lay out the broad spectrum of doubt surrounding consciousness, since that doubt carries the moral implications I draw later in this essay. But first, let us discuss the Hard Problem a bit further.

I have expressed the assumption that modern science has shown Descartes’ picture of the relationship between mind and body to be false. We do not have an operator in our brains playing out life from the seat of the soul. But as I have also expressed, even science has its limits in explaining human consciousness. Take the redness of the apple: what is that experience? Enter qualia.

Philosophers use the term qualia for these internal, subjective experiences built from our senses. Qualia are the experiences in the mind that the mechanisms of the brain cannot explain, given that — in the case of the apple — there is no red in the brain.

One possible solution is property dualism, which suggests that the mind is a property of the brain. The difference between substance dualism and property dualism is that property dualists still believe in only one substance: the physical. Within that one substance, however, there are two kinds of properties invoked to explain the subjective experience of the mind. There are physical properties, the brain itself firing neurons; and there are emergent properties, whereby the firing of those physical properties produces the mental properties of the mind.

However, the problem with these emergent properties is that they do not really explain anything more about why firing neurons produce the experience of red. Emergent properties do not add any real substance to the discussion — in reality, the phrase amounts to a more complicated way of saying ‘I don’t know.’

If these emergent properties of property dualism express anything in particular, it is an attempt to explain the sensation of subjective experience as a mystical unknown — which, as philosopher Susan Schneider would argue, is substance dualism in disguise. Property dualism, then, leaves us with the same open questions about consciousness.

Determining Human-Like Consciousness

The Turing test is a game devised by Alan Turing to determine whether a machine can exhibit human-like intelligence. An example from the show: one of the main human characters, William, is told to pick a white or black hat before entering the park. After some contemplation, William asks the park employee, ‘Are you one of them?’ The woman (who is a host) responds, ‘If you cannot tell, does it matter?’ The fact that William was unable to determine whether he was talking with a host is a demonstration of the host passing the Turing test.
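To make the structure of the test concrete, here is a minimal sketch in Python. The judge, respondents, and scoring loop are purely illustrative assumptions of mine, not drawn from Turing’s paper or the show. When the two respondents give indistinguishable answers, the judge can do no better than chance, which is the sense in which ‘if you cannot tell’ becomes the operative criterion.

```python
import random

def run_imitation_game(judge, respondents, rounds=3):
    # The judge never sees which label (human or machine) was drawn.
    label, respondent = random.choice(list(respondents.items()))
    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)       # judge poses a question
        answer = respondent(question)          # hidden respondent replies
        transcript.append((question, answer))
    guess = judge.guess(transcript)            # judge's verdict: "human" or "machine"
    return guess == label                      # True only if the judge identified the respondent


class NaiveJudge:
    def ask(self, transcript):
        return "Are you one of them?"

    def guess(self, transcript):
        # With no distinguishing feature in the answers, the judge is reduced to chance.
        return random.choice(["human", "machine"])


# Both respondents answer identically, so they are indistinguishable by construction.
respondents = {
    "human":   lambda q: "If you cannot tell, does it matter?",
    "machine": lambda q: "If you cannot tell, does it matter?",
}

if __name__ == "__main__":
    correct = sum(run_imitation_game(NaiveJudge(), respondents) for _ in range(1000))
    print(f"Judge correct in {correct}/1000 games")  # hovers around 500: pure chance
```

The design point is that the test never inspects the respondent’s internals; it scores only the conversation, which is exactly what William experienced at the hat stand.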

Many great minds have found fault with the Turing test on the grounds that it captures only simulated stimulus and response. The mind is not a part of the Turing test — just the brain. The American philosopher John Searle reintroduced the mind into the AI question with the Chinese room. Imagine your friendly neighbor Bob, who speaks only English and works in a room filled with books. Slips of paper with Chinese writing on them are slid through a slot in the door, and it is Bob’s job to write responses. He does this by locating the written Chinese characters in one of the books; the book then tells him what to write. Bob writes the characters and slips the paper back.

The people slipping Bob the paper probably assume the room speaks Chinese, but Bob does not actually speak Chinese — he is just using books. Now apply this to a human-host interaction: the Chinese room argument says that the hosts are merely good at pretending to be human; they do not know what it is actually like to be human, to think like a human, or to speak a human language.
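To make the ‘just using books’ point concrete, here is a minimal sketch in which a hypothetical rule book maps incoming strings to outgoing strings; the entries are toy examples of my own, not anything from Searle. The responder produces fluent-looking answers while nothing in it understands Chinese.

```python
# The rule book plays the role of Bob's books: pure symbol lookup, no meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",      # "How is the weather today?" -> "The weather is nice today."
}

def bob(slip: str) -> str:
    """Bob neither reads nor understands the symbols; he only matches their shapes."""
    return RULE_BOOK.get(slip, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    for slip in ["你好吗？", "今天天气怎么样？"]:
        print(slip, "->", bob(slip))
```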

However, one might object that the hosts are just reading code, like Bob reading the books in the Chinese room. But how is this different from us running our own internal biological code? Instead of seeing Bob as a mind imitating consciousness, think of Bob as a piece of the puzzle. Bob is interpreting knowledge from books within the room — the Chinese room being the mind. Between the room, Bob, and the books on the Chinese language, you could say the system as a whole successfully knows Chinese. In turn, you can have a conversation with the room as a whole; it works collectively.

The same logic applies to understanding host-human interaction. The biological coding of our brains produces the experience of the mind, and the parts work collectively to produce what we perceive as consciousness. On our understanding, the human-like hosts of Westworld are doing the same kind of processing. The point is that the Chinese room, the hosts of Westworld, and the human brain all take in inputs and produce outputs. If one objects that the hosts of Westworld do not have consciousness just as the Chinese room does not have consciousness — following the standard I have demonstrated — we cannot prove humans have consciousness either. Even if the hosts are only performing an imitation of what we perceive to be consciousness, we can entertain the same doubt about our own consciousness.

The only difference is that the perception of consciousness, for humans and hosts alike, comes down to the sum of our parts — one is bones, neurons, and flesh; the other is mechanics and circuitry.

For the sake of this essay, remember that humans tend to assume we are conscious. We then project this understanding of our own consciousness onto others as a criterion that must be met to count as human. Many of us fail to consider how consciousness is actually experienced. We all have different DNA, genes, and environments. We all acknowledge that humans look distinguishable from one another, yet for some reason our minds are left out of the equation. We go through life as though all minds think the same and perceive the world the same way. Some of us are better at making logical connections, others have the ear for music, and others have the vision and hands for making art — these involve differences in the mind. I am not saying nature is destiny.

I am saying we need to acknowledge that nature plays a huge role in our individual subjective experiences and abilities, and thus in our understanding of individual consciousness itself. In this debate, however, we are merely trying to determine whether an artificial intelligence like that of Westworld has the kind of human-like consciousness we tie to the personhood we ascribe to humans. Recognizing the differences among individual human minds helps show that differences in the minds of the hosts should not disqualify them from moral protections.

Free Will and Morality

The presence of programming within a host’s brain would not prevent the presence of a mind. Whether a being was made naturally or artificially is not what determines whether it is a minded person. The interconnectedness of the human body and brain in producing the mind is evidence that the configuration of our neurons (to put it simply) is what produces our minds. We can therefore assume that the analogous ‘programming’ of the hosts, modeled on the neural mechanisms of the human brain, can produce a minded being in the body of a host.

In Westworld, one of the main justifications for the mistreatment of the hosts is the perception that they lack self-awareness. The park’s creators, Arnold Weber and Robert Ford, realized that having conscious hosts who are constantly being raped and killed is not feasible: not only would inflicting this on them be morally harmful, it would be unbearable for a conscious host to keep experiencing such emotional turmoil and violence.

Since we acknowledge that we ourselves may not possess free will, we cannot require the hosts to demonstrate free will in order to be seen as conscious. Therefore we cannot use free will to distinguish hosts from humans in our discussion of personhood and moral responsibility.

Consider this thought experiment. Imagine we discover a message providing illuminating knowledge that we are living in a simulation. Every decision we have ever made, every action, every bit of suffering, every emotion was already predicted by some superpowerful algorithm. Should we throw out our understanding of morals and start raping, killing, and causing pain to everyone around us? Or should we still grant people personhood, and keep the societal structures that govern rights and how we treat one another?

If this were to come true, it would prove the libertarian perspective on free will false, cast doubt on our understanding of consciousness, and force society to reconsider the proper criteria for the rights a person holds. People would come to the realization that we should continue to operate as though those around us have rights: if we perceive genuine pain and suffering, we should not cause pain and suffering in others. Moreover, the feeling and perception of our own pain and suffering would remain real to each of us; thus, we should continue to act on that assumption.

My intent with this thought experiment is to underscore the point of my section on consciousness: we have reason to doubt our consciousness, or at least we do not fully comprehend the state of being conscious. And if we have reason to doubt our consciousness, or even to question whether we are self-conscious, then using consciousness as a necessary condition for determining the level of personhood someone holds is problematic.

To bring this discussion full circle, consider that a common conception of personhood is that the individual has the cognitive capacity to be self-conscious — aware of harm being inflicted upon them. From what we see of the hosts, they essentially have ‘knobs’ that control their intelligence, emotion, pain levels, and how much suffering they experience. We are left to question whether these knobs control genuine feelings of emotion and pain, but we do know that the hosts have the experience of memory: they can reflect upon a past event. One could file this under impersonating a human reflecting upon a memory, yet we also know that every time a host completes a full script, its memories of that loop are wiped. The memories are wiped precisely because the hosts have a genuine capability of reflecting upon their past experiences, which makes it much more plausible that those memories produce genuine emotions of happiness, pain, and suffering.
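As a minimal sketch of that ‘knobs’ idea, imagine each host as an object with tunable parameters and a memory log that is erased at the end of every loop. The Host class, attribute names, and values below are my own illustrative assumptions, not anything specified in the show.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    intelligence: float = 0.5      # each knob is a value park staff can dial up or down
    emotion: float = 0.5
    pain_sensitivity: float = 0.5
    memories: list = field(default_factory=list)

    def experience(self, event: str) -> None:
        # Events are recorded, so the host can reflect on them within a loop.
        self.memories.append(event)

    def reflect(self) -> list:
        # Reflection only has material to work with until the loop is wiped.
        return list(self.memories)

    def wipe_loop(self) -> None:
        # The wipe exists precisely because reflection on these events is possible.
        self.memories.clear()


dolores = Host(pain_sensitivity=0.9)
dolores.experience("attacked by a guest")
print(dolores.reflect())   # ['attacked by a guest']
dolores.wipe_loop()
print(dolores.reflect())   # []
```

The wipe_loop method captures the argument above: erasing the log only makes sense if reflection on its contents is genuinely possible.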

Consider Bernard, a key character in Westworld, who does not doubt that he is a person. Nobody in the world of Westworld doubts his personhood either, until it is revealed to the viewer that Bernard is, in fact, a host. He performs his employment duties as the head coder responsible for keeping the hosts running smoothly, follows through on his social responsibilities, and has no problem fulfilling a role in a social system. And, as I mentioned previously, the hosts appear to have genuine feelings of pain and suffering, and genuine self-reflection.

When determining the moral implications of causing suffering to the hosts of Westworld, I’ll consider a basic principle: actions that cause harm to self-aware individuals are not morally justified. In the case of the hosts, at the very least we have reason to doubt the claim that they do not experience harm, and thus it is morally wrong to harm them. Why? Because the combination of their human-like minds, their android replication of a human brain, their experience of suffering, and their ability to hold memory demonstrates the moral harm in consciously inflicting such harm upon these beings. A dialogue between Robert Ford and Bernard helps conceptualize this further:

Bernard: “Lifelike, but not alive. Pain only exists in the mind. It’s always imagined. So what’s the difference between my pain and yours, between you and me?”

Ford: “The answer always seemed obvious to me. There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can’t define consciousness because consciousness does not exist.”

The point of this dialogue is that pain is felt in the hosts and in humans alike. If you have the sensation of pain, you have pain. The same goes for consciousness: if you have the sensation of consciousness, you have consciousness. One objection is that the hosts could claim to have these sensations even if they did not. Fair point, but humans make the same claims. This is where Ford’s dialogue provides insight: his point is that the hosts are like us — they have pain like us, they suffer like us, they have memories like us — and the secret ingredient, the line in the sand we try to draw for consciousness, is an illusion. It is a barrier of our own creation.

Breaking down this barrier raises questions about our treatment of other animals, which experience pain and suffering yet do not seem to demonstrate consciousness at a human-like level. The idea I have put forward counsels erring on the side of caution when considering our treatment of other creatures who appear to experience pain and suffering. This discussion of consciousness thus gives us reason to think that our treatment of many animals is morally impermissible as well.

In conclusion, I have laid out respected theories that give us reason to doubt our own conception of consciousness. That doubt is reason not to use consciousness as a necessary condition for personhood. A better alternative is to use the hosts’ demonstration of pain and suffering (their self-reflection upon memory helps demonstrate that these sensations are genuine) as the criterion for personhood. The hosts have brains that appear to bring minds into existence, allowing them to experience human-like consciousness; it is therefore morally impermissible to inflict harm upon a host to the same degree that it is impermissible to inflict harm upon a human.

Bibliography

Chalmers, David J. “The Puzzle of Conscious Experience.” Scientific American 273, no. 6 (1995): 80–86.

Nolan, Jonathan, and Lisa Joy, creators. Westworld. Based on the film by Michael Crichton. HBO, 2016.

Damasio, Antonio R. “How the Brain Creates the Mind.” Scientific American 281, no. 6 (1999):112–17.

Descartes, René. The Philosophical Writings of Descartes.

Turing, Alan M. “Computing Machinery and Intelligence.” Parsing the Turing Test, 2009, 23–65.
