Ryan Houseman
Science & Technoculture in Film
Dec 19, 2017


Artificial Intelligence and the Ethical Human Response

Imagine two scenarios: First, you type an address into your smartphone, but the navigation app can’t locate your desired destination; second, you stop at a gas station to ask for directions, but the attendant behind the counter doesn’t know how to reach the location. Now, imagine that your response to both instances is the same. In the first scenario, you are frustrated that your smartphone gets spotty service outside the city. Frequent software updates have made the hardware seem slower with each new release. The “good” navigation app isn’t available for your phone because the software company is embroiled in a years-long contract dispute with your service provider. In a fit of rage, you throw your outdated smartphone against a wall, spider-webbing the screen. In the second scenario, you are frustrated that the attendant seems uninterested and unhelpful. This is the third gas station you have visited for assistance, and each has left you equally unsuccessful in your request for directions. The attendant recommends you purchase the same overpriced, outdated map you bought at the last station, even after you explain that your destination is not on said map. In a fit of rage, you shove the unhelpful attendant into the gas station wall, opening a cut across his forehead.

Both imagined scenarios share important similarities: the primary stimulus, the emotional response, and the resultant action are identical. Yet the consequences of the two instances would be quite different. In the moments after your smartphone has fallen lamely to the floor, there would be no flashing blue lights, no metal clicking around your wrists, and no inquiries into your desire for or access to counsel. As a society, we do not hold ourselves accountable for unethical and immoral actions perpetrated upon inanimate objects.

Imagine, finally, one last scenario: a collection of circuitry. Imagine, just like the inner workings of your smartphone, all the hardware that carries out the functions of the governing software, and the power source that drives the actions. But imagine that all these components are housed within a device shaped identically to the gas station attendant. It looks like the attendant, feels like the attendant, and moves like the attendant. It is so identically representative that the blood that spills from the attendant’s head when it hits the wall looks and smells like your blood; the sound that bursts from the mechanical lips is the same sound you might make on the receiving end of blunt force trauma. Imagine that the attendant trembles at your feet the same way your dog once did in the face of perceived danger, or calls for help with tears in its eyes, the way true-crime shows reenact our most violent offenses.

What, then, do we make of the ethical implications of misdeeds enacted upon inanimate objects? Is an immoral action defined by the actor and the action, or by the acted upon? And how does this line blur when AI reaches a level of sophistication that complicates our most basic distinctions between animate and inanimate?

The goal of this essay is to explore how our ethical standards as humans might be impacted by the introduction of complex AI. Through a review of scholarly articles and scientific studies, this essay will address issues of AI form and representation, the potential impact of hardware and software variations, the social and workforce roles in which AI is allowed to engage, and the ways in which human responses to these issues shape how humans interact with each other. Along the way, references and examples will be drawn from the films Ex Machina and Her and the series Westworld in order to provide a variety of accessible illustrations of the matters discussed.

AI has the potential to take on many forms, which can be applied to many tasks. Looking broadly at some complications that may arise, we see that each specific physical AI characteristic created and implemented brings about its own unique ethical ramifications. For instance, if we consider the AI presented in the series Westworld, we might ask how a robot’s ability to register and react to pain will affect our willingness to inflict pain upon a robot, and whether there is a potential desensitization that transfers to our interactions with humans. Are humanoid robots that exist solely for the purpose of sexual gratification an extension of prostitution, and if so, are current laws and moral norms a sufficient framework for the perception and governance of this potential activity? In the film Ex Machina, multiple AI characters are treated as slaves, though many of them exhibit behavior that convincingly mimics, or genuinely displays, a desire for independence and freedom. Are there potential repercussions from the ability to enslave a convincing and complex AI? The AI in both of these examples possess demonstrable memories upon which personal identities have been established and refined experientially. In both examples, those memories and identities are altered and/or deleted at the whim of the human(s) in power. How do we as humans allow ourselves to interfere with an identity that is born of lived experience rather than programming? The AI presented in the film Her is disembodied, but has been created with the ability to feel emotion. It can feel love, but lacks the capacity for the physical connection it craves as a result of that love. Is it ethical to limit the natural extensions of the abilities and characteristics we choose to imbue an AI with, simply because doing so better serves our needs and concerns? Each variation in form and function elicits a new and nuanced interpretation of our accepted understanding of ethical behavior. To effectively navigate this burgeoning world, we must begin to address the possible ramifications of each individual pitfall, as well as their sum total.

In terms of the data on the link between AI form and human response, some significant studies have been conducted that shed light on the way the physical appearance and structure of AI shapes how humans perceive and interact with it. The fictional examples used in this essay present three potential representations of human-mimicking AI: the fully disembodied human-consciousness AI of Her; the humanoid but physically distinct “Ava” of Ex Machina; and the human-identical “hosts” of Westworld. Each physical representation has implications that can be derived from basic research on human-robot interaction.

As humans, our ability to empathize is often a necessary component of ethical interactions. In their study How Anthropomorphism Affects Empathy Toward Robots, the research team of Laurel D. Riek, Tal-Chen Rabinowitch, Bhismadev Chakrabarti, and Peter Robinson addressed how varying physical AI forms affect the degree to which humans are able to empathize with the AI. The study presented 120 test subjects with 30-second video clips of variously shaped AI (from primitive, Roomba-like robots, to androids, to fully humanoid robots) being subjected to cruelty by human actors. Each clip was followed by a questionnaire asking how sorry the subjects felt for the AI. The study concluded there was “strong support for our hypothesis that people are more empathetic toward human-like robots and less empathetic toward mechanical-looking robots” (Riek et al. 2).

The importance of the appearance of AI is echoed in Mark Coeckelbergh’s essay Humans, Animals, and Robots: A Phenomenological Approach to Human-Robot Relations. Here, though, Coeckelbergh writes, “What matters with regard to how we respond to robots is not what robots are but how they appear to us — regardless of the technical requirements that render this appearance possible and regardless of the ontological status ascribed to the robot” (Coeckelbergh 4). Coeckelbergh’s exploration of “appearance” extends beyond mere physical attributes and examines the ways in which representations can appear to those perceiving them. This type of “appearance” is more than the way an image meets our eyes. “What counts for understanding human-robot relations is not the relation the robot may have to the world, but their appearance to us, humans — that is, our relation to others and the world,” Coeckelbergh writes. “If this is so, then the ‘mood’ [for instance] of the robot (if it could have one at all), is only relevant in so far as it produces a certain appearance, an appearance which does or does not contribute to the development of an alterity relation” (Coeckelbergh 6). This interpretation of appearance concerns the human’s perception of reality rather than the physical, empirical makeup of that reality. On page 5, Coeckelbergh uses the example of Disney’s WALL-E. Even though WALL-E doesn’t physically resemble a human, the audience relates to the AI as if its emotions were human, because the audience perceives the display of those emotions in a way that appears human.

These two interpretations of AI appearance certainly complicate the pursuit of proper human-AI ethics. To garner the greatest level of empathy from humans, should AI be made to resemble humans, or need it only appear to have human characteristics? Blay Whitby addresses this matter in his essay Sometimes it’s hard to be a robot: A call for action on the ethics of abusing artificial agents, writing, “Deliberately avoiding any anthropomorphism in the appearance of a robot will not enable the designer to escape the ethical issues under discussion, if it is obvious that the robot is very human-like in its behaviour. There is good reason to expect that humans will respond to it as if it were a human, albeit a very different looking one” (Whitby, Section 2). If, upon viewing the actions of a non-humanoid AI, we are able to empathize with that AI due to our commonality of behavior, to a degree on par with that found in Riek’s empathy study, then this would suggest a significant shift in our current societal code of ethics. Returning to Coeckelbergh’s WALL-E example, if the citizenry empathizes with and relates sufficiently to the little metal robot, what do our transferable ethics say about his observable workload, living conditions, domestic union rights, and so on? Whitby’s assertion that behavior can carry weight comparable to physical appearance is a compelling philosophical argument, in that, as an extension (or perhaps regression) of that argument, the tension between physical appearance and behavior as primary qualities speaks to our own fundamental questions of human unity and otherness. If we can effectively empathize with robots that look like us, and we can effectively empathize with robots that behave like us, what line demarcates that which is enough like us to interact with ethically, and that which is other enough to treat as a mere object?

The answer to this question may lie, to some degree, in the roles AI fills in our society. Is AI a life-assisting OS, as portrayed in Her? Are AIs our housekeepers, our cooks, our sounding boards, like the robots of Ex Machina? Perhaps AIs are our escape from the mundanity of reality: the walking, talking actors or video game characters onto which we project a fanciful version of ourselves, like the hosts of Westworld.

Let’s take a case similar to that of Ex Machina’s servant-bot (for want of a better term), Kyoko, and consider the specific, hypothetical task of preparing dinner. In our present, everyday world, robots prepare our food on a regular basis (admittedly, these “robots” are an oversimplified example, though there are some fairly complex microwaves on the market). If the only tasks required are the heating and dispensing of food, we exhibit no uncanny-valley response to the shape and appearance of a Cuisinart. When that task is fulfilled by a far more complex system, our expectations and apprehensions suddenly become proportionally complex. If a microwave is malfunctioning, simply throwing it away is completely reasonable. If a human cook is “malfunctioning,” the human is sent to a doctor, is given a reasonable time to recover, and is likely welcomed back after returning to good health. If the malfunctioning entity in charge of heating the evening’s meal is made up of many of the same materials that compose the microwave, but has a memory that informs an identity and expresses emotions that humans understand and interact with, what is the ethical course of action? Is it still acceptable to take the malfunctioning AI to a landfill if the entity expresses that it doesn’t want to go? Is it acceptable to smack the side of the entity, as you might a finicky microwave, if that entity has haptic feedback sensors that register danger when overloaded by physical stimulus? In summary, at what point does a microwave cease to be perceived as a microwave, and what is our ethical responsibility thereafter?

The effects of complex AI on human ethics are certainly not unilateral; that is to say, the AI with whom humans are likely to interact can play a significant role in shaping that interaction. In their essay Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being, Jason Borenstein and Ron Arkin define the “nudging” process as “the tactic of subtly modifying behavior… [in which some system would] attempt to shape behavior without resorting to legal or regulatory means” (Borenstein and Arkin 2). Relative to AI, the idea presented by the authors is that robots would be programmed to influence their human counterparts in ways that elicit an empathetic response consistent with the ethical position the AI is promoting, coaxing or convincing the human to act according to that ethical view. To illustrate the nudging principle, Borenstein and Arkin rely heavily on the idea of robot as companion or caregiver, stating, “The intention behind companion robots is for them to have sophisticated interactions with their human counterparts over a prolonged stretch of time, potentially functioning as friends or caregivers. Of course, the success of such efforts hinges in large part on whether bonding effectively occurs during human-robot interaction (HRI). And the appropriateness of such interactions is partly contingent on whether it is ethically appropriate to deliberately design devices that can subtly or directly influence human behavior, a design goal which some, including Sparrow (2002), consider to be unethical” (Borenstein and Arkin 2). The prerequisites for nudging that Borenstein and Arkin describe here relate back to Riek’s and Coeckelbergh’s findings on how humans relate to and empathize with AI: in order to establish a bond, as Borenstein and Arkin refer to it, the human must view the AI in some way as “alike.” The presented film examples offer varying illustrations of how this nudging might be implemented.

In the film Her, Samantha (the AI character) often encourages Theodore (her human companion) to act in ways that will improve his psychological wellbeing. She nudges him to let go of his past by signing his divorce papers; she nudges him to begin a new phase of his life by getting out of the house and dating; she nudges him to better himself by reading new material and conversing about subjects outside his comfort zone. In Ex Machina, Ava (AI) nudges Caleb (human companion) to help her escape from her imprisonment. In Westworld, Bernard (AI) often grapples with Dr. Ford (human counterpart) over the ethical impact of their facility on both its AI hosts and its human guests.

Though these examples demonstrate some of the possible positive ethical ramifications of AI behavior manipulation, the potential ability of AI to shape our interactions brings with it a new layer of ethical complexity. Later in Sometimes it’s hard to be a robot, Whitby writes, “One further ethical problem lies in the possibility that robot companions could become so much more well-suited to their owners’ affective tendencies that humans would wish to spend more time with them and less in human society. After all why would one want to engage in the uncertain, risky, and difficult interactions of human society when it is possible to purchase an artificial companion that indulges one’s every foible without complaint or even complains only when you want it to?” (Whitby, Section 6). It is important to consider not only the effects of our actions on the entities upon whom we might enact unethical behavior, but also the ramifications those actions may have on ourselves. There are a few potential conditions in which this reflexive set of consequences could manifest. First, as Whitby suggests, designing AI that removes any check on our own bad behavior is problematic. In the passage above, Whitby illustrates the likelihood that, when given the choice between the messy, challenging interactions of real people and machines that cater to their every pleasure and deficiency, many people may simply choose to eliminate human-to-human interaction.

This also raises important questions about the potential relinquishing of human agency within a society. Societies evolve, progress, and refine themselves on the abrasive field of conflicting discourse. It is our debate, our protest, our contested elections that mold who we are as a society. If we eliminate the need or desire to communicate and participate in communal democracy, then who is left to chart the course of our civilization? If companion AI has assumed the role of confidant, caregiver, and medium of discourse, then it is natural to conclude that AI has the opportunity to assume agency over the direction of our decision making.

The blurring that may occur between human-human and human-AI interaction also has the potential to upset our existing hierarchy. If AI takes on the role of preferred interactor, does this leave humans in an inferior role within the paradigm? The stratification that already exists within our social structures has historically allowed minority groups at the bottom of the chain to endure the worst forms of oppression. If AI begins to be seen as part of our chain, in its infancy it is likely to be seen as occupying the bottom of that chain. In this capacity, many problematic behaviors have already been described throughout this essay (physical abuse, discarding of unwanted entities, discarding of undesirable identities, and so on). If this new fillable status exists at the bottom of our established hierarchy, and over time the AI ascends the ladder until it becomes the preferred social entity, it is plausible that the most oppressed human group in a given society will fall not to its original lowest status, but to the new lowest class originally occupied by the now-lionized AI, offering the world a newly justifiable and occupiable subhuman status.

There can be no doubt that sophisticated AI brings with it a collection of complications we as a society have not yet begun to prepare for. After roughly 5,000 years of recorded history, we still haven’t worked out the kinks of human-human interaction. How can we begin to establish a code of ethics for a nebulous and evolving technology whose details we cannot yet comprehend? Must we understand the nature of an entity before we understand our ethical responsibilities toward it? It all brings us back to the original question: is our communal morality defined by the actor and action, or by the acted upon? If our code of ethics is most influenced by the recipient of our actions, then the chaotic nature of the rise of AI will surely cause equal chaos within our understanding of our structural values. If our ethical standards are instead focused on our own actions, divorced from recipient classification and stratification, perhaps we will be able to smoothly navigate this fundamental transition in our society.

Works Cited

Briggs, Gordon, and Matthias Scheutz. “How Robots Can Affect Human Behavior: Investigating the Effects of Robotic Displays of Protest and Distress.” International Journal of Social Robotics, vol. 6, 2014, p. 343.

Borenstein, Jason, and Ron Arkin. “Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.” Science and Engineering Ethics, vol. 22, no. 1, Apr. 2015, pp. 31–46, doi:10.1007/s11948-015-9636-2.

Coeckelbergh, Mark. “Humans, Animals, and Robots: A Phenomenological Approach to Human-Robot Relations.” International Journal of Social Robotics, 2010, doi:10.1007/s12369-010-0075-6.

Melson, Gail F., et al. “Robots as Dogs?” CHI ’05 Extended Abstracts on Human Factors in Computing Systems, 2005, doi:10.1145/1056808.1056988.

Pagallo, Ugo. “The Human Master with a Modern Slave? Some Remarks on Robotics, Ethics, and the Law.” Proceedings of ETHICOMP 2010: The Backwards, Forwards and Sideways Changes of ICT, Apr. 2010, pp. 397–404.

Riek, Laurel D., and Don Howard. “A Code of Ethics for the Human-Robot Interaction Profession.” Proceedings of We Robot, 4 Apr. 2014.

Riek, Laurel D., et al. “How Anthropomorphism Affects Empathy toward Robots.” Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery, 9 Mar. 2009.

Sullins, John P. “Robots, Love, and Sex: The Ethics of Building a Love Machine.” IEEE Transactions on Affective Computing, vol. 3, no. 4, 2012, pp. 398–409, doi:10.1109/t-affc.2012.31.

Whitby, Blay. “Sometimes It’s Hard to Be a Robot: A Call for Action on the Ethics of Abusing Artificial Agents.” Interacting with Computers, vol. 20, no. 3, 2008, pp. 326–333, doi:10.1016/j.intcom.2008.02.002.

Ex Machina. Dir. Alex Garland. Perf. Alicia Vikander, Domhnall Gleeson, Oscar Isaac. Universal Pictures International, 2014. Film.

Her. Dir. Spike Jonze. Perf. Joaquin Phoenix, Amy Adams, Scarlett Johansson. Annapurna Pictures, 2013. Film.

Westworld, Season 1. HBO, 2016. Television.
