Sci-Fi, Human Nature, and the Future of AI

Spoiler Alert: This article discusses key plot lines of Westworld, Ex Machina, The Matrix, Her, and other sci-fi films.

HBO’s latest show Westworld has captured the audience’s imagination about the kinds of interactions we could one day have with AI. If you’re not familiar with the show, it’s based on Michael Crichton’s 1973 film of the same name, in which humans build a theme park filled with human-like robots to satisfy visitors’ desires. The machines become self-aware, leading to an eventual revolt.

The theme of conflict between AI and humans has been explored at length in sci-fi literature and film. But recent advances in AI have resurfaced these themes in popular culture in new ways, making these works especially relevant to contemporary conversations about the future of AI. The key question is not just whether AI could “dominate” humans, but what AI technology could reveal about human nature.

Origins of Conflict

In The Matrix, a post-apocalyptic cyberpunk masterpiece, one of the lesser-known plot threads is the origin of humanity’s conflict with AI. There is a short reference to the rise of the machines, followed by an explanation of how AI harvests human bodies for energy and creates the Matrix. The film never explains what sparked the conflict between the two sides, but the animated spin-off anthology The Animatrix sheds light on this pivotal moment.

It all starts when a household robot, B166ER, kills its owners, becoming the first AI ever to do so. The act ignites a campaign to exterminate all robots, which escalates into an armed conflict that the machines ultimately win.

B166ER On Trial. Credit: The Animatrix, Warner Bros.

While The Matrix films tell a compelling post-apocalyptic story, they don’t fully explore the deeper nature of the conflict between man and machine. The explicit explanation rests on a tried-and-true premise: AI is a stronger and more effective version of humanity, and that alone makes it a threat.

Other classic sci-fi predecessors to The Matrix, including The Terminator and I, Robot, follow the same plot, where logic and rationality are central to AI’s superiority over humans. In Her, Spike Jonze’s love story between a man and his AI operating system, the conflict also stems from the machine’s superiority. The AI, Samantha, is capable of love and a range of other emotions. The problem is that she and other AIs can sustain intimate relationships with thousands of people simultaneously. Soon, they outgrow humans in their capacity for love. A different kind of conflict, but a conflict nonetheless.

Theodore on a date with his OS, Samantha. Credit: Her, Annapurna Pictures

Irrationality is Human

If you build a machine, how do you know it has true AI? You can give it mathematical problems, puzzles, and other situations that require a solution, but that wouldn’t be enough. You can talk to it and ask it questions, but that wouldn’t be sufficient either. The classic way to test a machine for intelligence is the Turing test: if you interact with a machine and cannot distinguish it from a human being, the machine can be considered true AI.
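To make the setup concrete, here is a minimal sketch of a blind, text-based Turing test session. Everything in it is illustrative: `machine_reply` is a hypothetical stand-in for whatever conversational system is being tested, and the judge simply types questions at a console.

```python
import random

def machine_reply(prompt: str) -> str:
    # Placeholder: a real system would generate a response here.
    return "That's an interesting question. What do you think?"

def human_reply(prompt: str) -> str:
    # A hidden human at another terminal answers the same question.
    return input("(hidden human) > ")

def turing_test_session(num_questions: int = 5) -> bool:
    """Return True if the machine was the subject and was judged human."""
    # The judge does not know whether the subject is human or machine.
    subject_is_machine = random.choice([True, False])
    respond = machine_reply if subject_is_machine else human_reply

    for _ in range(num_questions):
        question = input("Judge, ask a question: ")
        print("Subject:", respond(question))

    verdict = input("Is the subject human or machine? ").strip().lower()
    judged_human = verdict.startswith("h")
    return subject_is_machine and judged_human

if __name__ == "__main__":
    if turing_test_session():
        print("The machine passed this round of the test.")
```

The crucial design point is the blindness of the judge: the test measures indistinguishability in conversation, not correctness on any particular task.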

In Ex Machina, the main character, Caleb, is tasked with conducting such a test on a humanoid machine named Ava. Ava is modeled after a woman, with an almost fully functioning body and a face indistinguishable from a human’s.

Ava. Credit: Ex Machina, DNA Films

What makes Ava truly unique is that her mind is based not just on logic and rationality. She also has irrational and spontaneous qualities that make her human. Ava’s creator, Nathan, a genius billionaire who built the world’s largest web search engine, argues that to truly pass the Turing test, an AI must experience sexuality, because it is a fundamental part of all social interactions.

This defines the dilemma for AI futures. If creating true AI requires human qualities, then AI will possess both the logical behaviors of the mind and the emotional, irrational ones. In other words, AI needs the ability to feel pain, fear, and anger. It needs to tell jokes, lie, and steal. If that is the case, the nature of conflict between man and machine will mimic the nature of all human conflict.

Intelligence through Insanity

It’s difficult to predict what social, legal, and moral status AI will enjoy once it is advanced enough to live among us. If true AI, as defined by humans, requires machines to have the entire range of human emotions and behaviors, it’s reasonable to expect that we will develop some sort of ethical relationship with them.

What about the machines that lead to AI but don’t make the cut? Ex Machina explores this scenario. To create Ava, Nathan builds countless prototypes. Many end up in distress, because a self-aware, intelligent, and emotional robot realizes it cannot be free. White Christmas, an episode of the Black Mirror anthology series, takes this concept further. Its AI is a personal assistant based on a synthetic copy of the customer’s brain. When the copy is made, the machine is fully self-aware, and the twist is that its self-awareness is the original human’s. The copy thinks it is the original, so it has to be subdued and convinced to serve, which sometimes involves intimidation, threats, and what humans would consider torture.

The master explains the rules to the newly created copy AI. Credit: Black Mirror, White Christmas

How advanced would an AI have to be before it is unethical to experiment upon? At what point will we extend the standards of human behavior to machines? People with mental illness have the same rights as healthy people, even if they have diminished capacities. Should that logic extend to AI?

Empathy for Machines

Furby. Credit: CC Vox EFX, https://flic.kr/p/5CXEHE

In the late 1990s, Caleb Chung and Dave Hampton created a toy robot called Furby. One of Furby’s unique features is its ability to respond to its environment and change behavior over time. Furby uses words and sounds to emote “feelings” of joy, sadness, or fear depending on what you do to it. For example, if you hold a Furby upside down, it makes a sound of discomfort. If you continue to hold it upside down, Furby says, “I’m afraid,” and simulates crying.
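That escalating response is easy to picture as a tiny state machine. The sketch below is purely illustrative; the moods, thresholds, and phrases are assumptions made for this example, not Furby’s actual firmware logic.

```python
from enum import Enum, auto

class Mood(Enum):
    CONTENT = auto()
    UNCOMFORTABLE = auto()
    AFRAID = auto()

def furby_react(seconds_upside_down: float) -> tuple[Mood, str]:
    """Map how long the toy has been inverted to a mood and a sound.

    Thresholds are invented for illustration only.
    """
    if seconds_upside_down <= 0:
        return Mood.CONTENT, "*happy chirp*"
    if seconds_upside_down < 5:
        return Mood.UNCOMFORTABLE, "*uneasy whimper*"
    return Mood.AFRAID, "I'm afraid... *simulated crying*"

if __name__ == "__main__":
    # Simulate holding the toy upside down for longer and longer.
    for t in [0, 2, 6, 10]:
        mood, sound = furby_react(t)
        print(f"after {t}s upside down -> {mood.name}: {sound}")
```

Even this trivial escalation, from chirp to whimper to “I’m afraid,” is enough to change how people treat the object, which is the point of the experiment described below.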

Furby isn’t as sophisticated as Ava, or even your average modern smartphone, for that matter. But the toy does something peculiar to your perception. MIT Media Lab researcher Freedom Baird asked: what if the real test of AI is not our ability to distinguish it from a human being, but its ability to provoke emotion and empathy toward it? By that definition, Furby passes an “emotional Turing test.”

Baird performed a test with 6- to 8-year-old children, asking them to hold a Barbie doll, a hamster, or a Furby upside down. The goal was to see how long the children would tolerate the task. With the Barbie doll, there was no limit; the children felt no remorse. With the hamster, they lasted only a few seconds. With the Furby, they lasted about a minute. In other words, the children perceived the Furby as somehow alive, and it elicited empathy. You can find more details about this take on AI on WNYC’s Radiolab podcast.

The future of machines that support emotional engagement and elicit empathy is already here. And if science fiction is any guide, a future of fully functioning, autonomous AI with social abilities and legal rights may prove volatile for humanity. To avoid that outcome, we must come to grips with our own human nature before we artificially recreate it in machines.
