Bad Code/Bad Robot: Why Ultron Makes No Sense

This contains spoilers for Avengers: Age of Ultron, as well as WarGames (1983), I, Robot, and The Terminator. If you aren’t looking for that, please look away now.

Robo-captain? Do you not realize
That by destroying the human race
Because of their destructive tendencies
We too have become like…
Well, it’s ironic.
— “Robots,” Flight of the Conchords

Ever since Ned Ludd destroyed two stocking frames in 1779, there’s been a healthy fear of machines. Science fiction has capitalized on that fear over the years through machine-on-human violence in all of its forms. Skynet sent back an armored bodybuilder to destroy humanity’s savior, the machines in The Matrix sought to trap us in the late ’90s forever, and robots from other worlds have been sent to decide our worth.

Perhaps one of the more interesting variations is what TV Tropes has called “the Zeroth Law Rebellion.” Named after Isaac Asimov’s Three Laws of Robotics, a fictional safety protocol, a Zeroth Law Rebellion occurs when a computer takes the idea of “protection” in a darker direction. It’s logical computer decision-making at its most final.

This specific trope comes into play in the newest installment of the Marvel Cinematic Universe: Avengers: Age of Ultron.


Ultron Doesn’t Know the Difference Between Saving the World and Destroying It

The titular character in Age of Ultron is a sentient machine. Shortly after gaining life, Ultron decides that the Avengers can’t be trusted to live, that their existence is a harm to humanity. He says:

Worthy? How could you be worthy? You’re all killers. You want to protect the world, but you don’t want it to change. There’s only one path to peace…the Avengers’ extinction.

This is an interesting take on the classic trope, especially because with the Avengers it makes sense. There is a great scene halfway through the movie where the Hulk smashes his way through an African city, and we’re reminded that he is incredibly dangerous. Undoubtedly people died, innocent folks who were just walking downtown to get some lunch. So when Ultron says that the Avengers are killers — he’s right.

This makes it disappointing when Ultron decides that he doesn’t just want to destroy the Avengers; he’s going to destroy all of mankind with a city-sized asteroid.

Ultron’s Extinction-Event-Sized Change of Heart

Sokovia: the city-sized asteroid of righteous justice.

There are a couple of reasons this turn is disappointing.

The first is that it’s a tired trope, long since worn out: an evil machine decides that humanity and all of its warmongering will lead to its eventual destruction. (Though one of the movie’s best lines is when Vision tells Ultron that humanity will eventually combust, and that that is part of our charm.)

The second reason is that it’s bad logic.

You’re making a mistake. My logic is undeniable. — VIKI, I, Robot

In the Will Smith/Converse vehicle I, Robot, we are treated to an artificial intelligence known as VIKI, who decides that we must reach utopia by any means, which leads her to lower the world’s population and care for the survivors like protected children. It’s a terrifying nanny state powered by a “well-meaning” artificial intelligence. It’s a Zeroth Law Rebellion that makes a tiny bit more sense than the one in Age of Ultron.

By this point in the movie, Ultron has established himself as a powerful and formidable foe, smart and possessing a great deal of knowledge. He wants his humanity to grow — he builds a human body and tries to connect to the twins’ violent and flawed past. It’s one of the stronger moments in Age of Ultron. Despite his desire to destroy the heroes of the movie, we sympathize with Ultron. We like him.

And then he decides that humanity is the problem, not just the Avengers, and that we all deserve to die. The nonsensical train has rolled into the station, and it’s full of Ultron soldiers.

What’s the Problem?

Where’s the logic to Ultron’s actions?

I mean this both in terms of real logic and in terms of sentient logic. In this trope there are two main reasons that machines decide humanity can only be saved via destruction: poor logical decision-making and self-preservation.

One of the best examples in film of “destruction by machine” happens in the classic ’80s film WarGames. A young computer hacker accidentally infiltrates a military computer called WOPR (also known as Joshua) and plays a few games with it before choosing “Global Thermonuclear War.”

This is the perfect example of computer logic — WOPR is simultaneously smart and incredibly dumb. It is able to learn from its decisions over the course of the games it plays with the hacker, but it also fails to recognize that its “games” are actually real conflict with devastating consequences. In this way, WOPR almost starts an actual global thermonuclear war, averted in the end only when it discovers that the only winning move is not to play.

The disconnect for WOPR, and for I, Robot’s VIKI, is code, not malice. WOPR doesn’t understand the morality of human life, doesn’t even know that humanity is anything beyond a game. VIKI has taken her Three Laws of Robotics and come to the conclusion that in order for humanity to survive, she must destroy a small portion of it. She’s culling the diseased part of the herd. It’s horrifying, because we understand the implications, but VIKI is responding within her programming.

This is in line with computer programming as a whole. If you’ve ever tried to code, you quickly realize that computers will always follow your code to a fault (even if that route is program-breaking or nonsensical to you). Computers are incredibly “smart” in many ways — they can analyze voice data, turn it into a query, and then respond with the correct answer (Siri). But they’re also “dumb,” in that they can’t tell a person from a traffic cone.

So when given a list of specific parameters to follow, the computer follows them to their direct and specific conclusion. The error is human — the program is simply responding with basic, cold logic.
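
To make that concrete, here’s a minimal, purely hypothetical sketch in Python (the names and numbers are invented, not taken from any of these films) of a “minimize harm to humans” directive followed to the letter. Nothing in the rules says that removing a human is itself a harm, so the literal optimum is an empty world:

```python
# A toy sketch of literal rule-following. All names and numbers are invented.
# The directive "minimize harm to humans" is scored only by harm prevented;
# the rules never say that removing a human is itself a harm.

population = [
    {"name": "civilian",    "predicted_harm": 1},
    {"name": "soldier",     "predicted_harm": 20},
    {"name": "arms_dealer", "predicted_harm": 50},
]

def total_predicted_harm(people):
    """Sum the harm each remaining person is predicted to cause."""
    return sum(p["predicted_harm"] for p in people)

def enforce_directive(people):
    """Follow the rule literally: keep only people who cause zero harm.
    Every human scores above zero, so the 'optimal' world is an empty one."""
    return [p for p in people if p["predicted_harm"] == 0]

print(total_predicted_harm(population))  # 71
print(enforce_directive(population))     # []: harm minimized, no one left
```

That’s the WOPR/VIKI failure mode in miniature: the program did exactly what it was told, and the fault lies with the humans who wrote the objective.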

But Ultron shows decision-making that is not hampered by code and robotics laws. So he’s not making the decision to end humanity on a cold, logical basis. It’s emotional, it’s malice. Which leads us to the other form of the “destructive machine” trope.

That’s when a computer reacts in self-defense.

Take Skynet, the ultimate example of the machine that has turned against its masters. In Terminator 2, Arnold’s “good” Terminator explains to Sarah Connor the origin of Skynet and why it turned against its makers. After Skynet achieved sentience, the humans panicked:

“In a panic, they try to pull the plug.”
“Skynet fights back.”
“Yes.” — Terminator 2

This is a good reason — a human reason for why Skynet (a sentient AI) has decided to eradicate the human race. It’s self-preservation, first and foremost. It’s emotional, driven by panic and survival.

While different iterations of the Terminator series have given Skynet different justifications and motivations for its destructive tendencies, this is perhaps the most clear and understandable reasoning. Skynet just wants to live. Destroying humanity is a byproduct of its own survival (much like HAL 9000 in 2001: A Space Odyssey).

The machines in The Matrix are driven by a similar desire — after humanity destroys their power source, they decide to turn humanity into batteries.

That still doesn’t carry malice in a direct way; it’s survival.

That element of survival adds an extra layer of depth to these stories. These actions are not the cold mechanical decisions of their predecessors, but rather the choices of sentient beings desiring life. It makes them more nuanced villains in many ways, even if you don’t side with them (siding with Skynet is never really offered as an option; it tries to kill a pregnant woman and a child, which is film code for pure evil).

But Ultron isn’t acting out of survival. Even though the Avengers decide to end Ultron and wipe him off the map, he isn’t motivated by fear. For most of the movie, he barely even seems to register that they might be a threat. And why would he be afraid? With the Internet and interconnectivity, Ultron is virtually unstoppable. It is only when faced with his own mortality, the final Ultron body standing against Vision, that he even seems afraid.

So he isn’t motivated by survival.

Age of Confusion

If it isn’t code or survival, then what is Ultron motivated by? Probably either malice or madness, and neither is as compelling as the other examples of this trope.

At one point, Vision refers to Ultron as being broken, like a piece of malfunctioning code. But there isn’t much to back that up. You could argue that suddenly deciding to drop an asteroid on the planet is a “malfunction,” but it reads like a poorly designed character rather than a legitimate decision. It feels like “oh, now it’s time to be a Disney villain.”

Why? Because if Ultron is as intelligent as he’s presented, he would instantly recognize that “kill humanity in order to save humanity” is a fallacy. It’s a clear fallacy. You can’t protect what’s dead.

When it comes down to it, Ultron doesn’t behave in a logical way, either as a sentient being or as a computer program.