Terminator Is a Better Analogy for AI Risk Than You Think

It neatly captures the Orthogonality Thesis

Hein de Haan
How to Build an ASI

--

[Spoilers ahead: for the rest of this post, I’ll assume the reader is familiar with the Terminator movies.]


At this point in Terminator 3: Rise of the Machines, the Terminator has saved both John Connor and Katherine Brewster multiple times.

Katherine Brewster: “Well, how does he die?”

Terminator: “John Connor was terminated on July 4, 2032. I was selected for the emotional attachment he felt towards my model number due to his boyhood experiences. This aided in my infiltration.”

John Connor: “What? What are you saying?!”

Terminator: “I killed you.”

Many intellectuals have scoffed at the idea of using the Terminator as an analogy for actual existential risk from AI. I think this is fair as far as it goes: if a future Artificial Superintelligence (ASI) decides to get rid of humans (either directly or as a side effect), slow, humanoid robots don’t seem like the most efficient way to do so. (A superior intelligence also wouldn’t lose the “fight”.)

Of course, these are movies we’re talking about, and for a good story, humanity needs a chance to fight back against the bad ASI. But I think that Terminator 3: Rise of the Machines, especially, neatly captures the Orthogonality Thesis.

