Hollywood has depicted AI doom for decades. Now it’s time to see if the movies were right

By Dante Estrada and Samuel Valencia

PCC Spotlight · 6 min read · Dec 13, 2023


Dante Estrada / Courier Illustration made from Creative Commons and Wikimedia Commons assets

Press play on Disney+ to tune into Marvel's "Secret Invasion," and the opening credits greet you with a warped rendering of what is supposed to be Samuel L. Jackson, oddly fake-looking faces, and other smears of green imagery. Marvel Studios used AI art tools rather than hiring actual artists to turn around an opening credits sequence that will surely be remembered for years to come.

Across all forms of media, including television, movies, and most recently video games, there has been a heavy emphasis not just on dystopian futures but on how AI has a hand in creating them. Despite a certain gloom about how AI may affect our future, there is also a healthy mix of stories that try to make sense of AI and how it may actually benefit humanity.

One of the earliest examples of a dystopian future caused by AI is "Metropolis," a 1927 German expressionist film. While the film is silent, telling its story through intertitles rather than spoken dialogue, its plot is fairly easy to follow and gives a pretty good idea of what would come in the genre. The film takes place in the nearly modern year of 2025 and involves an AI that takes the form of a humanoid and throws the city into chaos in a bid for control.

A new concept at the time, it would heavily influence films for decades to come, blending a clear sense of fear with a healthy dose of speculation about what the future would become. The truth is, we are closer to that future than the filmmakers may have envisioned.

Enter Stanley Kubrick's seminal 1968 work "2001: A Space Odyssey" and its sentient AI, HAL 9000, which attempts to murder the crew of the spaceship it controls. Kubrick was known as a trendsetter of his time and understood how technology could take over. While real technology was nowhere near as advanced as what he portrayed, the movie showed a clear distrust and fear of what was to come.

While no murder-oriented robots have taken over yet, there have been considerable improvements in what software can learn to do on its own. In 2019, computer scientists from several universities developed software that could beat Google's CAPTCHA system. Such efforts had become so common that researchers titled one paper on the subject "Yet another text CAPTCHA solver." One approach relied on "reinforcement learning," a technique in which a program learns by trial and error, earning rewards for completing a task in the most effective way. Although far from the murderous thoughts of a sentient computer, technology has shown it can adapt when given the tools to do so.
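For readers unfamiliar with the term, here is a minimal sketch of what reinforcement learning looks like in code. It is a toy example, not the CAPTCHA-solving software described above: a small program learns, purely through rewards, to walk down a short corridor toward a goal.

```python
# Toy reinforcement learning (tabular Q-learning) on a 5-square corridor.
# The agent starts at square 0 and earns a reward only when it reaches square 4.
import random

N_STATES = 5          # positions 0..4; position 4 is the rewarding goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's running estimate of how good each action is in each state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(state):
    # Epsilon-greedy: usually pick the best-known action, occasionally explore
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = choose_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward the reward plus the discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy steps right at every square: [1, 1, 1, 1]
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

No one tells the program that "go right" is the answer; it simply tries actions, collects rewards, and adjusts, which is the same basic loop, at vastly larger scale, behind systems that learn to defeat CAPTCHAs or play games.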

In the 1980s, James Cameron's "Terminator" franchise became popular for its titular antagonists, highly advanced robots bent on destroying all human life. The series centers on Skynet, an artificially intelligent defense system that targets humanity and aims to wipe it out, commanding an army of cold, lethal, nearly flawless robotic soldiers that have replaced human ones. Fortunately, it is currently impossible for AI to develop its own desire to end humanity. However, many police forces around the world have adopted robots to do additional police work. The NYPD recently reinstated the use of a robotic patrol dog to survey the streets, despite questions about how effective these robotic dogs really are. Samsung built armed sentry robots for South Korea, equipped with motion detectors and machine guns, though they do not yet operate on their own. Such police robots can handle only a fraction of the work regular officers do. Even with those limits, officials such as the San Francisco Board of Supervisors have advocated granting police robots the ability to kill if necessary.

As The Washington Post reported in 2022, San Francisco authorities approved letting police robots kill when necessary to save the lives of others. The precedent is grim: in Dallas in 2016, police officers unable to apprehend a shooter strapped an explosive to a bomb-disposal robot, drove it up to him, and detonated it, killing him. If police robots continue to be given the ability to kill, who is to stop them from harming innocent lives?

Despite being a bit more subtle than other interpretations on the market, 2008's "WALL-E" handles the idea of humanity becoming so reliant on the technology it created that the technology overtakes and dumbs people down. In the film, after our own actions destroy all plant life on Earth, humans jet off into space and are pacified by the AI aboard their ship.

AI has become something of a crutch in recent years. Although this year has seen a heavy emphasis on AI art and other generated content, it's hard to say this is a new problem that just popped up. A Pew Research Center report delved into how AI might seep its way into our creative work and attempt to replace our own efforts.

The center canvassed experts and found that "63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off." That outlook may come across as fairly optimistic, but the picture gets murkier once you consider that what those experts count as "advanced" or "better off" isn't necessarily how it looks to us in the present.

In recent years, video games have also delved into the topic of sentient AI and, more specifically, whether computers can feel things. A fairly recent example is "The Turing Test," developed by Bulkhead Interactive. The game's protagonist, Ava, grapples with the famous test against an AI adversary, T.O.M., a highly advanced supercomputer that runs her through puzzles in the vein of Valve's "Portal" franchise. The Turing Test itself, proposed by Alan Turing in 1950, asks whether a machine can hold a conversation so convincingly that a human judge cannot reliably tell it apart from a real person.
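For the curious, here is a rough, non-interactive sketch of how the test's "imitation game" is structured. Every function in it is a hypothetical stand-in: a real test would put a human judge in a live text conversation with a hidden human and a hidden chatbot.

```python
# A toy sketch of the Turing test's imitation game. The replies and the judge
# below are placeholders, included only to show the structure of the test.
import random

def human_reply(question: str) -> str:
    # Stand-in for a real person's answer
    return "I'd rather talk about the weather, honestly."

def machine_reply(question: str) -> str:
    # Stand-in for a chatbot's answer
    return "I'd rather talk about the weather, honestly."

def judge_guess(transcripts: dict) -> str:
    # Stand-in judge: with indistinguishable answers, the guess is a coin flip
    return random.choice(list(transcripts))

def machine_escapes_detection(questions: list) -> bool:
    # Hide the human and the machine behind anonymous labels A and B
    players = {"A": human_reply, "B": machine_reply}
    transcripts = {label: [reply(q) for q in questions]
                   for label, reply in players.items()}
    guess = judge_guess(transcripts)
    # The machine "wins" the round if the judge fails to single it out
    return players[guess] is not machine_reply

if __name__ == "__main__":
    rounds = 1000
    wins = sum(machine_escapes_detection(["What did you dream about last night?"])
               for _ in range(rounds))
    print(f"Machine escaped detection in {wins} of {rounds} rounds "
          f"(pure chance would be about {rounds // 2}).")
```

The interesting part of a real test is, of course, the conversation itself; the point of the sketch is only that "passing" is defined statistically, as fooling judges at a rate no better than guessing.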

Whether a computer has truly "passed" the Turing Test is debatable given how many variables are involved, but a few programs in the past ten years have claimed to. The best known is a chatbot called Eugene Goostman, which posed as a 13-year-old boy and competed at an event held at the University of Reading in June 2014. Two years earlier it had convinced 29% of judges it was human; at the 2014 event it convinced 33% of them, enough for the organizers to declare the test passed, though many researchers dispute that claim. There are also variations such as the Marcus Test, which gauges an AI by asking it to follow the plots of TV shows and movies. That is a far taller order, because understanding those stories requires a much deeper grasp of political and cultural context than simply holding a conversation.

While real AI keeps evolving toward what once seemed merely possible, there are plenty of places where films portray it inaccurately. The most common and uninspired trope is the lone man building an AI by himself. "Chappie" and "Ex Machina" are both guilty of it, each featuring a single genius who creates an AI on his own. As an article in Science on how accurately movies depict artificial intelligence pointed out, experts find it deeply flawed to show one man writing an AI program that, in reality, would require an entire team of engineers to develop.

Another common trope is the robot that goes rogue and turns evil on its own. "Avengers: Age of Ultron" and "I, Robot" both feature machines originally built to protect humanity that reprogram themselves around their own idea of "saving" it. Ultron and VIKI, the central AI of "I, Robot," share a similar objective of securing peace for mankind. Machines like these are typically imagined as bound by Isaac Asimov's Three Laws of Robotics, the first of which forbids a robot from ever harming a human being. It is not up to AI to decide what the best path to world peace looks like, and it is also effectively impossible for today's AI to reprogram itself out of anything resembling free will.

Perhaps Hollywood has seen the future clearly and really has predicted the end of humanity at the hands of the very machines we created. It's more likely, though, that these stories are the exceptions, not the rule. Looking at where the technology stands now, there seems little chance that our computers will come to life and control every facet of our lives, or that evil death robots will attempt to wipe out humanity. But who knows? The future is uncertain, after all.
