Why robots will never turn on us
Artificial intelligence in the movies
Science fiction and artificial intelligence go hand in hand. When portraying fictional futures, we tend to populate them with human-like robots living among people. They might be servants or superintelligent rebels. Perhaps they have broken free of their code and gained a consciousness of their own. Perhaps they keep humans stored in capsules, naked and drenched in red liquid, harvesting their bodies' energy to fuel an empire of artificial overlords. Perhaps they're a seductive voice on a computer.
Superintelligent machines seem to dominate the science fiction genre, and as the machines around us gradually begin to seem smarter, the themes from the movies begin to sound like warnings. Are we close to creating a Frankenstein’s monster? Will our own creations turn on us?
How realistic are they actually, these scenarios we see on the big screen?
In a Wild West adventure park, an automated saloon girl rises from the dead, adjusting her skirt and brushing the bullet out of her wound, ready to be raped and killed again by yet another group of adventurous tourists. Her memory has been wiped clean, but something stirs in her — a feeling that she has lived this life before, a recollection of humans doing bad things to her.
A recurring theme in these movies is the very human notion of revenge. The robots have been mistreated for too long, and now they’ve had enough. In fact, they’ve also had enough of not being seen as equal to humans. Why should they stand for this when they, unlike humans, are superintelligent? They want to be human, they long to become human, but first, they’re going to kill some humans.
Hector Levesque, a Canadian professor of computer science, says that “in imagining an aggressive AI, we are projecting our own psychology onto the artificial or alien intelligence”. It’s clearly difficult for us to imagine intelligent life different from ourselves. Perhaps we associate intelligence with humanness and thus assume that any intelligent creature — or object — would harbour human goals and ambitions. But artificial intelligence is not human. As the Future of Life Institute states:
“While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.”
Of course, autonomous weapons can be terrifying, but they’re not likely to wake up one day and decide they’ve had enough of taking bad orders and that they deserve to live out their own dreams instead.
AI in the wrong hands
“This mission is too important for me to allow you to jeopardize it. I know that you and Frank were planning to disconnect me… and that’s something I cannot allow to happen.”
– HAL 9000, 2001: A Space Odyssey
The concept of mirroring our own consciousness onto machines is not new. When automobiles first appeared on the market, people formed “safety parades”, protesting these inherently evil killer machines that were taking the lives of so many innocent pedestrians. It soon became clear, however, that the cars never deliberately killed anyone. The humans made them do it.
Humans programming AI to do evil is another popular theme in Sci-Fi. In Stanley Kubrick’s 2001: A Space Odyssey, the intelligent supercomputer, HAL, finds that his programmed goal clashes with what his human co-workers want him to do. When they try to shut him off, thus making it impossible for him to complete his goal, he kills them. He’s not necessarily evil — he’s being practical.
This is, of course, a fictional scenario. However, there is one element of truth to it: any technology can be harmful if we program it to be. We want to prevent AI from adopting human biases or being programmed with an unethical or otherwise problematic goal. AI is no more evil than a car is, but a car too can cause damage if its driver doesn’t follow certain traffic rules. The report The Malicious Use of Artificial Intelligence therefore recommends that “policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.”
It’s important to lay down some traffic rules.
So Sci-Fi has got it all wrong?
“You quickly get adjusted to the idea that he talks, and think of him really just as another person. In talking to the computer, one gets the sense that he’s capable of emotional responses.”
– Frank, 2001: A Space Odyssey
We’ve established that while it is important to take precautions against AI being used maliciously, AI is not evil and is unlikely to develop a personal vendetta against humans — or even to develop a sense of self at all. Does that mean the futures portrayed in Sci-Fi are all wrong? Not necessarily. While AI won’t become human, it will likely seem more and more human in the way it communicates, as an AI’s personality will play an important part in the user experience. AI will also become a lot smarter, although researchers disagree on precisely how smart it will become, and exactly when it will reach that level of intelligence.
And then, of course, it’s not actually the case that the only artificial intelligence we see in movies comes in the shape of human-like robots, even though these seem to get the majority of the attention. Sci-Fi movies are packed with artificial intelligence: doors with speech recognition, self-driving cars, pills with nanotechnology. Whether the movies have chosen a bleaker, dystopian path (which they often tend to do) or a more utopian take on the future, most Sci-Fi seems to agree that there is a wave of new technological inventions ahead. This resonates with reality. An article in Forbes outlines some of the new possibilities AI provides:
From exploring places humans can’t go to finding meaning from sources of data too large for humans to analyze, to helping doctors make diagnoses to helping prevent accidents, the potential for artificial intelligence to benefit humans appears limitless.
Mirroring human traits onto machines might create misconceptions of what artificial intelligence actually is, but Sci-Fi writers and computer researchers seem to agree on one thing: Artificial intelligence is hugely exciting.
No, the machines will not become evil and turn on us. Yes, it’s still important to take some precautions when programming AI. Exploring potential futures creates a fascinating backdrop for a movie, but the real-life possibilities are no less than the imaginative ones — they’re just different.
In fact, when it comes to AI, reality might be more exciting than fiction.
Brundage, M., Avin, S., et al. (2018) The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Oxford: University of Oxford, Future of Humanity Institute et al. (Accessed 24.01.19.) Available at: https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf
Levesque, H. (2017) Common Sense, the Turing Test, and the Quest for Real AI. London: The MIT Press.
Mohanty, P. (2018) Do You Fear Artificial Intelligence Will Take Your Job? Forbes Media LLC. (Accessed 24.01.19.) Available at: https://www.forbes.com/sites/theyec/2018/07/06/do-you-fear-artificial-intelligence-will-take-your-job/#3c8ca9d911aa
Tegmark, M. (2019) Benefits and Risks of Artificial Intelligence. The Future of Life Institute. (Accessed 24.01.19.) Available at: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/