Let’s talk about evil killer robots.

Just last weekend, billionaire entrepreneur Elon Musk, founder of SpaceX and CEO of Tesla, had a discussion about the future of technology at the National Governors Association Summer Meeting in Rhode Island.

Particularly striking, but perhaps not surprising or new, was his comment on artificial intelligence, which quickly became the headline and highlight of the interview for many media outlets:

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

I have a problem with this comment. I have never voluntarily written an essay revolving around a single comment before, so I have a significant problem with this comment.

First things first, before the pitchforks are raised against some “deluded, arrogant high school dropout”:

  1. I have an immense amount of respect for the work Elon Musk has done, paving a better and more sustainable future with his work in electric vehicles, renewable energy, space exploration, cyborg technology, and yes: artificial intelligence. This post is not an attack on Elon Musk.
  2. As with quite literally anything, widespread adoption of artificial intelligence comes with its share of problems, which I will discuss later. Many of these concerns have legitimate cause for outcry, but the ones you might be thinking of are probably not the most important ones.

And the reason for my post is simple.

We need to stop talking about the “evil killer robots”.

Perusing the numerous comment sections under the news reports of Musk’s interview, I was quite dismayed to see that the general public reaction largely reflected concerns that AI is going to become sentient, that AI has already learned to program and is building clones of itself, that robots are going to enslave and kill us, along with more echoes of Musk’s concern that AI will be the downfall of civilization as we know it.

The current state of the art in AI is so far removed from the likes of Hollywood and Asimov that these concerns are misplaced, and even detrimental to progress.

Allow me 5 minutes to convince you of this.

As Stanford University professor and former Baidu Research Chief Scientist Andrew Ng puts it,

“Fearing a rise of killer robots is like worrying about overpopulation on Mars.”
Photo Credit: Renewable Resources Coalition

Is it a legitimate concern? Maybe. Perhaps one day the Martian population will grow exponentially without halt. Perhaps resources will dwindle to a minimum and the existing infrastructure simply won’t be able to support so many people. All of these scenarios are definitely possible, and maybe we need a committee and some funding to find ways to circumvent the issue.

But we haven’t gotten to Mars (yet).

Likewise, we have not yet built intelligent agents capable of true reasoning, let alone consciousness. It is an equally massive slippery slope to suggest that our current machine learning models will suddenly, and without our knowledge, transcend humans, turn evil, and then slaughter us all.

To illustrate my point, here’s a one-minute explanation of artificial neural networks (ANNs), now commonly known as deep learning, the biggest driving force behind the rapid advancement of artificial intelligence today. It’s the stuff that builds the backbone of Siri, Alexa, Google Translate, face detection, Google DeepMind’s recent Go champion AlphaGo, how computers learn to paint like Van Gogh, and how self-driving cars tell apart pedestrians from landmarks from cyclists from vehicles. Nearly everything (this is not an overstatement) significant in the latest news about artificial intelligence today likely has some deep learning component in it.

If you’re reading this, you’ve almost certainly encountered deep learning in the past 24 hours — Google, Apple, Amazon, Facebook, Twitter, Snapchat all employ deep learning systems in some way or another.

The operation behind ANNs is simple: you provide an input to the system, and it returns an output. The input is something like an audio clip of your voice or a picture of an iconic French building, and your (expected) output is “I like turtles,” displayed as text on your phone screen, or a text label reading “Eiffel Tower.”

It is important to understand that the system is a one-trick pony. Give the image classification system audio and you will get back strange, strange results.

How would you teach a young child to recognize an elephant from an apple? You give the child lots of different pictures of elephants, and lots of different pictures of apples. Eventually, he or she can tell one from the other.

Similarly, if you provide enough training data (for example, human-labeled photographs of Tesla Model 3s and Boston cream pies), the system will learn to distinguish “cars” from “desserts”. Internally, the system computes a numerical score for each label using parameters called weights, and the result of training is that when the system is given a picture of the Sears Tower, the “building” label receives a large score and the “giraffe” label a small one.

If the scores and the expected labels do not line up (just as a baby would not know an elephant from an apple without having been taught to), the system uses some fancy mathematics to correct itself a little with each training example. The theory is that with lots of comprehensive data, the system will eventually correct itself to the point of achieving high accuracy on a test set: a data set similar in content to the training set but which was not used to train the system, thereby demonstrating that the system can generalize to new, unseen inputs as well (and when it doesn’t generalize, it can be trained on those new inputs to further improve accuracy).
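To make that concrete, here is a minimal sketch of the train-then-test loop described above, using scikit-learn and its small handwritten-digits dataset (the library and dataset are my choices, purely for illustration). A small neural network is shown labeled example images, adjusts its weights to reduce its mistakes, and is then scored on a held-out test set it never saw during training:

```python
# Minimal train/test sketch with scikit-learn (an illustrative choice, not the only way).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale images of handwritten digits, each paired with its human-provided label (0-9).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small ANN: its internal weights start out essentially random.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)

# The "fancy mathematics" (backpropagation) nudges the weights after the training examples.
model.fit(X_train, y_train)

# High accuracy here means the system generalizes to images it never saw during training.
print("accuracy on unseen test images:", model.score(X_test, y_test))
```

Notice that nothing in this loop involves the system understanding what a “seven” is; it only adjusts numbers until its outputs match the labels it was given.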

The question is: If we give the system millions of lines of code written by the world’s best software engineers, and lots of labels of what each specific program is supposed to do, couldn’t an ANN learn to write programs too? Couldn’t AI learn to essentially clone itself?

The problem is that artificial neural networks “learn” only to recognize; there is almost no reasoning involved. Because of the mathematical limitations of what a deep learning system can do, tasks that require some form of creative thinking, such as programming or breaking cryptography, are very difficult. A primary limitation of deep learning is that the system ends up trained to imitate: it appears to understand, but this understanding is hollow, because the system is simply adjusting itself to maximize some human-defined metric of accuracy over the data it has been given. Instead of learning “why” or “how”, the system learns “what”, which by itself is capable of many tasks once thought to be limited to humans, but is still drastically far from what a computer would need to attain sentience.

As xkcd author Randall Munroe eloquently puts it, the “pile” is “stirred” “until [the answers] look right.”

Photo Credit: xkcd

It would not be inaccurate to say that the best artificial intelligence we have today is just glorified pattern recognition.

A caveat of being a pattern recognition system is that the system is limited to some variation of its training data. DeepCoder, a joint collaboration between Microsoft and Cambridge University, demonstrated the ability to solve simple programming problems, but it’s important to note that it does so by drawing on solutions to problems from its training set and then searching through a list of possible programs until it finds one that works: a brute-force method essentially equivalent to a human comparing the problem at hand to other problems he or she has seen before, and then literally writing every single program he or she can possibly think of until one happens to solve it.
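For a sense of what that brute-force search looks like, here is a toy sketch (my own example, not DeepCoder’s actual code or its instruction set): enumerate every short sequence of operations from a tiny vocabulary until one reproduces the given input/output examples.

```python
# Toy brute-force program search: try sequences of operations until one fits the examples.
from itertools import product

OPS = {
    "double":  lambda xs: [x * 2 for x in xs],
    "sort":    lambda xs: sorted(xs),
    "reverse": lambda xs: list(reversed(xs)),
    "drop1":   lambda xs: xs[1:],
}

def run(names, xs):
    """Apply a sequence of operations, by name, to a list."""
    for name in names:
        xs = OPS[name](xs)
    return xs

def search(examples, max_len=3):
    """Return the first sequence of operations (up to max_len) that fits every example."""
    for length in range(1, max_len + 1):
        for names in product(OPS, repeat=length):
            if all(run(names, inp) == out for inp, out in examples):
                return names
    return None

# Example: find a program mapping [3, 1, 2] -> [2, 4, 6]
print(search([([3, 1, 2], [2, 4, 6])]))   # -> ('double', 'sort')
```

The “intelligence” in DeepCoder is in predicting which operations are likely to be useful so the search finishes faster; the search itself is exhaustive, not creative.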

Given that intelligent agents are limited by their pattern-recognition nature, one of the only ways AI could go rogue in the near future is if a person or group decides to train robots to behave and act with malice. Suppose someone does: they will still ultimately be constrained by physical resources and by access to the hardware needed to build such robots, and that problem exists regardless of the existence and progress of AI.

With the current state and direction of artificial intelligence, it is highly unlikely we will be seeing evil killer “robots going down the street killing people” in the near future, at least not until new algorithms are developed that let computers learn to reason like humans, or until neuroscientists figure out how the human brain works (recall that ANNs are a one-trick pony, while humans can learn to interpret images, interpret sound, and reason, all with the same brain).

But at present, worrying about evil killer robots is indeed as irrational as worrying about overpopulation on Mars.

And there are two major problems with worrying about overpopulating Mars:

  1. We are using it as a reason to avoid Martian missions. While the fruits of going to Mars are still unclear, AI has already proven, and continues to prove, its worth on a day-to-day basis:
  • Email spam filters have been using machine learning methods to clean your inbox for over a decade.
  • Your GPS uses some form of search technique to plan routes, a development out of AI research (a toy sketch follows this list).
  • Your news feed employs some form of unsupervised learning to cluster similar articles, and a recommender system to deliver relevant content.
  • Every major personal digital assistant and chatbot: Alexa, Google Assistant, Siri, Cortana…
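Here is roughly what that route-planning search looks like in miniature (a generic textbook example, not any navigation product’s actual code): Dijkstra’s algorithm finds the shortest path through a road network modeled as a weighted graph.

```python
# Toy route planner: Dijkstra's algorithm over a tiny road network.
import heapq

def shortest_path(graph, start, goal):
    """graph maps node -> [(neighbor, distance), ...]. Returns (total distance, path)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {
    "home":    [("highway", 5), ("main_st", 2)],
    "main_st": [("office", 6)],
    "highway": [("office", 1)],
}
print(shortest_path(roads, "home", "office"))   # (6, ['home', 'highway', 'office'])
```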

More recent (and possibly more important) applications have been making headlines as well.

The list goes on, but let’s get back to the analogy, because point 2 is the important one.

2. Worrying about overpopulation on Mars ignores the very legitimate problem of a recyclable water supply on Mars. And AI brings with it cause for worry — not Terminator, but jobs.

A White House case study on Artificial Intelligence, Automation, and the Economy states:

“2.2 to 3.1 million existing part- and full-time U.S. jobs may be threatened or substantially altered by AV technology.”

Given that autonomous vehicle technology is set to be introduced to the California market beginning in November, the impact of displaced jobs in the transportation industry alone is significant, and far more immediate (just under 150 days from now!) than any possible concern about malicious robots.

But transportation is hardly the only industry to be affected.

Photo Credit: McKinsey & Company

The potential to automate nearly half of all American jobs is real and likely to be realized; in fact, US manufacturing output is at record highs despite employing nearly half the workers it used to. A McKinsey & Company report suggests that close to half of all time spent on work in US occupations can be automated, and that list includes cognitive tasks that once required college degrees.

This is a far more dire and urgent problem to solve than the hypothetical development of “evil killer robots”, and every moment we spend worrying about irrational fears is time and investment diverted from developing a solution for the potentially 80 million American jobs that could be lost or drastically altered through automation.

There are other issues that plague AI as well, such as the prejudices a system can inherit from biased training data, producing racist or sexist algorithms, and the security concerns of compromising a learning system with malicious data or subjecting it to adversarial attacks.

An image of a panda overlaid with a carefully computed, nearly invisible perturbation deceives the classifier into labeling it a gibbon. Photo Credit: OpenAI

Tricking a robot into believing a panda is a gibbon might be cute, but it’s not so cute when your car is tricked into running over pedestrians.
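For the curious, the attack behind the panda/gibbon example is surprisingly simple. Here is a rough sketch of the fast gradient sign method (my own approximation, not OpenAI’s code; the pretrained model, the random stand-in image, and the epsilon value are all arbitrary choices): every pixel is nudged slightly in the direction that most increases the classifier’s error, producing an image that looks unchanged to a human but can fool the model.

```python
# Sketch of the fast gradient sign method (FGSM) behind adversarial examples.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # any pretrained classifier works

def fgsm(image, true_label, epsilon=0.007):
    """Return a copy of `image` perturbed just enough to raise the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    return (image + epsilon * image.grad.sign()).detach()

# Stand-in tensor for a real, normalized 224x224 panda photo ("giant panda" is ImageNet class 388).
image = torch.rand(1, 3, 224, 224)
adversarial = fgsm(image, torch.tensor([388]))
print("predicted class after perturbation:", model(adversarial).argmax().item())
```

The unsettling part is how small the perturbation can be: to a human eye, the adversarial image is still the same panda.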

In order to ensure that artificial intelligence is developed for the benefit of humanity as a whole, we need to be open-minded toward, and wary of, concerns: valid concerns grounded in reason, logic, and evidence. Concerns about “evil killer robots” do not fall into this category, and, more crucially, they take attention away from other, more pressing issues.

For the sake of bringing this conversation in the right direction, can we please have it without mentioning “evil killer robots?” (At least until we have engineered computers to develop the ability to reason?)

Let’s leave the evil killer robots to the movie theater, at least for now.