It’s Time to Intelligently Discuss Artificial Intelligence

I am an AI researcher and I’m not scared. Here’s why.

Oren Etzioni
Published in Backchannel
Dec 9, 2014


Some people have long regarded artificial intelligence (AI) as a threat. But lately that view has been gaining currency in some unexpected quarters.

Tesla CEO Elon Musk worries it is “potentially more dangerous than nukes.” Physicist Stephen Hawking warns, “AI could be a big danger in the not-too-distant future.” Fear-mongering about AI has also hit the box office in recent films such as Her and Transcendence.

So as an active researcher in the field for over 20 years, and now the CEO of the Allen Institute for Artificial Intelligence, why am I not afraid?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will set its own goals, exercise its own will, and use its faster processing and vast databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those are two entirely different things.

To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations. A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI systems are tools for us to perform tasks too difficult or expensive to do on our own, such as analyzing large data sets or keeping up to date on medical research. Like calculators, AI tools require human input and human direction.

Now, autonomous computer programs exist, and some are scary — such as viruses or cyber-weapons. But they are not intelligent. And most intelligent software is highly specialized; a program that can beat humans at a narrow task, such as playing Jeopardy, has zero autonomy. IBM’s Watson is not champing at the bit to take on Wheel of Fortune next. Moreover, AI software is not conscious. As the philosopher John Searle put it, “Watson doesn’t know it won Jeopardy!”

Anti-AI sentiment is often couched in hypothetical terms, as in Hawking’s recent comment that “The development of full artificial intelligence could spell the end of the human race.” The problem with hypothetical statements is that they ignore reality—the emergence of “full artificial intelligence” over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us.

So where does this confusion between autonomy and intelligence come from? From our fears of becoming irrelevant in the world. If AI (and its cousin, automation) takes over our jobs, then what meaning (to say nothing of income) will we have as a species? Since Mary Shelley’s Frankenstein, we have been afraid of mechanical men, and according to Isaac Asimov’s Robot novels, we will probably grow even more afraid as mechanical men come to resemble us more closely, a phenomenon he called the Frankenstein Complex.

At the rise of every technological innovation, people have been scared. From the weavers throwing their shoes into the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.

The mechanical loom and the calculator have shown us that technology is both disruptive and filled with opportunities. But it would be hard to find a decent argument that we would have been better off without these inventions. Better to make sure our new technology is focused on the good it can do than to fear it for how it may be misused. And AI has far more potential to enhance our abilities than to make us redundant.

For example, researchers are working hard to develop AI as a powerful enabling technology for scientists, doctors and other knowledge workers. According to the Journal of the Association for Information Science and Technology, the global scientific output doubles every nine years. A mere human can no longer keep up, and search engines such as Google Scholar simply point us at a vast ocean of advances that no human has the time or mental resources to wade through. We need intelligent software that can answer questions such as, “What are the side effects of drug X in middle-aged women?” or at least identify a small number of relevant papers in response. We need software that can track new scientific publications and flag important ones, not based on keywords, but based on some level of understanding of the key information in the papers. That’s augmented expertise, and it’s a positive goal that other AI researchers and I are aiming for.

We’re at a very early stage in AI research. Our current software programs cannot even read elementary school textbooks or pass a fourth-grade science test. Our AI efforts today lack basic common-sense knowledge (gravity pulls objects toward the earth) and cannot resolve the ambiguity in seemingly simple sentences such as “I threw a ball at the window and it broke” (did the ball break, or the window?).

We have challenging technical work to do and, frankly, both the fear-mongering and the grandstanding are missing the point: Much of what is easy for an average human child is extremely difficult for AI software — and will be for many years to come. We humans are a lot smarter than we look!

Of course, in this world of viruses, cyber-crime and cyber-weapons, I welcome an open and vigorous debate about what level of autonomy to grant computers, but that debate is not about AI research. If unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity—and even save lives. Allowing fear to guide us is not intelligent.




Oren Etzioni
CEO, Allen Institute for AI (AI2); Professor, UW CSE; Venture Partner, Madrona.