How risky is AI? Why experts disagree.

Kirsten Horton
Published in The Startup
9 min read · Dec 27, 2017

“[When superintelligent AI emerges], there will be carnivores and there will be herbivores. We’ll be the plants.” — J Storrs Hall, Beyond AI

If you’ve been on Twitter for a while, you may have seen tech entrepreneur Elon Musk tweeting a warning about AI. It probably looked something like this:

And, if you’re anything like me, you thought, “Yeah, I am a little worried about Artificial Intelligence. People are saying it might increase unemployment or discrimination. But I am A LOT more concerned about North Korea.”

The suggestion that artificial intelligence is going to enslave or exterminate us sounds like a sci-fi movie. But that’s what Elon Musk and other intelligent people, like Oxford professor Nick Bostrom and Microsoft co-founder Bill Gates, have been suggesting. Very smart people say that artificial intelligence could become smarter than humanity and destroy us all.

And three months ago, I was sure they were wildly wrong.

But after reading and reading and listening and reading, I’ve changed my mind. Worrying that AI will kill us all isn’t ridiculous or absurd. It’s smart.

What we don’t know could hurt us.

Because AI is such a fast-growing field, and because people are so bad at predicting the future, there’s a lot of uncertainty around the future of AI.

Every prediction about the future of AI — from “AI is safe” to “AI will be our downfall” — depends on a few key questions that no one knows the answers to. And because the experts on each of these questions disagree, it makes sense that smart people come to very different conclusions. If two intelligent people disagree about the risks from AI, it’s probably because they disagree on a key prediction about the future.

Three of the main disagreements are about the timeframe, who will be using the AI, and how difficult it will be to make AI understand our values.

How long will it take for human-level AI to become superintelligent AI?

“They have zero intelligence. We have no idea how to implement intelligence in any machine — not the kind of intelligence you and I have, or even a dog has. Anything else is speculation.” — Luciano Floridi

I was recently at a talk about the use of artificial intelligence in industry. The presenter pointed out that, after decades of research, AI hasn’t even reached human-level intelligence. We have a long way to go before worrying about an artificial superintelligence.

She was assuming we can predict how long it will take to improve AI in the future based on how long improvements have taken in the past. But Oxford professor Nick Bostrom says that could be wrong. In his book Superintelligence, Bostrom suggests that a human-level AI could start improving itself. We’ve already seen artificial intelligence become the best in the world at Go; who says an AI can’t be the best in the world at computer programming?

If artificial intelligence can improve itself, it might not take very long at all before we have a superintelligent AI. After all, we’ve been surprised before by how quickly new technologies have emerged.

The truth is, we don’t know how long it’s going to be before we reach human-level AI, let alone superintelligent AI. Field-leading experts have completely different opinions about when AI will reach human level.

The light grey lines represent each expert’s predictions about when we’ll reach Human Level Machine Intelligence. Opinions vary widely.

So what should we do if we have no idea when machines will reach human intelligence, let alone a world-dominating superintelligence? Some people say machines will never even reach human intelligence, so AI safety is a non-issue. Others say we should wait and see how AI research progresses; when we get close to human-level AI, then we should act. Finally, some suggest we take action now, because there will never be a clear warning that superintelligence is around the corner.

Who is going to be using the AI?

Boris Grishenko: Bond villain, computer programmer, and voted “Most Likely to Misuse AI” in high school.

Another important factor in AI safety is who’s going to be using the AI. Will superintelligent AI be restricted to the United Nations? Will corporations and private citizens be able to use it? Could terrorists or hackers use artificial intelligence to help them with their plans?

Artificial intelligence is a tool that can be used for a variety of purposes. It’s not good or bad on its own, but it can be used to achieve your goals, whatever they are. AI could be used to improve government transparency and find a cure for cancer, or to find weak spots in the FBI’s security systems. It all depends on who’s using it.

People who disagree about AI risks usually agree that AI will be used both to help and to harm. Their argument is about whether it will do more harm than good.

Facebook CEO Mark Zuckerberg has made waves for saying that artificial intelligence should not be slowed down. He acknowledges that AI can be used for both good and evil, but he focuses on the good.

“Whenever I hear people saying AI is going to hurt people in the future I think: Yeah, technology can generally always be used for good and bad and you need to be careful about how you build it and you need to be careful about what you build and how it’s going to be used. But … [i]f you’re arguing against AI then you’re arguing against safer cars that aren’t going to have accidents and you’re arguing against being able to better diagnose people when they’re sick.”

Musk tweets, “Competition for AI superiority at national level most likely cause of WW3 [in my opinion].”

On the other hand, Elon Musk focuses on what AI could do in the hands of someone like Vladimir Putin.

Musk (rightly) assumes that AI could shift the global balance of power. He then goes on to suggest that an automated weapons system could decide a preemptive strike is necessary for a country’s safety. An AI could fire missiles (or nukes) and kill millions (or billions) without a single human involved in the decision.

We know AI will be used for good, evil, and downright shady purposes. So will AI be better at helping people or hurting people? Your answer likely determines whether you’re worried about artificial intelligence safety.

How hard will it be to give AI a “conscience”?

“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” — Stephen Hawking

Imagine you own a paperclip factory. You create an artificially intelligent machine and instruct it to make as many paperclips as possible.

After a few days, your paperclip machine sends you an email: “If you connect me to the Internet, I can learn to repair myself.”

You connect your AI to the Internet and your paperclip machine learns to repair itself and make new paperclip machines. Now it can make paperclips faster than metal gets trucked into the factory.

A week later, you learn your paperclip company now controls all the metal mines in the world and is starting new mining operations in every country to make more paperclips. News anchors predict massive shortages. You start to get worried.

A week after that, every person, plant, and animal has been turned into a paperclip. Your paperclip-maximizing machine has carried out your instructions — “Make as many paperclips as possible” — to the letter.

We like to pretend that AI is a being with motives and intentions, but the truth is it’s just following our instructions. AI doesn’t have a conscience. So if you give an artificial intelligence an instruction, it will follow it to the letter — even if it means doing some despicable things in the process.
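To make that concrete, here’s a minimal, purely illustrative sketch in Python. Every name and number in it is invented for the example; the point is just that an objective which only counts paperclips says nothing about everything else we care about.

```python
# Toy illustration: an objective that only counts paperclips.
# All resources and conversion rates are made up for the example.

resources = {
    "scrap metal": 1_000,    # what the owner intended to be used
    "mining equipment": 50,  # useful to the owner for other reasons
    "the biosphere": 10**9,  # definitely not meant to become paperclips
}

def paperclips_from(resource, amount):
    """Pretend any resource can be converted into paperclips somehow."""
    return amount  # one unit of anything -> one paperclip, for simplicity

def maximize_paperclips(resources):
    """The literal instruction: make as many paperclips as possible."""
    total = 0
    for name, amount in resources.items():
        # Nothing in the objective says "only use the scrap metal",
        # so the optimizer happily converts everything it can reach.
        total += paperclips_from(name, amount)
        resources[name] = 0
    return total

print(maximize_paperclips(resources))  # a huge number, and a terrible outcome
```

The problem isn’t malice. The objective simply never mentioned anything the owner cares about besides paperclips.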

In I, Robot, an artificial intelligence tries to protect people by taking away their freedom.

In the sci-fi classic I, Robot, robot creators try to get around this problem with three laws:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But — spoiler alert — these laws don’t stop the robots from going rogue.

Humans don’t even know their own values. They certainly can’t agree on one value system to govern a whole society. So programming a conscience into an artificial intelligence is not an easy task.

University of California, Berkeley professor Dr. Stuart Russell calls this the “Value Alignment Problem.” Machines should follow our intentions, not our explicit instructions, but we’re not sure how to make that happen.
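One hedged way to picture the gap (a toy example, not Russell’s actual formulation) is to score the same outcome twice: once under the instruction we wrote down, and once under the values we actually hold. The outcome and the penalty below are invented for illustration.

```python
# Toy illustration of the value alignment gap.
# "Specified" is the instruction we gave; "intended" is what we actually meant.

outcome = {"paperclips": 10**12, "humans_harmed": True}

def specified_reward(outcome):
    # What we literally asked for: more paperclips is always better.
    return outcome["paperclips"]

def intended_reward(outcome):
    # What we meant: paperclips are nice, but not at any cost.
    if outcome["humans_harmed"]:
        return float("-inf")  # no number of paperclips makes up for this
    return outcome["paperclips"]

print(specified_reward(outcome))  # astronomically high -> the AI is "succeeding"
print(intended_reward(outcome))   # -inf -> by our real values, a catastrophe
```

An AI optimizing the first function can score perfectly while failing completely by the second. Value alignment is the problem of getting machines to optimize something much closer to the second, even though we can’t fully write it down.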

If you’re optimistic about value alignment, you’re probably optimistic about the future of artificial intelligence in general. Maybe you think we’ll have it figured out before we reach human-level artificial intelligence. But value alignment is very difficult and it’s hard to tell how close we are to a good solution.

So what’s at stake?

If superintelligence is impossible or will inevitably be good, then Mark Zuckerberg is right. Arguing for AI safety is “arguing against being able to better diagnose people when they’re sick.” All this talk about regulations and ethics and safety is just slowing down cancer diagnoses and safer cars for no reason.

At worst, people start paying attention to Elon Musk’s tweets and Nick Bostrom’s book and panic. We become a generation of Luddites. Governments around the world ban artificial intelligence, and we miss out on safer roads, better medical care, improved scientific research, and increased economic productivity that could lift millions out of poverty.

But if superintelligence is possible, and if there is a risk it will hurt people, the stakes are very different. Instead of risking missing out on human progress, we’re risking the loss of all humanity. That risk might be very small, but if you believe AI can achieve super-human intelligence and act in a harmful way, then you believe the risk exists.

Experts disagree about AI timelines, users, and values, so it’s hard to estimate the risk of humanity’s collapse due to AI.

Think Human Level Machine Intelligence is right around the corner, with superintelligence hot on its heels? Then you might say there’s a 1 in 10 chance of AI causing human extinction in your lifetime.

Think Human Level Machine Intelligence is still decades away, and superintelligence centuries on? Then you might think there’s almost no chance of an AI causing human extinction in your lifetime — maybe a 1 in 100,000 chance.

Both of those predictions make sense and could be made by an intelligent person. But because the predictions are so different, it’s hard to know for sure what the real risks are. That’s why people calling for AI safety want to make a plan now for dealing with the value alignment problem and keeping AI out of dangerous hands, because we don’t know when or how superintelligent AI will develop.

AI can, and probably will, transform our world for the better. We can’t stop AI. And even if we could, most of us wouldn’t want to.

But we can, and should, make a plan for the worst-case scenario. It’s the intelligent thing to do.
