The Real Dangers of Artificial Intelligence

Talin
Published in Machine Words
Feb 22, 2016

There have been a number of stories in the news about the “threat” of artificial intelligence, and much of that coverage is nonsense, even some of it coming from people as well-respected as Stephen Hawking and Elon Musk.

I think there are real dangers that we should be concerned about. But they aren’t the ones you might imagine from watching Hollywood thrillers.

First, I want to debunk the idea that AIs are an existential threat to humanity.

I don’t think that we are likely to ever build a Skynet or other AI that will suddenly decide that humans are no longer needed. For one thing, such a system would be very hard to build. The kinds of AI that we are building today — or that we are likely to build in the foreseeable future — lack a number of key capabilities, which would make it very difficult for these systems to operate without constant assistance from humans.

The first of these is “general intelligence”. Right now all of the so-called “artificial intelligence” technologies are extremely competent within a very restricted domain. Deep learning systems can look at a billion photos and pick out which ones are wedding photos, which is amazing — but those systems can’t play poker, pick stocks, or figure out what a person is going to say next. Self-driving cars are very good at navigating streets full of obstacles, but can’t do anything else. We don’t have a system that can handle the complexity of acting autonomously in arbitrary situations in the real world the way a human can.

Moreover, we’ve made almost zero progress on the problem of introspection — how to make a system that can model its own state. While a self-driving car does have a model of how a car works — this signal activates the throttle, that signal activates the brakes, this one says how much gas is remaining — that model is crafted by humans, and from the AI’s perspective it is a piece of unalterable “received knowledge”. The self-driving car has no understanding of how its mind is put together, and it cannot improve its own design in any way.

The second characteristic that is lacking is what I would call “general competence”. In order for an autonomous agent to be effective in the world, it needs more than simply the ability to analyze problems — it needs to be able to make accurate predictions of the consequences of its own actions. It needs to be able to collect and analyze sensory data, use that to construct a model of its environment, make changes to the model to see what effect those changes would have, decide which of those possible effects are most satisfying to its programmed goals, and then perform those actions in the real world. Moreover, to be generally competent, that model would have to incorporate knowledge of the rules of physics, chemistry, biology, psychology, economics, sociology, and much more.
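
To make that loop concrete, here is a deliberately tiny sketch in Python. Nothing in it comes from any real system; the world model, goal, and actions are toy assumptions, but the structure (sense, update a model, simulate each action, pick the outcome most satisfying to a programmed goal) is the one described above.

```python
class ToyWorldModel:
    """The agent's internal model of its environment: just a position on a number line."""

    def __init__(self):
        self.position = 0

    def update(self, observed_position):
        # Fold a new sensor reading into the model.
        self.position = observed_position

    def simulate(self, action):
        # Predict the consequence of an action without actually performing it.
        return self.position + action


def choose_action(model, goal_position, actions):
    # Score each candidate action by how well its predicted outcome
    # satisfies the programmed goal, and return the most satisfying one.
    def score(action):
        predicted = model.simulate(action)
        return -abs(goal_position - predicted)  # closer to the goal is better
    return max(actions, key=score)


model = ToyWorldModel()
model.update(observed_position=3)          # collect sensory data
best = choose_action(model, goal_position=10, actions=[-1, 0, +1])
print(best)                                # -> 1: the step that moves toward the goal
```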

The agent would also need a way to deal with the myriad possible futures that might result from the choices it has available — it can’t merely simulate one possible future. Humans can do this, but most of the time we don’t bother to contemplate more than a few possible futures, with a short time horizon and a restricted problem domain. That’s because it’s a hard problem, and we often encounter unintended consequences as a result.

Achieving “superhuman” or “god-like” intelligence — the kind needed to defeat all of humanity — would require the ability to predict the outcomes of many possible scenarios, over a long time and with broad scope. But this kind of long-term, detailed projection is beyond the capabilities of any conceivable computer that we might build. The world is simply too complex.
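
A quick back-of-the-envelope calculation shows why this kind of projection explodes. The numbers below are arbitrary assumptions chosen only to illustrate the growth, not estimates of any real system.

```python
# If an agent considers even a modest number of possible outcomes at each
# step, the number of distinct futures grows exponentially with the horizon.
branching = 10   # outcomes considered per step (arbitrary assumption)
horizon = 50     # steps into the future (arbitrary assumption)

futures = branching ** horizon
print(f"{futures:.2e} scenarios")   # 1.00e+50 -- far too many to enumerate
```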

Because AI systems have been so successful in restricted domains, it’s tempting to think that solving the general case is just around the corner, but it’s not — it’s decades if not centuries away, at least judging by our current rate of progress.

As an aside, I would also predict that what we’ll see is a gradual widening of AI’s domains of competence. At first the steps will be small and obvious: the self-driving car is extended to other kinds of vehicles; the chess player is extended to other turn-based board games (this has already been done), and then to games in general.

I’ll go even further and predict that we might get to what I call “the reductionist AI” as an intermediate stage. This is an agent that understands physics, chemistry, and other “reductionist” sciences, in which simple mathematical models can be created and used to manipulate physical reality. At the same time, the reductionist AI is hopelessly incompetent in fields like ecology, economics, or predicting human behavior, because those domains are computationally intractable. Such a machine might be able to operate autonomously in the world to a limited degree, but it’s still going to need help when it runs into problems that it can’t understand.

In other words, it’s still going to need humans around.

We humans will be able to witness this gradual increase in generality, and we’ll be able to anticipate potential problems long before they occur. There won’t be an instant when Colossus suddenly “wakes up” and decides to rewrite its own programming. (And even if it did, it would probably crash, since the new behavior would be completely untested.)

I also want to note that in no case do any of these systems have emotions. They are not greedy, or ambitious, or hungry — unless we specifically build those things into them. AIs as they currently exist are basically glorified search engines — what they mostly do is search through large problem spaces. Their only motivation is what we program into them.
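
As a rough illustration of that last point: a system’s “motivation” can be thought of as nothing more than an objective function handed to a search routine. Change the function and the “wants” change. The code below is a toy sketch of that idea, not any real AI system’s API.

```python
def search(candidates, objective):
    # A glorified search engine: scan a problem space and keep whatever the
    # supplied objective scores highest. No drives, no emotions, no agenda.
    return max(candidates, key=objective)


candidates = range(-100, 101)

# The only "motivation" is the objective we pass in:
print(search(candidates, objective=lambda x: -(x - 7) ** 2))    # prefers x near 7
print(search(candidates, objective=lambda x: -(x + 42) ** 2))   # now prefers x near -42
```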

So does that mean AIs are not dangerous at all? No, absolutely not — there is a real danger, but it’s not coming from the AIs. It’s coming from us.

AIs are formidable tools that can be used for oppression and injustice. Powerful and selfish interests will want to use AIs to further their own ends. Repressive governments would love to use AIs to build a cybernetic panopticon, where every citizen’s movement is tracked in detail and dissident thoughts are efficiently weeded out. And there are any number of financial wizards with weak moral compasses who would, if given the chance, strip-mine the wealth of the world, leaving the rest of us impoverished.

AIs, if misused by humans, could be a threat to social justice and civil society. In the worst case, they could be the downfall of civilization, creating a new feudalism where the AI-equipped haves completely dominate the have-nots.

And the AIs don’t need to be particularly god-like in order to do this.
