What may go wrong when machines become intelligent

Tyler Neylon
6 min read · Jun 12, 2016


Read almost any book about super-human intelligence, and you’ll hear a horror story of grievous harm to all of humanity. There is a great deal of fear around this topic, a good chunk of excitement, and not as much detailed thinking on exactly how things may go wrong and what we can do about it. That’s what this post is about.

I don’t completely agree with the Terminator-like stories foretelling havoc wrought by machines beyond our ken. However, this is where the fear begins, so I’ll begin there as well. Then I’ll back up a bit and explain a path that I think is more likely.

Mortem ex machina

Most of the worry centers around two themes: an intelligent system’s ability to do harm, and the inhumanity of such a system. Terminator-style stories illustrate machines reaching decisions that most humans would avoid, while some storylines depict a machine rebellion in a somewhat more sympathetic light.

Apocalyptic fears expressed in fiction. Inhumanity concern, left: a suggestion of what happens when a machine becomes self-aware, from the Terminator series. Ability concern, right: a suggestion of how humans may oppress intelligent machines to the point of revolt, from an animated short set in The Matrix universe.

The ability concern is that a powerful enough system may reasonably choose to harm humanity. In this case, we can imagine a human making the same decisions as a group of intelligent non-humans. The reason it feels threatening is that the non-humans have some kind of power over us, such as weapons that can do more damage, or the ability to disarm us. I see the ability concern as something we’re used to: throughout history, some groups of humans have had more power than others.

The inhumanity concern, which is more often expressed in the arena of machine intelligence, is the fear that another intelligence would think about life so differently that it would make a harmful decision that most humans would not.

To see why I’m distancing machine behavior from human behavior with the inhumanity concern, let’s put ourselves in the shoes of a super-human intelligence that feels threatened. Suppose we realize that we were actually created by another species that lives on a distant planet. These other people, let’s call them gumans, are significantly less intelligent than we are, but have now returned to Earth. Gumans occasionally make bad decisions, and they can be somewhat harmful.

Would you naturally decide that we should commit genocide? Since the intelligence gap is obvious, most humans wouldn’t feel seriously threatened.

I’m not arguing that this fear is unfounded. Rather, I’m trying to shed more light on it so that we can reason about it.

What are the major differences that might occur between a human mode of thinking and a machine’s? A few candidates occur to me:

  • A machine may be less mature, reaching impulsive decisions or acting prematurely.
  • A machine may have no empathy, and thus not value human life.
  • A machine may be ethically skewed, deciding to prioritize goals of self-interest far above any possible goals to avoid harm to others.

If a machine avoided all three of these pitfalls, it would have an ethical system similar to ours, it would care about humanity, and it would not act impulsively. Such conditions would go a long way toward mitigating my own fears along the lines of the inhumanity concern.

Overpopulation on Mars

Let’s back up for a moment, and take a look at how close we are to creating machines that we might recognize as peers.

We’re not close.

Do people feel like they’re talking to a person when they talk to Siri? Not really. Systems like these showcase voice recognition and the ability to perform an extremely small number of simple, one-off tasks. Humanity is not a collection of disparate simple tasks. Humanity is full of passion, creativity, emotion, and ingenuity. If you take the time to look at some of the most impressive algorithms today, you’ll see that they’re still devoid of the synthesis of many components that go into making people what they are.

In the words of Andrew Ng:

I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.

Predictions, fears, and hopes all tend toward extremes. Wildly optimistic predictions see the singularity around the corner, transforming the very fabric of life as we know it with explosive technological advances. Wildly pessimistic predictions see all of humanity dead or enslaved as a result.

I predict a nice but boring future.

Consider the most advanced learning machine we currently know of — the human brain. It takes about a year of interaction for this machine to learn how to speak and understand a few words. After five years of training, such a machine is finally ready to begin kindergarten. Imagine how you’d feel if you had to raise a computer for that long before it could begin a serious education.

Even if we did achieve human-level intelligence soon, there is something uninspiring about human-level intelligence on its own. And human-level intelligence is many, many years away.

If it takes us so many decades to achieve human-level intelligence, what right do we have to assume we can surpass that mark so quickly?

So, yes, perhaps the concerns about harm by machines are real. But it seems too early for us to understand the full logistical details of what exactly may go wrong, and how we can avoid it.

An understated concern

I’d like to add my own concern to those mentioned above. Historically, humanity has not done well when one group has felt threatened by another. Things get even worse when we view the other group as inhuman, or as less than human.

These are the ingredients of our worst moments. And these are also the ingredients we have before us when we fear machine intelligence.

If a machine passes the Turing test, is it not a person? I see no reason to treat it otherwise. Does it not deserve the basic rights and respect we would give to any individual?

My concern is that thinking machines that pass the Turing test — a new kind of people—will suffer the consequences of fear, ignorance, and selfishness felt by too many minorities and mistreated groups in the past. I hope that any set of ethics surrounding the advancement of machine intelligence works to alleviate this danger. We should treat people like people.

What we can do

This post has outlined the major concerns about the development of machine intelligence: the inhumanity concern, the ability concern, and this last one, which I’ll call the personhood concern. I’ve also argued that these fears may not crystallize for many decades.

At a high level, we can create a code of research conduct analogous to the Declaration of Helsinki, a codified set of ethical principles governing human experimentation. The global acceptance of such principles would help ensure that the advancement of intelligence is pursued responsibly. Even if some researchers chose to ignore such a code, their work would be hindered by an ethically minded community.

More specifically, we can be conscious of the dangers outlined here. We can take measures to avoid allowing an intelligence to have too much power. We can work to instill empathy, ethics, and maturity in our thinking machines. And we can be conscious of how our own treatment of others models and shapes all of our futures.
