Are you afraid of AI becoming evil?

Brett Schilke
Published in SingularityU
3 min read · Mar 31, 2017

A good part of my work can be summed up as talking about fascinating things with curious people. Q&A time after I give a talk is probably my favorite part of what I do. I’ve been fact-checked live on stage (I was indeed wrong), asked complex questions about ethics, and often left with questions that I just can’t stop thinking about.

The most common question I receive also happens to be my favorite. It was most recently asked by a 14-year-old wearing a sick bowtie, after I spoke at the Three Dot Dash summit in New York City:

“Are you afraid of AI taking over and becoming evil?”

My response to this question is a brief thought exercise:

We are talking about something that is in its infancy. Getting it started was easy, but most of its functions are still rife with inconsistency, full of mistakes and errors. Yet we see it learn every day, developing new connections and frameworks in ways we can’t really understand. We know that soon it will stand on its own two feet, learn much more rapidly, know more than we do, and be capable of more than we are; it will seek opportunities, find solutions, tackle challenges, and achieve things in the world in ways we haven’t yet imagined.

And maybe, just maybe, it will turn out to be a jerk.

Now tell me: did I just describe Artificial Intelligence or a human child?

This is a decidedly humanistic view of the development of AI. If we approach a rapidly developing technology in much the same way as we approach our own reproduction, perhaps we can remove fear from the equation.

Humans procreate constantly (pumping out about 228,000 new humans daily, in fact), and we hope beyond hope that our children will be smarter and more powerful than us in every way. As we age, we even submit to their leadership precisely because they have greater knowledge, power, and ability. Yet we don’t mire ourselves in fear that our kids will turn bad.

Of course, our children still sometimes end up as criminals, psychopaths, swindlers, dictators, and lawyers. But we have built educational, legal, moral, and ethical systems that guide their upbringing, and have paired those with judicial and rehabilitation systems as a backup plan for the ones who fall through the cracks.

Tay, Microsoft’s 2016 Twitter chatbot that users taught to spew abuse within a day of its launch, did not make the list.

There’s a principle in computer engineering that states that when a component of a system changes by an order of magnitude, it often necessitates a redesign of the fundamental building blocks of that system.
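A toy benchmark can sketch that principle (the code and numbers here are illustrative, not from the article): a linear scan that is perfectly fine for ten items becomes a bottleneck at a hundred thousand, and the fix is not a faster scan but a different building block entirely, in this case a hash-based set.

```python
import timeit

def contains_list(items, target):
    # The "small scale" design: a linear O(n) scan through a list.
    return target in items

def contains_set(items, target):
    # The redesigned building block: an O(1)-average hash lookup in a set.
    return target in items

n = 100_000  # input grown by several orders of magnitude
data_list = list(range(n))
data_set = set(data_list)

# Time 100 worst-case lookups (target at the end) for each design.
slow = timeit.timeit(lambda: contains_list(data_list, n - 1), number=100)
fast = timeit.timeit(lambda: contains_set(data_set, n - 1), number=100)
print(f"list scan: {slow:.4f}s  set lookup: {fast:.6f}s")
```

Both functions give the same answer; only the underlying structure changed. At ten items the difference is invisible, but at this scale the scan is slower by orders of magnitude, which is exactly when the redesign becomes necessary.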

So, back to our analogy of Artificial Intelligence as a child, the challenge we have before us is the very real likelihood that our “child” in this case will not just be incrementally better, faster, and stronger, but rather an order of magnitude more capable. It follows, then, that a redesign of fundamental social systems may be in order to prepare for that future.

People with more letters behind their names than I have are speaking out about the importance of carefully guiding the beneficial development of Artificial Intelligence, and Elon Musk is even launching a new company, Neuralink, that will focus on brain-computer interfaces to help humans keep up with the pace of technological change. As the topic becomes more mainstream, the complexities continue to deepen.

So just as we dream for our children to surpass us, we can dream for our technology to do the same. Let’s raise AI right, push it in the best directions, and admit that, yes, it may outpace us and might even run amok.

To get to the real issues, we need to shift the conversation from one of fear to one of preparation, asking instead how to adapt the systems we rely on to ensure that this child grows up to contribute positively to society.

Brett Schilke

Strategist and storyteller for t̶h̶e̶ ̶f̶u̶t̶u̶r̶e̶ today.