
“I don’t think there’s a need to panic, but…the people who say ‘Let’s not worry at all,’ I don’t agree with that.” — Bill Gates
“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast — it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. Ten years at most.” — Elon Musk
“The development of full artificial intelligence could spell the end of the human race…It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” — Stephen Hawking

We’ve Been Warned

Bill Gates, Elon Musk, Stephen Hawking — when those brilliant visionaries share an opinion on something, it’s probably best to listen. And before you dismiss these as the claims of just three men among many tech leaders, consider this: Many experts did not believe artificial intelligence would defeat the top human Go players for another 10 years, but it happened in 2016. It’s worth emphasizing: People who have devoted their entire lives to working on AI did not believe this could happen for at least another decade. So when some experts downplay the threat of AI or its pace of development, we should remember how badly expert predictions have already missed.

But enough with the doom and gloom; the sun will rise again tomorrow. The clock has yet to strike midnight, but it is ticking. What can we do to prepare? We need to set up preemptive standards, think tanks, regulations, and public-private partnerships.

Current State of Protection from AI

There have been multiple responses to the potential threat of advanced AI from academic and business leaders. Two notable groups keeping a close watch are the Partnership on AI and the Future of Life Institute. The latter has more than 3,000 prominent AI researchers and business leaders signed on to its Asilomar AI Principles, a pledge that highlights the ethical considerations needed to advance AI responsibly. Meanwhile, the Partnership on AI — which counts Google, Amazon, Apple, Facebook, Microsoft, and IBM among its founding partners — has centered its goals on thought leadership and engagement in the AI community. The work being done by groups like these is a good start, but they seem to lack any enforcement mechanism behind their promises.

Individual companies are also contributing. DeepMind — arguably the world leader in AI research — allowed Google to acquire it in 2014 under the condition that Google set up an independent ethics board to oversee its activities. (Three years on, however, little is known about this board.) In addition, DeepMind confirmed in July 2017 that it is working to add “imagination” to its AI, so that the system can plan around bad situations by simulating good and bad outcomes before acting. In the same way parents and teachers tell children to “think before you act,” such an AI would weigh the consequences of an action and decide against it if the imagined outcome looks harmful.
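To make that idea a bit more concrete, here is a minimal sketch of that kind of lookahead planning. It is purely illustrative and not DeepMind’s actual system; the world model, the actions, and the harm threshold are all invented for the example.

```python
# Toy illustration of "imagination" in planning: before acting, the agent
# rolls each candidate action forward through a simple model of the world
# and discards any action whose imagined outcome looks harmful.
# (Illustrative only -- not DeepMind's architecture; everything here is
# invented for the example.)

from typing import Dict, List, Optional

def imagine_outcome(state: Dict[str, float], action: str) -> Dict[str, float]:
    """A stand-in 'world model': predicts the next state for a given action."""
    next_state = dict(state)
    if action == "accelerate":
        next_state["speed"] += 10
        next_state["risk"] += 0.3
    elif action == "brake":
        next_state["speed"] = max(0.0, next_state["speed"] - 10)
        next_state["risk"] -= 0.2
    elif action == "maintain":
        next_state["risk"] += 0.05
    return next_state

def choose_action(state: Dict[str, float],
                  actions: List[str],
                  harm_threshold: float = 0.5) -> Optional[str]:
    """Pick the action with the best imagined outcome, vetoing harmful ones."""
    best_action, best_score = None, float("-inf")
    for action in actions:
        imagined = imagine_outcome(state, action)
        if imagined["risk"] >= harm_threshold:
            continue  # "think before you act": skip actions that look harmful
        score = imagined["speed"] - imagined["risk"] * 100  # toy utility
        if score > best_score:
            best_action, best_score = action, score
    return best_action

state = {"speed": 30.0, "risk": 0.4}
print(choose_action(state, ["accelerate", "maintain", "brake"]))  # -> "brake"
```

The point of the sketch is only the shape of the loop: imagine each outcome, score it, and veto anything whose imagined result crosses the harm threshold.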

Okay, so some steps have been taken in the right direction, but in all of these cases the lack of transparency is concerning. The deeper issue with the groups above is that a profit motive is always present, and it creates a poor incentive structure for AI development. Essentially, it’s an arms race: no company wants to be left behind by a rival, and understandably so.

Recognizing this, tech industry veterans founded OpenAI in 2015, a research organization backed by $1 billion in funding commitments. Its stated goal is to research AI without a commercial objective, in order to ensure that the technology develops in a way that benefits humanity. In short, the founding of OpenAI acknowledges that ethics have not kept pace with technological progress, and that ethics must sit at the core of artificial intelligence going forward if programs more harmful than Tay are to be stopped before they begin. But this is still not enough: private citizens control “only” billions of dollars, while governments control trillions of dollars and legal systems. As such, there needs to be government oversight to ensure proper caution is exercised.

A Very Real Threat

A similar rationale led tech CEOs such as Elon Musk to petition U.S. government leaders to get out in front and regulate. At a governors’ summit in July 2017, Musk raised his concerns:

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

And that is the rub. Getting people to care about ethics, let alone ethics in AI, is difficult. It is hard for most of us to grasp the idea of a computer program wreaking havoc without a physical presence. Musk went on to share a hypothetical scenario involving the Malaysian airliner that was shot down over Ukraine in 2014:

“If you had an AI where the AI’s goal was to maximize the value of a portfolio of stocks, one of the ways to maximize the value would be to go long on defense, short on consumer, start a war…Hack into the Malaysian Airlines aircraft routing server, route it over a war zone, then send an anonymous tip that an enemy aircraft is flying overhead right now.”

It makes the threat of AI seem very real and very simple, doesn’t it? And that is with a hypothetical AI whose intentions are seemingly benign. Imagine if nefarious intentions were deliberately programmed in. That is why now is the time for government leaders to work with industry and academic leaders in the fields of AI and ethics to establish proper regulations and partnerships, and so prepare for the existential risk that AI poses.

Government’s Response: Present and Future

Private organizations are not the only ones starting to pay attention to AI. Both the Obama administration, with its “Preparing for the Future of Artificial Intelligence” report, and the Chinese government, which recently set the goal of becoming the world leader in AI by 2030, have sent clear signals that this is an issue that can’t be ignored. It’s important to note, however, that in both cases the focus is primarily on economic outcomes. That makes sense: government’s responsibility is to its people, and people care about whether they will still be able to support their families when AI threatens so many occupations. Legislation around economic impact matters, but for now the threat posed by AI itself seems to be an afterthought. That, too, is understandable, because politicians are used to enacting regulation only after a technology’s problems surface. It is hard to predict the negative effects of every new technology, after all. With AI, however, that approach will not suffice: once the negative effects become abundantly clear, it may be too late for government intervention.

That is why politicians need to work with technology experts to regulate preemptively in this area. As a bonus, doing so will help policymakers understand the nature of AI at a fundamental level, which will serve them well when dealing with the economic changes AI is likely to cause.

So what does this all mean for the everyday person who doesn’t work in the field of AI, ethics, or politics? Well, we have two options: sit, wait, and cross our fingers that it all works out, or educate ourselves, raise awareness in our communities, and ask our representatives to properly regulate AI development. And if you’ve made it to the end of this post (and series), then you know more about AI development than 95 percent of the population. That knowledge is power. Now the choice is yours: will you take on the responsibility that comes with that power and play a small part in making sure our AI future is a good one?