The Case for Ethics in AI

Balancing the risks and benefits of the ultimate technology

Nick White
4 min read · Jun 13, 2017
An image from 2001: A Space Odyssey. Released in 1968, it was one of the first films to portray the dangers of AI.

The development of Artificial Intelligence promises to give humankind new power, but like many technologies before it, AI is a double-edged sword. On one hand, by some estimates AI could add as much as $50 trillion of economic value by 2025. On the other hand, AI may severely disrupt our social, economic, and political systems, or perhaps even cause the extinction of the human race. Indeed, the dangers of AI may outweigh the benefits.

To chart a course that lets us reap the benefits of AI while mitigating its risks as much as possible, there are crucial safety measures we must take, ethical questions we must answer, and economic plans we must make. This spans law, politics, economics, physics, mathematics, computer science, and more. In fact, developing these guidelines may be more difficult than developing AI itself, and unlike an engineering problem, there is no clear right answer.

One thing is indisputable: research into the economics, ethics, and safety of AI must catch up to, and keep pace with, research into the technology itself. Thankfully, this is starting to happen. DeepMind recently published a paper in collaboration with the Future of Humanity Institute titled “Safely Interruptible Agents,” which outlines how to design reinforcement learning agents that a human operator can safely switch off, without the agent learning to resist or avoid being interrupted. Yet as new algorithmic methods are invented, we’ll need new techniques to keep them safe.
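To make the idea a little more concrete, here is a minimal toy sketch. The class name, interface, and parameters are my own illustration, not code from the DeepMind/FHI paper: a tabular Q-learning agent whose chosen action can be overridden by an operator’s interrupt. Because Q-learning is off-policy, it updates toward the best action in the next state rather than the action actually taken, so occasional interruptions don’t bias what the agent learns; that is the intuition behind safe interruptibility.

```python
import random
from collections import defaultdict

class InterruptibleQAgent:
    """Toy tabular Q-learning agent whose actions an operator can override."""

    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def propose_action(self, state):
        # Epsilon-greedy choice over the agent's own Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def act(self, state, interrupt=None):
        # A human operator may override the agent's proposed action.
        proposed = self.propose_action(state)
        return proposed if interrupt is None else interrupt

    def learn(self, state, action, reward, next_state):
        # Off-policy update: bootstrap from the greedy action in next_state,
        # not from whatever the (possibly interrupted) policy actually does,
        # so being interrupted does not distort the learned values.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```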

In a similar vein, as AI gains new capabilities and further transforms the economy, we’ll need new policies to keep the world from coming unglued. People have proposed rapid retraining as a way to get workers displaced by AI back into the workforce. But what happens when we have strong AI that is better than a human at any cognitive task? There may not be any new jobs to retrain for at all. Such a situation is far in the future, but the first economic impacts of AI are not. We need to start planning now.

As we hand over more and more control to autonomous AI agents, it is inevitable that these agents will face difficult ethical decisions. In the famous “trolley problem,” one must decide whether to divert a runaway trolley so that it kills one person instead of five. If an autonomous car encounters the same situation, how should it decide whom to kill? And if the autonomous car does kill someone, who is legally responsible?

The most fearsome and least well understood danger arises when an AI becomes able to improve its own intelligence, triggering an exponential “intelligence explosion.” At that point the AI is so capable that it cannot be stopped, and it can act so quickly that we may not have time to hit the off-switch. Such an AI may have an innocent objective, like Nick Bostrom’s hypothetical paperclip-making machine, but if that objective is not aligned with our own, it will think nothing of destroying everything in its path to achieve its goal.

Dave manually disables the rogue intelligent computer system HAL in 2001: A Space Odyssey.

If you think the idea of an AI apocalypse is absurd, you’re not alone. Many authorities in the AI community doubt that an “intelligence explosion” would be dangerous, or that it could happen at all. However, a recent survey of AI experts suggests there is cause for concern. According to the study, 40% believe that the “intelligence explosion” argument is valid, 70% believe that superintelligent AI could be a threat, and 36% think research in AI safety is as important as, or more important than, research into general AI.

To avoid an AI Armageddon, we must make sure that any AI’s objectives are aligned with our own. This is a difficult problem, but with research into game theory and decision theory we could potentially develop mathematical methods of proving that an AI algorithm is safe. To test our theories, we could deploy these agents in isolated simulations, confining them to a world they cannot escape and in which they can do no harm.
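As a loose illustration of that last idea (the class, the toy dynamics, and the reward below are hypothetical examples of mine, not an existing framework): an agent can be evaluated entirely inside a self-contained simulated world, with no access to files, networks, or real-world actuators.

```python
class SandboxedWorld:
    """A toy, fully self-contained environment: all state lives inside this
    object, so the agent's actions cannot affect anything outside it."""

    def __init__(self):
        self.state = 0

    def step(self, action):
        # Only the internal counter changes; no files, network, or hardware
        # are ever exposed to the agent.
        self.state += action
        reward = -abs(self.state)  # toy objective: keep the state near zero
        return self.state, reward

def evaluate_in_sandbox(policy, episodes=100, horizon=50):
    """Run a candidate policy purely inside the sandbox and report the
    average reward per episode, without letting it touch the outside world."""
    total = 0.0
    for _ in range(episodes):
        world = SandboxedWorld()
        obs = world.state
        for _ in range(horizon):
            obs, reward = world.step(policy(obs))
            total += reward
    return total / episodes

# Example: a trivial policy that always nudges the state back toward zero.
print(evaluate_in_sandbox(lambda obs: -1 if obs > 0 else 1))
```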

But that still leaves us with the hardest questions of all. To know whether an AI’s objectives are aligned with our own, we must first know what our objectives are. So, what are they, as individuals and as a species? Furthermore, when strong AI is able to do all our work for us, what will be our purpose? Is this a future that we want, or should we stop AI development before it arrives?

Elon Musk famously said, “with artificial intelligence we are summoning the demon.” It sounds dramatic, but it could be true. Others like Ray Kurzweil predict that AI will bring about a transcendent singularity. Whatever you believe, superintelligent AI will be the last technology we ever create. So we’d better get it right the first time.

Many thanks to Jaan Tallinn for his ideas and guidance throughout the writing of this piece. If you liked it, follow me on Medium and Twitter for more insights on AI.


Nick White

Co-Founder at Harmony, a high-throughput, low-latency and low-fee consensus platform | We are hiring! Apply at http://harmony.one/jobs