Future AI Safety 101

AmeliorMate
5 min readMay 2, 2019

This article is adapted from the last section of a previous AI safety article by AmeliorMate CEO Katie Evanko-Douglas

Current problems in artificial intelligence and machine learning (AI/ML) regarding weak systems are important enough it’s reasonable to put most of our focus there. But it’s also good to have a general idea about where we are headed regarding strong AI in order to keep an eye on it.

While narrow AI addresses and learns about a specific or narrow topic, general AI is more like a human brain without the biological constraints, capable of learning about anything and everything.

Nick Bostrom is a philosopher who provides a great first introduction to superintelligence because he’s good at clearly laying out the dangers for non-technical people. You are encouraged to watch his TED talk below and to read his book if it’s a topic that really interests you.

In Sum

  • Once general AI reaches human-level intelligence, it won’t stop there and hang out. It will become so intelligent we can’t even comprehend it at this point in time, pretty much instantaneously. The picture below shows the scale of human intelligence as we perceive it, with a large gap between the most and least intelligent humans.

But general AI will make this gap look laughably small and we will hardly be more intelligent than a mouse in comparison.

  • General AI is the last inventions humans will ever need to make because it will be better at inventing than we are” and it’s quite likely our future will be shaped by the preferences of AI we create.
  • “We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It’s extremely good at using available means to achieve a state in which its goal is realized.
  • This could be a bit like the tale of King Midas. His objective was to turn everything to gold, but he didn’t think through the ramifications of having a highly effective optimization process.
  • As an example of unintended side effects, what if we tell the AI to make every human smile and in order to be most effective it sticks electrodes into all of our faces to force us to physically smile while also bringing us pain and suffering?

It may seem like an absurd example but because general AI would be so powerful, the unintended consequences of giving superintelligent AI a task would spread rapidly to every human. So it’s quite a serious matter.

In some aspects the concerns about general AI are not unlike the concerns surrounding narrow AI. Many revolve around unintended side effects and malicious actors. Though these examples assume we even still have control over the AI and it listens when it give it commands.

The next danger is such a powerful AI getting into the hands of one or a small group of malicious actors. This is not an abstract concern. Hostile foreign actors are already thinking about it.

Superintelligent AI developed in secret and controlled by only a small group of malicious actors in order to basically enslave the rest of the world is one of the issues OpenAI aims to solve. They describe their mission as

To build safe AGI (Artificial General Intelligence), and ensure AGI’s benefits are as widely and evenly distributed as possible.

The logic being, something so powerful should not be controlled by a small subset of humans. To be safe, it should be distributed equally to all humans.

OpenAI’s transparency and caution provide a good model for such research. As we move into the future it’s clear more and more groups are going to work on AGI and the stakes are too high to risk countries developing AGI in silos as an arms race. It is imperative we figure out as a species how we’re going to handle this issue before it gets to that point.

A final concern about superintelligent AI is essentially about it becoming sentient. Many of the issues discussed previously would occur whether or not a superintelligent AI became it’s own distinct sentient being with its own set of preferences and desires.

Though the idea of humans creating new conscious beings is a bit biazarre to think about, it begs the question, if we are no longer the most intelligent being in the world, what will become of us? If our future is shaped solely by preferences of the AI it would be good know, will it be benevolent or will it decide to wipe us out? What will it want and how will it behave?

Nobody knows the answers to these questions just as nobody knows how long it will take to develop AGI. Current estimates seem to range between 10 and 100 years.

Nick Bostrom has a beautiful explanation of the implications of the situation whatever the timescale.

We should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. The answer is to figure out how to create superintelligent AI such that even if, when, it escapes, it is still safe because it is fundamentally on our side because it shares our values.

This is why AmeliorMate’s moonshot revolves around the need to uncover the laws of social physics in advance so we have concrete values to teach the AI and it’s clear when a side effect of an objective harms the wellbeing of a single human or group of humans and how they function together as a network and we have a practical way of having it implement not doing harm, which can then inform its behavior above all else.

--

--

AmeliorMate
0 Followers

Helping humanity adjust and thrive in the 21st century www.ameliormate.com