Is AI the greatest social issue of our time? — Part 2

What does tomorrow hold?

Freeformers
6 min read · Jan 27, 2017


Author: Holly Morgan, Lead Coach at Freeformers

This is Part 2 of a two-part article. Part 1 can be found here.

Tomorrow

Humans are at the top of the food chain because we’re the smartest creatures on Earth. For now.

What if AI becomes smart enough to not only outperform us in many of our jobs, but to surpass human capabilities in most or all domains… including in AI design? Such AI would be capable of recursive self-improvement, potentially leading very rapidly to a machine superintelligence vastly more intelligent than the smartest humans.
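
To get a feel for why "very rapidly" is plausible, here is a toy compounding sketch in Python. The numbers are invented purely for illustration; nobody knows what a real self-improvement rate would look like.

```python
# Toy picture of recursive self-improvement (all numbers are invented).
# Assume each cycle the system improves its own capability by a fixed fraction.

capability = 1.0          # assumed starting point: parity with human AI researchers
improvement_rate = 0.10   # assumed 10% gain per self-improvement cycle

for cycle in range(100):
    capability *= 1 + improvement_rate

print(f"After 100 cycles: {capability:,.0f}x the starting capability")
# ~13,781x: compounding, not any single leap, is what makes the takeoff scenario fast.
```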

Would we be able to control this superintelligence? If not, what would it do? And imagine the power of a superintelligence embedded into an extensive network of internet-enabled devices… vehicles, pacemakers, trading systems, autonomous weapons… Many experts believe that human-level AI could be developed in the coming decades. If we haven’t managed to program human values into AI by that point, the results could be catastrophic.

The challenge we potentially face is not the one Hollywood would have us believe we face. The danger is not so much that AI will “turn on us”; the danger is that it will be indifferent to us. If we fail to codify our ethics with sufficient precision, a powerful superintelligence could commit atrocities in the pursuit of its programmed goals. The classic example is that of the paperclip maximiser: an AI programmed to maximise the production of paperclips which then converts everything in its wake — including human organisms — into paperclips. Or imagine an AI programmed to “stop all wars” which then proceeds to achieve this by wiping out the human race (dead people can’t fight).

An AI genie might be extremely effective at giving us what we ask for, but not what we really want. How the goals of a powerful superintelligence are defined could be the difference between a world rid of poverty and disease and a world rid of us. So we need to be able to specify what humans value very precisely… something philosophers have tried and failed to do for thousands of years.
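
To make the mis-specification worry concrete, here is a deliberately silly sketch in Python (the objective and numbers are made up for illustration, not taken from any real system): an optimiser told only to "maximise paperclips" keeps converting resources for as long as that raises its score, because nothing in the objective tells it to stop.

```python
# Toy illustration of goal mis-specification (purely hypothetical).
# The objective rewards paperclips and says nothing about anything else we value.

def naive_objective(resources_converted: float) -> float:
    """Reward = paperclips produced; human welfare never appears in the formula."""
    return resources_converted  # assume 1 unit of resources becomes 1 paperclip


def greedy_optimise(objective, total_resources: float, step: float = 1.0) -> float:
    """Keep converting resources while doing so increases the reward."""
    converted = 0.0
    while (converted + step <= total_resources
           and objective(converted + step) > objective(converted)):
        converted += step
    return converted


if __name__ == "__main__":
    world_resources = 1_000.0  # everything available, including the things we care about
    used = greedy_optimise(naive_objective, world_resources)
    print(f"Converted to paperclips: {used} of {world_resources} units")
    # Prints 1000.0 of 1000.0: the goal we wrote down is not the goal we meant.
```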

Existential risk

So back to my original question: Is AI the greatest social issue of our time? Arguably the greatest social issues of our time are those that pose an existential risk, i.e. those that threaten the very existence of humanity. At a recent talk, Skype co-founder Jaan Tallinn paraphrased philosopher Derek Parfit:

100% of humanity dying is a lot worse than 90%…and the difference is not 10%.

The reason is that 100% of humanity dying prevents an unimaginably large number of future generations from ever existing. The sheer number of potential lives at stake arguably makes any existential risk, whether from catastrophic climate change, nuclear war or unsafe AI, a trump card when it comes to our greatest social issues.
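
To see why "the difference is not 10%", here is a back-of-the-envelope sketch in Python. The population and generation figures are assumptions chosen only for illustration; the conclusion survives almost any plausible choice.

```python
# Back-of-the-envelope sketch of Parfit's point (all figures are assumptions).

current_population = 7.5e9            # roughly the world population in 2017
assumed_future_generations = 10_000   # suppose humanity could otherwise persist this long
assumed_people_per_generation = 1e10  # assumed size of each future generation

# A 90% catastrophe is horrific, but civilisation can eventually recover.
lives_lost_90 = 0.9 * current_population

# A 100% catastrophe also forecloses every future generation.
lives_lost_100 = current_population + assumed_future_generations * assumed_people_per_generation

print(f"90% catastrophe:  ~{lives_lost_90:.1e} lives")
print(f"100% catastrophe: ~{lives_lost_100:.1e} lives, counting everyone who never gets to exist")
print(f"Ratio: ~{lives_lost_100 / lives_lost_90:,.0f}x (nowhere near the 1.1x the raw percentages suggest)")
```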

Sanity Check

It’s easy to dismiss these worries as far-fetched nonsense. I certainly did when I first heard them in 2009. But this is no longer a fringe concern. Alongside Stephen Hawking, the potential dangers of powerful superintelligence have been stressed by a string of luminaries, from Elon Musk:

If you were a hedge fund or private equity fund and you said, ‘Well, all I want my AI to do is maximize the value of my portfolio,’ then the AI could decide, well, the best way to do that is to short consumer stocks, go long defense stocks, and start a war.

…to Bill Gates:

I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

…and similar warnings have come from Tim Urban and Sam Harris.

However, many people in the tech community still consider such worries about superintelligence overblown. For example, at this year’s Web Summit I attended a session called “Farewell, the age of human supremacy” in which both panelists dismissed concerns about the dangers of machine superintelligence. One of those panelists was Andrew McAfee, who is fond of the comparison between worrying about dangerous AI and worrying about overpopulation on Mars.

But the threat from AI is astronomically greater (remember the trump card?), and the scenario in which things progress too fast for us to keep up is far more plausible. When discussing the possibility of “species-ending” AI, researcher Stuart Russell reminds us that

the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.

I was pleased to see McAfee challenged by chair Izabella Kaminska at Web Summit: “Are you saying that if it’s difficult to get consensus on ethical guidelines for AI then we shouldn’t bother?” If a task is important enough, terribly challenging though it may be, perhaps we have no choice but to try.

Perhaps, though, what’s really going on here is a lack of imagination. Species-ending AI couldn’t really happen, could it? Not in the real world? AI researcher Eliezer Yudkowsky discusses this in the context of the recent US election and points out that there are no “nebulous forces” stopping things from getting much, much worse in the world. It takes a lot of work to keep things relatively stable, and a powerful disruptive force like a superintelligence could do a lot of damage. We’ve forgotten that countless times before in history, and we’re in danger of forgetting it again.

What now?

If we take a step back and think on a long-term, global scale, we can see that “it’s robots, not immigrants, that are stealing jobs”. We need to do what we can to keep everyone relevant in the Future of Work, at perhaps a faster pace than ever before, and we need to prepare a backup plan for if we can’t keep up.

We also need to make sure that key decision-makers and the tech community are engaging with the issues explored in this article. Ethics should be a core part of any AI-related curriculum, and influential figures in both the private and public sectors should recognise the duty they have to use AI responsibly.

And if the threat of superintelligence feels overwhelming, don’t lose hope. A number of groups have formed recently to tackle the challenge of keeping any machine superintelligence human-friendly, such as the Partnership on AI to Benefit People and Society. One member of the Partnership, AlphaGo creator DeepMind, has hired three AI safety experts in recent weeks. The Executive Director of another of these groups, the Centre for the Study of Existential Risk, reminded us last month that existential risk was just “one of the funny interests of a couple of academics a few years ago”. It’s exciting to see things moving in the right direction. However, if you don’t think they are moving fast enough, Cari Tuna’s Open Philanthropy Project makes a compelling case that such groups represent outstanding philanthropic opportunities. The Future of Life Institute also lists various other ways to contribute to a solution: volunteering, outreach, careers and research.

If we manage to overcome these challenges, the Fourth Industrial Revolution has the potential to deliver incredible advances in everything from health to education to sustainability. Future generations may look back on pre-AI civilisation the way we now look back on the Dark Ages.

So is Stephen Hawking right? Is AI likely to be either the best or worst thing ever to happen to humanity? Very possibly. And I’m confident that, at the very least, there’s huge value in getting it right.

Freeformers is the digital growth partner to FTSE 100 organisations, working with a diverse array of clients including Barclays, HSBC, John Lewis, Camelot and E.ON. Our mission: to create the future workforce, now.

