Is AI the greatest social issue of our time?

MJ
8 min read · Jan 5, 2017


This article originally appeared at freeformers.com.

[Artificial intelligence] is likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.

– Stephen Hawking, 2015

A colleague opened with this quotation earlier this year when writing about what artificial intelligence (AI) is. Today I want to take Stephen Hawking’s claim and ask: Is he right?

Today

What are some of the challenges and opportunities AI is already bringing?

It is hard to think of an area of our lives where AI could not help us to make dramatic improvements. Agriculture, energy, engineering, finance, government, logistics, entertainment, physical and mental health, education, research, communication, environmental sustainability… all of these domains can benefit hugely from software that is capable of behaving ‘intelligently’ — applying existing knowledge to new circumstances and exhibiting some level of creativity.

But of course one major challenge that faces us is unemployment: if AI can do our jobs so much better than us, what’s left for us to do? As AI becomes increasingly sophisticated, more and more jobs become susceptible to automation, and the further human society falls behind the pace of digitisation, the more severe and prolonged the resulting unemployment will be. We are all familiar with self-service machines and have at least read about autonomous cars, but automation is even starting to encroach on areas such as law and management. Will we be able to transition the displaced workforce fast enough to keep them relevant in an economy with robot lawyers and algorithm bosses? The nature of jobs has always changed over time, but as the pace of technological progress quickens, so too must the speed with which we retrain the workforce (perhaps using AI itself).

At the same time, we can get excited about what the new jobs will look like. There will be more value placed on emotional intelligence, creativity and ethics — arguably some of the most fulfilling skills to exercise at work.

And perhaps there will be more jobs. Or perhaps there will be fewer. Suppose no amount of retraining is able to maintain or increase current employment rates and we face the challenge of keeping ever greater numbers of people financially supported and fulfilled. What then? The wealth created by widespread use of AI could potentially solve the financial challenge, and a life of leisure certainly sounds more appealing than many jobs. But employment serves a wide variety of goals. Gainful employment gives people purpose and a sense of being valued, and it also has wider social benefits such as better health and lower crime. How great is the social cost of failing to fill that void? Will people attempt to fill it in socially destructive or socially valuable ways? These are questions we need to be asking ourselves.

AI also has the potential to reduce stereotyping and create a fairer world, so long as we avoid programming our personal unconscious biases into AI software. An example of our failure to do this was the case of the world’s first AI-driven beauty contest this year: only one of the 44 winners was dark-skinned. The problem seems to have been that the image samples selected to train the AI lacked racial diversity.
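
To make the mechanism concrete, here is a minimal, hypothetical sketch of how a judge trained only on one group’s faces ends up scoring another group poorly regardless of merit. The features, numbers and scoring rule below are invented purely for illustration and bear no relation to the actual contest system:

```python
# Hypothetical sketch: an unrepresentative training set skews an AI judge.
# All features, proportions and the scoring rule are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def faces(n, group):
    """Toy 'face embeddings': feature 0 is a genuine merit signal,
    feature 1 is a skin-tone proxy (differs by group, irrelevant to merit)."""
    merit = rng.normal(1.5, 1.0, n)                     # equally high merit in both groups
    tone = (0.0 if group == "light" else 3.0) + rng.normal(0.0, 0.2, n)
    return np.column_stack([merit, tone])

# The training set lacks diversity: past 'winning' faces are all light-skinned.
past_winners = faces(500, "light")

def score(face):
    """Judge a face by its similarity to the nearest past winner."""
    return -np.linalg.norm(past_winners - face, axis=1).min()

for group in ("light", "dark"):
    avg = np.mean([score(f) for f in faces(200, group)])
    print(f"{group}-skinned, equal merit: average score = {avg:.2f}")

# Dark-skinned faces score far lower simply because nothing like them appeared
# in the training data, even though the scoring rule never mentions skin tone.
```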

Another issue is surveillance. The rise of communication technology — from phone to email to texting — has made mass surveillance of the population by governments possible for the first time. This is currently limited by the capacity to analyse all that data, but the rise of AI could allow automated analysis sophisticated enough to undermine privacy in an unprecedented manner.

We at Freeformers share these hopes and fears. They are why we:

  • digitally upskill teams and companies, empowering staff with a mindset that drives them to continue to develop themselves in the digital space
  • deliver free training to new generations joining the workforce
  • champion a diverse workforce that challenges bias, discusses ethical issues in the Future of Work and uses the power AI gives us for good

One of our founders, Gi Fernando, recently asked me, “What would a Hippocratic Oath for developers look like?” This is the kind of thinking that needs to come naturally to educators and employers alike as AI plays an increasingly powerful role in our lives.

With great power comes great responsibility.

– Ben Parker, Spider-Man

Tomorrow

What if AI becomes smart enough to not only outperform us in many of our jobs, but to surpass human capabilities in most or all domains… including in AI design? Such AI would be capable of recursive self-improvement, potentially leading very rapidly to a machine superintelligence vastly more intelligent than the smartest humans.
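
To see why “very rapidly” is plausible, here is a toy calculation with entirely made-up numbers: once a system’s capability feeds back into the rate at which it improves itself, growth stops being steady and becomes explosive.

```python
# Toy illustration of recursive self-improvement, with made-up numbers:
# each cycle, the rate of improvement itself scales with current capability.
capability = 1.0
for cycle in range(1, 11):
    capability *= 1 + 0.5 * capability   # smarter systems improve themselves faster
    print(f"cycle {cycle}: capability {capability:,.1f}")

# For comparison, a steady 50% gain per cycle would give only ~58x after ten
# cycles; the feedback loop above blows past that within five.
```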

Humans are at the top of the food chain because we’re the smartest creatures on Earth. For now.

Would we be able to control this superintelligence? If not, what would it do? And imagine the power of a superintelligence embedded into an extensive network of internet-enabled devices… vehicles, pacemakers, trading systems, autonomous weapons… Many experts believe that human-level AI could be developed in the coming decades. If we haven’t managed to program human values into AI by that point, the results could be catastrophic.

The challenge we potentially face is not the one Hollywood would have us believe we face. The danger is not so much that AI will “turn on us”; the danger is that it will be indifferent to us. If we fail to codify our ethics with sufficient precision, a powerful superintelligence could commit atrocities in the pursuit of its programmed goals. The classic example is that of the paperclip maximiser: an AI programmed to maximise the production of paperclips which then converts everything in its wake — including human organisms — into paperclips. Or imagine an AI programmed to “stop all wars” which then proceeds to achieve this by wiping out the human race (dead people can’t fight).

An AI genie might be extremely effective at giving us what we ask for, but not what we really want. How the goals of a powerful superintelligence are defined could be the difference between a world rid of poverty and disease and a world rid of us. So we need to be able to specify what humans value very precisely… something philosophers have tried and failed to do for thousands of years.
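
A toy sketch makes the distinction concrete. The actions and numbers below are invented for illustration; the point is that an optimiser maximises exactly the objective we wrote, not the one we meant:

```python
# Hypothetical numbers throughout. The optimiser is not malicious;
# it simply maximises exactly what we told it to maximise.
actions = {
    "run factories normally":          {"paperclips": 1_000,  "humans_harmed": 0},
    "convert all farmland to mills":   {"paperclips": 10**6,  "humans_harmed": 10_000},
    "convert everything and everyone": {"paperclips": 10**12, "humans_harmed": 8 * 10**9},
}

def naive_objective(outcome):
    # What we asked for: "maximise the production of paperclips".
    return outcome["paperclips"]

def intended_objective(outcome):
    # What we actually wanted: paperclips are nice, but people matter vastly more.
    return outcome["paperclips"] - 10**9 * outcome["humans_harmed"]

print(max(actions, key=lambda a: naive_objective(actions[a])))     # the catastrophic plan
print(max(actions, key=lambda a: intended_objective(actions[a])))  # the sane plan
```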

Existential risk

So back to my original question: Is AI the greatest social issue of our time? Arguably the greatest social issues of our time are those that pose an existential risk, i.e. those that threaten the very existence of humanity. At a recent talk, Skype co-founder Jaan Tallinn paraphrased philosopher Derek Parfit:

100% of humanity dying is a lot worse than 90%…and the difference is not 10%.

The reason is that 100% of humanity dying prevents an unimaginably large number of future generations from ever existing. The sheer number of potential lives at stake arguably makes any existential risk — whether from catastrophic climate change, nuclear war or unsafe AI — a trump card when it comes to our greatest social issues.
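
A quick back-of-the-envelope calculation shows the shape of the comparison. The future-population figure below is an arbitrary assumption chosen only for illustration, not a forecast:

```python
# Illustrative arithmetic only: the future-population figure is an assumption.
current_population = 7.5e9
potential_future_people = 1e16      # hypothetical count of everyone yet to be born

loss_90_percent = 0.9 * current_population
loss_100_percent = current_population + potential_future_people   # no descendants, ever

# Roughly a million times worse, not "10% worse".
print(f"{loss_100_percent / loss_90_percent:,.0f}x")
```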

Sanity Check

It’s easy to dismiss these worries as far-fetched nonsense. I certainly did when I first heard them in 2009. But this is no longer a fringe concern. Alongside Stephen Hawking, the dangers of powerful superintelligence have been stressed by luminaries from Elon Musk:

If you were a hedge fund or private equity fund and you said, ‘Well, all I want my AI to do is maximize the value of my portfolio,’ then the AI could decide, well, the best way to do that is to short consumer stocks, go long defense stocks, and start a war.

…to Bill Gates:

I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

…and from Tim Urban to Sam Harris.

However, many people in the tech community still consider such worries about superintelligence overblown. For example, at this year’s Web Summit I attended a session called “Farewell, the age of human supremacy” in which both panelists dismissed concerns about the dangers of machine superintelligence. One of those panelists was Andrew McAfee, who is fond of the comparison between worrying about dangerous AI and worrying about overpopulation on Mars.

But the threat from AI is astronomically greater (remember the trump card?), and the scenario in which things progress too fast for us to keep up is much more plausible. As AI researcher Stuart Russell reminds us when discussing the possibility of “species-ending” AI:

the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.

I was pleased to see McAfee challenged by chair Izabella Kaminska at Web Summit: “Are you saying that if it’s difficult to get consensus on ethical guidelines for AI then we shouldn’t bother?” If a task is important enough, terribly challenging though it may be, perhaps we have no choice but to try.

Perhaps, though, what’s really going on here is a lack of imagination. Species-ending AI couldn’t really happen, could it? Not in the real world? AI researcher Eliezer Yudkowsky discusses this in the context of the recent US election and points out that there are no “nebulous forces” stopping things from getting much, much worse in the world. It takes a lot of work to keep things relatively stable, and a powerful disruptive force like a superintelligence could do a lot of damage. We have forgotten that lesson countless times throughout history, and we’re in danger of forgetting it again.

What now?

If we take a step back and think on a long-term, global scale, we can see that “it’s robots, not immigrants, that are stealing jobs”. We need to do what we can to keep everyone relevant in the Future of Work, at perhaps a faster pace than ever before, and we need to prepare a backup plan for if we can’t keep up.

We also need to make sure that key decision-makers and the tech community are engaging with the issues explored in this article. Ethics should be a core part of any AI-related curriculum and influential figures in both the private and public sector should recognise the duty they have to use AI responsibly.

And if the threat of superintelligence feels overwhelming, don’t lose hope. A number of groups have formed recently to tackle the challenge of keeping any machine superintelligence human-friendly, such as the Partnership on AI to Benefit People and Society. One member of the Partnership — AlphaGo creators DeepMind — has hired three AI safety experts in recent weeks. The Executive Director of another of these groups — the Centre for the Study of Existential Risk — reminded us last month that existential risk was just “one of the funny interests of a couple of academics a few years ago”. It’s exciting to see things moving in the right direction. However, if you don’t think they are moving fast enough, Cari Tuna’s Open Philanthropy Project makes a compelling case that such groups represent outstanding philanthropic opportunities. The Future of Life Institute also lists various other ways to contribute to a solution: volunteering, outreach, careers and research.

If we manage to overcome these challenges, the Fourth Industrial Revolution has the potential to deliver incredible advancements in everything from health to education to sustainability. Future generations may look back on pre-AI civilisation the way we now look back on the Dark Ages.

So is Stephen Hawking right? Is AI likely to be either the best or worst thing ever to happen to humanity? Very possibly. And I’m confident that there’s at least huge value in getting it right.
