The Beneficial AI Movement

Tim Dutton
9 min read · Jan 25, 2018


If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it…we had better be quite sure that the purpose put into the machine is the purpose which we really desire.

Norbert Wiener, 1960

The main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours.

Max Tegmark, 2017

Introduction

Ever since Alan Turing first posed the question, “Can machines think?” AI researchers and theorists have been infatuated with the possibility of machines becoming more intelligent than humans. Needless to say, the literature on the topic is rife with debate. While the perceived usefulness of AI has fluctuated greatly over the years, interest in the topic has recently undergone a major resurgence: advances in the field of deep learning have prompted a new wave of enthusiasm, funding, and technological breakthroughs. While several decades old, multi-layered neural networks only became viable in the past few years due to the availability of larger data sets, better algorithms, and faster computers.

In this light, neither the techniques nor the economic, philosophical, and technological debates surrounding AI are necessarily new; however, one aspect of the debate, which Nick Bostrom calls the “organizational side,” has changed. While there are still many areas of disagreement (e.g., basic definitions of intelligence and consciousness, techno-skeptics versus digital utopians, how to balance privacy and accountability), the most notable AI researchers and thought leaders now belong to the same “beneficial AI movement.” In an open letter signed by over 8,000 people, including AI pioneers such as Geoffrey Hinton, Stuart Russell, and Peter Norvig, signatories endorsed a research agenda whose goal is to “deliver AI that is beneficial to society and robust in the sense that the benefits are guaranteed: our AI systems must do what we want them to do.”

This post explores the emerging beneficial AI movement, the role of the Future of Life Institute in it, and the group’s proposed research agenda.

The Problem: Goal Misalignment

In “The Sorcerer’s Apprentice,” a segment of the Disney film Fantasia, Mickey Mouse, tired of carrying buckets of water to fill a cauldron, enchants a broom to do the work for him: with the help of magic, he commands the broom to carry the buckets itself and fill the cauldron. Satisfied with his work, Mickey falls asleep and dreams of becoming a powerful wizard. When he wakes, however, he finds the room flooded: even though the cauldron is overflowing, the broom does not stop working. Mickey tries to stop the broom by chopping it with an axe, but the broom only multiplies and speeds up. When all seems lost (Mickey is by this point trapped in a whirlpool while an army of brooms unwaveringly fills the cauldron), the Sorcerer appears, conjures a spell, and stops the flooding.

In a 2017 talk at Google, Nate Soares, Executive Director of the Machine Intelligence Research Institute, presented this story as the perfect metaphor for the problem of goal misalignment in AI. Soares explains how, in lieu of magic, Mickey could have programmed the broom to carry the buckets of water and fill the cauldron. To do so, he would likely define an objective function in which the broom scores 1 point if the cauldron is full and 0 if it is empty, and program the broom to maximize its expected utility. Why, then, does the broom overflow the cauldron? The objective function Mickey gave the broom was not the objective function he intended: Mickey not only left out that flooding the floor is worse than leaving the cauldron empty, but forgot a host of other terms as well. As Soares explains, “The problem was not so much the broom gaining a mind of its own and defying Mickey. The trouble here is that the broom did exactly what it was programmed to do all too well — and Mickey didn’t understand the consequences of what he was programming.”

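To make Soares’s point concrete, here is a minimal sketch of the misspecified objective (a toy illustration of the idea, not code from the talk; the bucket counts, scores, and greedy loop are all assumptions). It compares the objective Mickey actually programmed, which scores only a full cauldron, with the one he intended, which also penalizes a flooded floor; an agent maximizing the programmed score never has a reason to stop pouring.

```python
# Toy illustration of a misspecified objective (all numbers and names are assumptions).
from dataclasses import dataclass

CAULDRON_CAPACITY = 10  # buckets needed to fill the cauldron (assumed)


@dataclass
class State:
    water_poured: int = 0  # total buckets the broom has poured so far

    @property
    def cauldron_full(self) -> bool:
        return self.water_poured >= CAULDRON_CAPACITY

    @property
    def floor_flooded(self) -> bool:
        return self.water_poured > CAULDRON_CAPACITY  # overflow spills onto the floor


def programmed_objective(state: State) -> float:
    """What Mickey wrote: 1 point if the cauldron is full, 0 otherwise. Nothing about the floor."""
    return 1.0 if state.cauldron_full else 0.0


def intended_objective(state: State) -> float:
    """What Mickey meant: a full cauldron is good, but a flooded workshop is worse than nothing."""
    if state.floor_flooded:
        return -10.0
    return 1.0 if state.cauldron_full else 0.0


def run_broom(objective, steps: int = 25) -> State:
    """A greedy broom: at each step it pours another bucket unless stopping scores strictly better."""
    state = State()
    for _ in range(steps):
        keep_pouring = State(water_poured=state.water_poured + 1)
        # Under the programmed objective, pouring never scores worse than stopping,
        # so the broom never has a reason to stop -- even once the cauldron is full.
        if objective(keep_pouring) >= objective(state):
            state = keep_pouring
    return state


if __name__ == "__main__":
    flooded = run_broom(programmed_objective)
    careful = run_broom(intended_objective)
    print(f"programmed objective: poured {flooded.water_poured} buckets, flooded={flooded.floor_flooded}")
    print(f"intended objective:   poured {careful.water_poured} buckets, flooded={careful.floor_flooded}")
```

Nothing here requires a malicious or self-aware broom: the flooding falls straight out of an objective that omits a term the programmer cares about, which is the goal misalignment problem in miniature.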

The potential of creating artificial general intelligence (AGI), let alone artificial superintelligence (ASI), has led Soares and others to argue that we need to mainstream AI safety research. In particular, more attention needs to be given to how we can create what AI safety pioneer Eliezer Yudkowsky calls “friendly AI”: AI whose goals are aligned with ours. But this is easier said than done. As Max Tegmark argues, the goal misalignment problem has three sub-problems, none of which is currently solved: how do we make AI learn, adopt, and retain our goals (assuming, of course, that we, the human race, can even agree on what those goals are)? A related issue is what Nick Bostrom calls the orthogonality thesis: the ultimate goal of a rational system is independent of its level of intelligence. In other words, although we cannot predict what the ultimate goal of an ASI will be, we can predict certain instrumental goals, such as self-preservation and resource acquisition, that almost any ultimate goal encourages and that could lead an ASI to cause problems for humans.

But what about the short term? We can agree that creating “friendly AI” is a valuable pursuit, but it’s not the only research that needs to be done to ensure that AI has a positive impact on society. Enter, then, the beneficial AI movement.

Future of Life Institute & The Beneficial AI Movement

Founders and the Scientific Advisory Board for the Future of Life Institute

Created in 2014, the Future of Life Institute is a non-profit organization founded on the premise that the creation of AGI this century is a real possibility and that crucial questions need to be answered as soon as possible. Its advisory board includes several distinguished AI researchers and scientists, such as Stuart Russell, Nick Bostrom, Stephen Hawking, and Frank Wilczek. The institute first received media attention later in 2014, when it published an op-ed in the Huffington Post calling on the AI community to think more critically about the risks associated with long-term AI development, and it gained mainstream recognition in 2015 after hosting a conference in Puerto Rico called “The Future of AI: Opportunities and Challenges.” The conference is noteworthy for several reasons. First, the attendees signed the open letter referenced in the introduction of this post, which specified that the goal of AI should not be the creation of undirected intelligence, but beneficial intelligence. Second, the proposed AI research agenda, which will be discussed in the following section, was debated, refined, and published. And third, the conference ended with an announcement by Elon Musk that he was donating $10 million to the Future of Life Institute to promote AI safety research.

A key goal of the conference was to mainstream AI safety research both in the public’s eye and in the smaller AI research community. Max Tegmark writes in his book, Life 3.0, that the Puerto Rico conference was successful in moving towards this goal: for the first time the leading figures in AI research debated AI safety, which in turn led to a series of new publications, workshops, and conferences over the next two years. He cites the creation of the Partnership on AI and OpenAI, in addition to dozens of new reports on AI safety, as evidence of the success of the beneficial AI movement.

Beneficial AI 2017, Asilomar, California

The organization sponsored another conference in 2017 called “Beneficial AI 2017.” The three-day conference was twice as large as its 2015 counterpart and included prominent individuals such as Yoshua Bengio (Montreal), Stuart Russell (Berkeley), Eric Schmidt (Google), Shane Legg (DeepMind), Andrew Ng (Baidu), and many more. The key outcome of the conference was the Asilomar AI Principles, a collection of 23 principles intended to ensure the development of ethical, safe AI. To set a high standard for the final list, only principles that at least 90% of attendees agreed upon were included. The topics covered in the list are fairly comprehensive, but there are potential issues that need to be addressed:

- Principles 7 and 8 call for transparency when an AI system causes harm or is involved in a judicial decision, but do not explain how (or even whether) firms should be held accountable for the actions of their AI;

- Principles 10 and 11 maintain that AI systems should be aligned with human values and rights, but neither defines what “values” or “rights” mean; and

- Principle 18 states that an arms race in “lethal autonomous weapons” should be avoided, but does not mention other forms of nonlethal arms races, such as massive state investment in the commercial aspects of AI or the use of AI to manipulate the electoral process in one’s own country or in another’s.

Research Priorities for Beneficial AI

In “Research Priorities for Robust and Beneficial Artificial Intelligence,” Stuart Russell, Daniel Dewey, and Max Tegmark examine the various questions that need to be addressed in order to ensure that AI’s benefits are widely shared and its pitfalls avoided. “This research is by necessity interdisciplinary, because it involves both society and AI,” they write. “It ranges from economics, law, and philosophy to computer security, formal methods, and, of course, various branches of AI itself.”

In the short term, they argue that research should prioritize optimizing AI’s economic impact (automation and labour market forecasting; other market disruptions; and policy for managing adverse effects); law and ethics (autonomous vehicles, machine ethics, autonomous weapons, and privacy); and computer science (questions related to verification, validity, security, and control). Particularly interesting questions include:

- What has been the historical record on jobs displaced by automation? How long before displaced workers found new jobs? Did displacement contribute to overall inequality?

- Is there anything different about the advancement of AI happening now that would lead us to expect a change from our centuries-long historical record of jobs being displaced by automation?

- What factors contribute to a ‘winner-takes-all’ dynamic of software-based industries?

- What role should computer scientists play in the law and ethics of AI development and use?

The article’s long-term research priorities examine only the future challenges in the computer science research agenda. As a result, the article fails to address the likely long-term need for global norms, policies, and institutions to ensure the beneficial development and use of advanced AI. To ensure that future iterations of the Asilomar Principles are adopted widely around the world, the movement will need to investigate long-standing political questions regarding collective action, power, and governance.

Next Steps for the Beneficial AI Movement

The use of robots in Hollywood films and TV shows like Terminator, Westworld, and Ex Machina leads viewers to believe that AI’s threat to humanity is robots spontaneously turning against their human creators. But, as the Fantasia example shows, the real threat is the creation of intelligent machines whose goals are misaligned with humanity’s. The Future of Life Institute has so far done a commendable job of mainstreaming AI safety research, building an international community of researchers and thought leaders, and starting the norm-building process by publishing the Asilomar Principles.

The next step for the beneficial AI movement is to engage political scientists and ask questions about the global governance of AI. What are the power dynamics between different industry and research groups? Will the interests of the research community change with greater state funding? Will government intervention encourage AI research to become less transparent and accountable? What organizational principles and institutional mechanisms exist to best promote beneficial AI? What would international cooperation look like in research, regulation, and the use of AI? Will transnational efforts to regulate AI fall to the same collective action problems that have undermined global efforts to address climate change?

— — — — — — — — — — — — — — —

Tim Dutton is an AI policy researcher based in Canada. He is the founder and editor-in-chief of Politics + AI. He writes and edits articles for Politics + AI’s Medium page and provides contract work to governments and companies looking to learn about the emerging political risks and opportunities of AI. You can follow him on Twitter and connect with him on LinkedIn.

