Victoria Krakovna on the Future of Life Institute and ‘x-risk’

Brain Bar
6 min read · May 25, 2016


When it comes to choosing extracurricular activities, one might expect a Harvard PhD candidate in statistics and machine learning to go for something that gives her brain a breather: knitting, maybe, or adult coloring, or Netflix binge-watching.

Victoria Krakovna, however, spends her spare time helping to save the world, and that's not hyperbole.

Meet Victoria Krakovna at Brain Bar Budapest

Krakovna co-founded the Future of Life Institute, which is dedicated to ensuring humanity's continued survival by mitigating technological risks. The institute also works on nuclear war and biotechnology, but its main focus is the set of risks associated with human-developed artificial intelligence. Those risks, Krakovna says, range from near-term concerns such as economic impact and the development of autonomous weapons to the long-term existential risk, or "x-risk," of an artificial superintelligence dispensing with its creators. Krakovna will talk about x-risk and how the Future of Life Institute is tackling it at Brain Bar Budapest 2016.

If AI-triggered x-risk sounds a little far-out, keep in mind that some very smart people, rarely confused with idle dreamers, are on board. Krakovna's co-founders include MIT theoretical physicist Max Tegmark and Skype co-founder Jaan Tallinn, among others. The scientific advisory board includes cosmologists Martin Rees and Stephen Hawking, physics Nobel laureates Frank Wilczek and Saul Perlmutter, philosopher Nick Bostrom, AI researchers Stuart Russell and Francesca Rossi, and super-entrepreneur Elon Musk.

Musk’s $10 million donation last year sparked a Future of Life Institute research grant program for robust and beneficial AI. The program started funding 37 research projects in September 2015.

In advance of her talk at the festival in Budapest, Krakovna took time out to field a few questions about the Future of Life Institute and x-risk.

Q: What is x-risk and how did you get interested in it to the point of co-founding an institute dedicated to understanding and doing something about it?

A: Existential risk is the risk that an event could wipe out all human life. The idea of caring about x-risk is motivated by utilitarianism and consequentialism — if you want to benefit a large number of people, some of whom may be far away in space or in time, then one way to do so is to reduce the probability of existential risk. This is a neglected cause, since most people and institutions focus on addressing more immediate and less speculative problems.

I originally came upon this cluster of ideas through the LessWrong forum, which focuses on rationality and futurism. I also met some of my co-founders through the LessWrong community and found that we shared the goal of taking action to mitigate existential risk.


Q: It’s interesting that the topic of x-risk is of interest to technically oriented folks such as yourself and Max Tegmark — and, for that matter, Elon Musk — as well as to philosophers like Nick Bostrom. What draws this diversity of thinkers, in your opinion?

A: The kinds of risks currently considered potential existential risks are mostly technological: nuclear war, AI, biotechnology, et cetera. The probability of such risks arising can increase quickly due to technological breakthroughs, while the probability of, say, a large asteroid hitting Earth remains constant. As scientists, we also feel a responsibility to ensure that science and technology have a positive impact on the world. Part of my interest in AI risk is motivated by working on AI professionally as a PhD student in machine learning.

The issues of existential risk are interdisciplinary in nature, encompassing technical research questions, philosophical questions, policy issues and more. These different aspects attract a diversity of thinkers, and we will need to combine perspectives from different fields to address the issues effectively.

Q: Zoltan Istvan is also going to be speaking at Brain Bar Budapest. How does the work of the Future of Life Institute jibe, or not jibe, with transhumanism?

A: There are many ideas we share with transhumanism, such as the importance of building a positive future, and preparing for large changes that could be brought on by technology. One difference is that transhumanism tends to focus on the positive impacts of technology, while we emphasize that both the benefits and the risks need to be deeply considered.

Q: What are some of the top AI-related risks, in your mind? How might those risks evolve over time with advancing technological development?

A: It seems likely that artificial superintelligence will be developed this century, which carries the risk of unleashing a powerful optimization process that is not necessarily optimizing for what humans want by default. Whatever objective is being pursued by a superintelligent system, this objective would likely benefit from acquiring more resources, the continued existence of the agent, and so on. The concern is not about malevolence, but about the combination of competence and an incorrectly specified model of human values or interests, resulting in potentially catastrophic unintended consequences. Human values and common sense would not be easy to specify to a program, and an agent with a sufficiently detailed model of the world is likely to resist modification of its objectives once they have been set, making correction difficult.

As technological development advances and increasingly capable AI systems are built, I would expect some of these issues to arise, but they would be much more problematic if the system were superintelligent. Steve Omohundro's work on basic AI drives and Nate Soares' ideas on the corrigibility of AI systems explore some of these questions in depth.

Q: How is the AI community responding to the Future of Life Institute’s work? I understand you’ve been accused of being Luddites.

A: The Luddite accusation didn't come from the AI community, and didn't seem worth engaging with. In the AI research community, questions of long-term AI safety have recently shifted from not being discussed at all to being a controversial subject. There are many researchers who share these concerns, and many others who are skeptical. Researchers across this spectrum are increasingly engaging with these issues, which we think is really important for making progress on them.

Q: When do you estimate that the Future of Life Institute’s worries about AI gone awry might become an immediate concern, absent deliberate actions to ensure that AI development proceeds along a safe path?

A: I would expect around 30–50 years, with large error bars. It is not an immediate concern right now.

Q: What are the key messages you hope to share at Brain Bar Budapest?

A: The development of powerful AI systems can bring large risks as well as large benefits in the long term. There is a lot of research we can work on now to increase the chances of advanced AI having a positive impact on the world. I would also like to correct some misconceptions about AI risk that often come up in the media: superintelligence is not an immediate concern, malevolence is not the problem, and so on.

Another message I’d like to impart at BBB is expressed well by a quote from AI pioneer Richard Sutton: “I don’t think people should be scared [of AI], but I do think people should be paying attention.” The Future of Life Institute is sometimes seen as “fear-mongering” about AI progress, especially by people who are only familiar with our work through the lens of alarmist media coverage. We think that society needs to prepare for the challenges brought by AI progress, and fear of AI is neither necessary nor sufficient for this to happen.
