Steering Clear of Catastrophe

Casper Skovgaard Petersen · Published in FARSIGHT · Nov 12, 2019

At the Centre for the Study of Existential Risk in Cambridge, top researchers study the global threats that could wipe out civilisation. On the occasion of his new book, we interviewed the centre’s co-founder, Martin Rees, about existential risks and the long-term future of humanity.

A pandemic super-virus, a global climate collapse, or an unrestrained and malignant AI that makes humanity superfluous: all are more or less likely disaster scenarios that pose a significant risk to the long-term well-being of humankind.

In fact, one way of understanding the modern world is to see it as governed by risk, with all our decisions about the future made in an attempt to control and minimise the chance of something going wrong. The famous sociologist Anthony Giddens dates the breakthrough of risk thinking to the 16th and 17th centuries, when Spanish and Portuguese explorers used the term in relation to sailing into the unknown. At the time, the word ‘risk’ had only a spatial dimension, but with the advent of capitalism, risk thinking became directed towards the future, as it was used to assess possible returns or losses on investments. Since then, risk thinking has found its way into every corner of society.

We make risk assessments all the time when we think about the future — as individuals (should I invest in cryptocurrencies?) as well as at the societal level (how should economic policy reflect the risk of a new recession within the next five years?). And then there are the global existential risks that threaten everything and everyone. These are poorly understood, as we have only a few historical references to draw on when we attempt to assess their probability and take adequate precautions.

Luckily, there is a research centre in Cambridge dedicated to surveying these kinds of risks and preparing the world for them. The Centre for the Study of Existential Risk (CSER) brings together specialists, technologists, and scientists from a broad range of disciplines, and the centre’s external advisors include, among others, the entrepreneur Elon Musk, the futurist Nick Bostrom, the geneticist George Church, and, until his death in 2018, the astrophysicist Stephen Hawking.

CSER’s co-founder, Martin John Rees, is a professor of cosmology and astrophysics, a member of the British House of Lords as Baron Rees of Ludlow, and a former president of the Royal Society, the world’s oldest national scientific institution. He founded CSER in 2012 together with Huw Price (professor of philosophy at Cambridge) and Jaan Tallinn (co-founder of Skype). Rees has a new book out, On the Future: Prospects for Humanity — a brief but dense treatment of the great challenges and possibilities that, according to the British professor, will shape our future. In its mere 272 pages, the book covers vast themes such as population growth, the future of space travel, biotechnology and cybertechnology, robots, and artificial intelligence. A good part of the book is also dedicated to the concerns of Professor Rees and his fellow CSER researchers: the existential risks that threaten the future of humanity and civilisation. The researchers at CSER divide these risks into four areas: extreme technological risks, global catastrophic biological risks, extreme risks and the global environment, and risks from artificial intelligence.

We interviewed Rees about the global risk landscape, and about where he believes we are heading.

Why did you decide to start CSER together with Huw Price and Jaan Tallinn?

“I think all of us felt that, whereas there is a huge amount of study of more conventional risks — carcinogenic food, low-dose radiation, plane crashes and so on — there is not enough study of the newly emergent risks, which are of low probability but of extreme consequence. CSER is based in Cambridge, which is probably the number one scientific university in Europe. So, we feel we have an obligation to create more awareness of the extreme risks, with the aim of trying to distinguish between those that are science fiction and those that are realistically emergent, and to try to minimise the very serious ones.”

So, the goal of CSER is to influence and guide public understanding of which risks are real and important and which are irrational?

“Yes, I would say that is our aim. We are a small group, and there are only half a dozen groups in the world like ours that focus on extreme risks. We do it because these kinds of threats are rather under-studied. Being embedded in a major university, we can draw on expertise from different fields and use our combined knowledge to try to discriminate between the threats that are important and worth worrying about, and those that are less so. Of course, experts can be wrong, but they are far more likely to offer sensible guidance than people who rarely think about these things.”

What sparked your interest in existential risks and how to prevent them?

“I’ve always been politically engaged. I was campaigning during the Cuban Missile Crisis. In the 1980s, I attended conferences where I had the privilege of meeting senior people like Joseph Rotblat and Hans Bethe, who had been involved in making the atomic bomb and who both felt a special obligation to do what they could to harness the powers they had helped unleash. They weren’t very successful in doing so, but they felt an obligation nonetheless. I came to feel a similar obligation, both as President of the Royal Society and as a member of the House of Lords: we need to consider the social ramifications of the new technologies in development today.”

And how about your training as a cosmologist and astrophysicist? Has that shaped the way you think about our future and the risks facing us?

“Not particularly. Maybe in the sense that cosmologists and astrophysicists are perhaps more aware of the far future than the average person, because we work with enormous time-scales. As I say in my book, most educated people are aware that we are the outcome of four billion years of evolution. I suppose we have an extra perspective in that we realise that this century is very special compared to previous ones, and that, if we do things very badly, we can foreclose future potentialities.”

Extreme risks relating to the global environment are one of CSER’s main areas of research. Photo: Carmen Marchena Alonso.

We know plenty of recent examples of things ending badly: devastating pandemics like the Spanish Flu, or the several ‘close calls’ where the world stood on the brink of nuclear war. And then there is climate change, which brings its own set of threats. These kinds of risks are well documented and easily understood. Other risks that you study at CSER seem more uncertain and speculative. Why, for instance, do you believe that developments in AI pose a potentially existential threat to us?

“One point I make is that we can’t predict more than 20 years ahead when it comes to technology. Some projections we can make with relative certainty — things like population increases and global warming. But when it comes to technology, we can’t predict with confidence that far ahead. The smartphone would have seemed like magic 20 years ago, and no one would have predicted how fast it would spread or the impact it would have.

With that said, some scientists fear that computers may develop minds of their own and pursue goals that may be contrary to human wishes, or that they may even treat humans as encumbrances. Some, for instance Stuart Russell at Berkeley and Demis Hassabis of DeepMind, think that AI already needs guidelines for ‘responsible innovation’. Others, like the roboticist Rodney Brooks, who created the Baxter robot and the Roomba vacuum cleaner, think these concerns are too far from realisation to be worth worrying about — they remain less anxious about artificial intelligence than about real stupidity. What has happened at our centre is that we now have a separate group, the Centre for the Future of Intelligence, which tackles the general issues arising from the social impact of AI.

My personal belief is that in the long run, AI does pose a risk, but in the short run I worry more about bio and cyber risks.”

How would you characterise the threat from cyber?

“Cyber is an example of how just a few people can cause major damage. In my book I quote a report from the US Department of Defense, recently corroborated by General David Petraeus. The report describes how a state-level cyber-attack could shut down the electricity grid in a large part of the United States — and that this would merit a nuclear response. So, cyber threats are indeed very serious, especially when taken in connection with other threats, such as the nuclear one.”

What about the risks relating to biotechnology?

“Misuse of biotech is another big risk, because it’s very hard to enforce any regulation. We can do our best, but we can’t expect to be effective at it. Today, a single person or a small group of people can cause an effect that cascades widely — and they don’t need a huge research facility to do so. In 2011, two research groups, one in the Netherlands and another in Wisconsin, showed that it was surprisingly easy to make the H5N1 influenza virus both more virulent and more transmissible.

My worst nightmare would be an unbalanced ‘loner’, with biotech expertise, who believed, for instance, that there were too many humans on the planet and didn’t care who, or how many, were infected.”

How do we mitigate these kinds of risks — the ones that stem from human error, malice or poor judgement?

“As I say in my book, the global village will have its village idiots and they’ll have global range. For this reason, I believe the balance between freedom, privacy and security is going to have to shift a bit because the consequences of what can be done by error or design using these technologies are far greater today.”

How should we strike that balance? Are you in favour of regulating the use of certain technologies or limiting their spread?

“I don’t have a solution. Obviously, we must do everything possible to avoid something happening by mistake, but we can’t guard ourselves against the intentional misuse of these technologies. I think this is a serious challenge.

I also worry about the fragility of society right now. There have been catastrophes in history which, if something similar happened today, would have far greater consequences because of how interconnected the world has become. In the 14th century, the Black Death killed half the inhabitants of some European towns, and the surviving population carried on. But if something similar happened today, once the hospitals reached their capacity, it is likely that the social fabric would break down, because we are so dependent on our systems functioning.”

I think the fragility you describe is what motivates people who want to take survival after a potential societal collapse into their own hands. Recently there has been a lot of reporting on Silicon Valley hedge-fund billionaires and technologists buying up real estate in New Zealand and building bunkers there as a plan B in case things turn sour. What do you make of this, as someone who studies existential risks?

“Those you are mentioning are extremists, often extreme libertarians. I don’t think they are the mainstream. In any case, that sort of preparation is not as widespread as the building of fallout shelters in the US in the 1950s and ’60s, when there was a very real threat of nuclear war.”

In Denmark, many fallout shelters have been converted into rehearsal studios for musicians. Would this worry you?

“Not really, no. The reason fallout shelters are irrelevant today is that what is more likely than a nuclear war is a breakdown in society. Imagine what would happen in a major western city that had no electricity. Within a few days our cities would be uninhabitable and anarchic.”

The number of active nuclear weapons peaked in 1986 at 70,300. By 2018, it had been reduced to approximately 3,750 active warheads.

Isn’t the nuclear threat still very real?

“The risk of a nuclear bomb going off in the Middle East, India or Pakistan is probably higher than ever. And, of course, a third world war involving nuclear weapons would be over in a few days. But the risk of thousands of nuclear bombs going off is not as high as it was during the Cold War, because the number of weapons has been scaled down. When you realise that there have been situations such as the Cuban Missile Crisis, where, as Robert McNamara later estimated, there was a one-in-three chance of a catastrophe that would have destroyed the fabric of European civilisation, you could argue that even a Soviet takeover would have been preferable to running that risk.

Of course, there is still the risk that the next nuclear standoff will not be handled as well as those during the Cold War were. So, the nuclear threat hasn’t gone away. And to it we must add the 21st-century technologies.”

Over the last decades there has been a shift in the public perception of the future, from a hopeful place to a threatening one. The future is no longer as bright as it was in the 1950s and ’60s, when we were promised flying cars and bases on the moon. You see the shift most clearly in popular culture and science fiction, which is almost always very bleak. Why do you think this is?

“I think it reflects the realisation that the stakes are higher than they have ever been. The worst that could happen now is global. There is a book by Jared Diamond, Collapse, in which he goes through examples of how and why societies have collapsed in the past. The difference is that those were all localised events. Today, a civilisational collapse could hardly remain local; such an event would almost certainly be global in scale.

I also think it has something to do with how hard it is to make realistic predictions based on the technology we have today. Manned spaceflight, people walking on Mars and supersonic airliners are all things that, 40 years ago, we thought we would have today. And those things would have been possible had the investments been maintained, in the Apollo programme for instance. But that takes economic and political pressure. On the other hand, the development of the internet and smartphones has impacted the world far more, and much faster, than we could have imagined. And I would argue those technologies have had an overall benign effect. The same can be said of technologies that have to do with improving health.

So, there is a distinction between what can be done technologically and what actually happens. We don’t know which technologies will become widely adopted. This is true of AI and human enhancement as well.”

Your book is not a techno-utopian one. And it’s fair to say the work you do at CSER deals more with the negative aspects of technological growth than with the positives. Would you say you are optimistic that technology will solve more problems than it causes in the long run? Will it be a net gain or a net loss in terms of the global risk level?

“Up until now, technology has done more good than harm. It’s clear that the lives we live today are better than those of previous generations — and that is largely thanks to technological developments. But one must keep in mind that while the new technologies have benefits, they also create a new class of risks. Ensuring that the balance remains positive, as it has been until now, will be a big challenge.”

Your book deals with the more speculative and cosmic far future scenarios as well. Does humanity’s future lie off the Earth, on other planets?

“If I were an American, I would not support spending any money on NASA’s manned space programme. Nor would I support manned space programmes if I were a European. The benefits of sending people into space have all disappeared with the advent of robotics. Robots can do what people do much more easily.

Nonetheless, there are private companies doing manned spaceflight, and I think they are very good news. They will be able to send people who want to take risks into space, just as an adventure. A 2 percent risk of failure is too great for NASA, but not necessarily for a private spaceflight company sending out extreme-sports people.”

Extreme sports?

“Yes, manned spaceflight should be for people prepared to accept high risks, rather than for tourists. Looking further into the future, I think these people will be the first to redesign themselves into what will essentially become a new species. They will use all the resources of genetic technology to modify themselves and to spearhead any expansion of intelligence from the Earth into wider space. They might become electronic, immortal entities made for long-distance space travel.

I think we should encourage these rich people to spend money on it.”

You dedicate a few pages to the possibility (or risk) of finding alien life. What are your thoughts on this?

“If we do detect evidence of something artificial or technological in space, I think it’s far more likely it will come from something electronic rather than something made from flesh and blood.

On the cosmic scale, if there is life out there already, what happens to us humans is insignificant. But if life is unique to the Earth, then of course what happens here on Earth is of cosmic consequence. If we screw things up this century, it forecloses the possibility of life spreading beyond Earth.”

So it depends on what the village idiots do?

“Yes. As I say in my book, there could be a ‘bottleneck’ at our own evolutionary stage — the stage we’re at during this century, when we are developing powerful technology. The long-term prognosis for ‘Earth-sourced’ life depends on whether humans survive this phase, despite our vulnerability to the kinds of hazards we are currently facing.”
