ARE THERE GENUINELY ‘EXISTENTIAL’ THREATS?

By Martin John Rees,
Cosmologist and Astrophysicist

Martin Rees asks whether society is paying enough attention to potentially catastrophic threats that could destroy it. Even if some of these threats have minuscule probabilities of eventuating, he contends that we ignore them at our peril.

Ever since the invention of thermonuclear weapons, we’ve faced the risk of human-induced devastation on a global scale, and in our interconnected world we are vulnerable to the downside of increasingly powerful 21st-century technologies. We will never be fully secure against bio-error and bioterror. Society could be dealt shattering blows by the misapplication of technology that exists already, or that we can confidently expect within the next 20 years. These threats could be devastating, but they would be unlikely to wipe us all out.

But are there conceivable events that could threaten the entire Earth, and snuff out all humans — or even all life-forms? Promethean concerns of this kind were raised by scientists working on the atomic bomb project during the Second World War. Could we be absolutely sure that a nuclear explosion wouldn’t ignite the world’s atmosphere or oceans? Before the first bomb test in New Mexico, the great physicist Hans Bethe and two colleagues addressed this issue; they convinced themselves that there was a large safety factor. We now know for certain that a single nuclear weapon, devastating though it is, can’t trigger a nuclear chain reaction that would utterly destroy the Earth or its atmosphere.

But what about even more extreme experiments? Physicists were (in my view quite rightly) pressured by the media to address the speculative ‘existential risks’ that could be triggered by powerful accelerators that generate unprecedented concentrations of energy. Could physicists unwittingly convert the entire Earth into particles called ‘strangelets’ — or, even worse, trigger a ‘phase transition’ that would rip apart the fabric of space itself? Fortunately, reassurance could be offered. Indeed, I was one of those who wrote papers pointing out that cosmic-ray particles in the Galaxy crash into other particles with much higher energies than any achieved in accelerators — but haven’t ripped space apart. And cosmic rays have penetrated white dwarfs and neutron stars without triggering their conversion into ‘strangelets’.
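
To give a feel for the scale of that reassurance, here is a minimal back-of-the-envelope sketch in Python. The figures are standard physics values I am supplying for illustration, not numbers quoted in the essay; it compares the collision energy of the LHC with that of the most energetic cosmic ray ever recorded striking a proton in space.

```python
import math

EV_PER_TEV = 1e12
PROTON_REST_ENERGY_EV = 0.938e9     # proton rest energy, about 0.938 GeV

# LHC design centre-of-mass energy: roughly 14 TeV.
lhc_cm_energy_ev = 14 * EV_PER_TEV

# The most energetic cosmic ray yet detected (the 1991 'Oh-My-God'
# particle) carried roughly 3e20 eV.
cosmic_ray_energy_ev = 3e20

# For an ultra-relativistic particle hitting a proton at rest, the
# centre-of-mass energy is sqrt(2 * E * m_p * c^2).
cosmic_cm_energy_ev = math.sqrt(2 * cosmic_ray_energy_ev * PROTON_REST_ENERGY_EV)

print(f"LHC collision energy:        {lhc_cm_energy_ev / EV_PER_TEV:6.0f} TeV")
print(f"Cosmic-ray collision energy: {cosmic_cm_energy_ev / EV_PER_TEV:6.0f} TeV")
# Roughly 750 TeV versus 14 TeV: nature has been running this 'experiment'
# at energies some fifty times beyond our accelerators, without incident.
```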

But physicists should surely be circumspect and precautionary about carrying out experiments that generate conditions with no precedent even in the cosmos — just as biologists should avoid the release of potentially devastating genetically modified pathogens.

So how risk-averse should we be? Some would argue that odds of 10 million to one against a global disaster would be good enough, because that is below the chance that, within the next year, an asteroid large enough to cause global devastation will hit the Earth. This is like arguing that the extra carcinogenic effect of artificial radiation is acceptable if it doesn’t so much as double the risk from natural radiation. But to some, even this limit may not seem stringent enough. We may become resigned to a natural risk (like asteroids or natural pollutants) that we can’t do much about, but that doesn’t mean we should acquiesce in an extra, avoidable risk of the same magnitude.

Designers of nuclear power stations have to convince regulators that the probability of a meltdown is less than one in a million per year. Applying the same standards, if there were a threat to the entire Earth, the public might properly demand assurance that the probability is below one in a billion — even one in a trillion — before sanctioning such an experiment. We may offer these odds against the Sun not rising tomorrow, or against a fair die giving 100 sixes in a row; but a scientist would seem presumptuous to place such extreme confidence in any theory about what happens when atoms are smashed together with unprecedented energy. If a congressional committee asked: ‘Are you really claiming that there’s less than one chance in a billion that you’re wrong?’ I’d feel uncomfortable saying yes. But on the other hand, if you asked: ‘Could such an experiment reveal a transformative discovery that — for instance — provided a new source of energy for the world?’ I’d again offer high odds against it. The issue is then the relative probability of these two unlikely events — one hugely beneficial, the other catastrophic.

Innovation is always risky, but if we don’t take these risks we may forgo disproportionate benefits. Undiluted application of the ‘precautionary principle’ has a manifest downside. As Freeman Dyson argued in an eloquent essay, there is ‘the hidden cost of saying no’.

Also, the priority that we should assign to avoiding truly existential disasters, even when their probability seems infinitesimal, depends on an ethical question posed by the Oxford philosopher Derek Parfit. Consider two scenarios: scenario A wipes out 90 percent of humanity; scenario B wipes out 100 percent. How much worse is B than A? Some would say 10 percent worse: the body count is 10 percent higher. But others would say B was incomparably worse, because human extinction forecloses the existence of billions, even trillions, of future people — and indeed an open-ended post-human future.

Especially if you accept the latter viewpoint, you’ll agree that existential catastrophes — even if you’d bet a billion to one against them — deserve more attention than they’re getting. That’s why some of us in Cambridge — both natural and social scientists — are setting up a research program to compile a more complete register of extreme risks, including improbable-seeming ‘existential’ risks, and to assess how to enhance resilience against the more credible ones. Moreover, we shouldn’t be complacent that all such probabilities are minuscule.
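
The arithmetic behind these comparisons is easy to make concrete. Below is a minimal sketch in Python: the asteroid, meltdown, and die-roll figures come from the passage above, while the population numbers in the Parfit comparison are loud assumptions of my own, chosen purely for illustration.

```python
# Benchmarks mentioned in the essay, as rough per-year probabilities.
asteroid_global_hit = 1e-7      # 'ten million to one' against a globally devastating impact
meltdown_limit = 1e-6           # regulatory bound for a reactor meltdown
demanded_assurance = 1e-9       # what the public might demand for a planet-wide risk

# A fair die giving 100 sixes in a row: the kind of odds one might also
# quote against the Sun not rising tomorrow.
p_hundred_sixes = (1 / 6) ** 100
print(f"P(100 sixes) ~ {p_hundred_sixes:.1e}")   # ~1.5e-78, vastly below one in a trillion

# Parfit's two scenarios, weighed on the 'future lives count' view.
# Assumed numbers (mine, not Rees's): 8e9 people alive, 1e12 potential future people.
current_people = 8e9
future_people = 1e12
loss_a = 0.9 * current_people                 # scenario A: 90 percent of humanity dies
loss_b = current_people + future_people       # scenario B: extinction forecloses the future too
print(f"B/A loss ratio: {loss_b / loss_a:.0f}")  # ~140x worse, not merely 10 percent worse
```

On the plain body-count view the ratio is just 100 to 90; once foreclosed future lives are counted, scenario B comes out two orders of magnitude worse under these assumptions, which is the force of Parfit’s point.
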
And we have zero grounds for confidence that we can survive the worst that future technologies could bring in their wake. Some scenarios that have been envisaged may indeed be science fiction, but others may be disquietingly real. Technology brings with it great hopes, but also great fears. We mustn’t forget an important maxim: the unfamiliar is not the same as the improbable.


Originally published at www.ethics.org.au.
