Existential Risk and Human (Bad) Attitude

How we’re risking the future of humanity, and
why we’re morally obliged to think about it more carefully.

IdeasAtTheHouse
11 min read · Aug 20, 2014

By Dean Rickles
Illustration by Daniel Gray

Risk is everywhere. Any action contains risk when the outcome is uncertain, ranging from the negligible through to the existential — that is, the risk of an event that threatens the existence of the human race as a whole; a global catastrophic risk that would literally cause humanity not to exist (or, at least, that would push it to the brink of existence). These risks to humankind branch into two broad kinds: self-induced (such as runaway biological warfare) and non-self-induced (such as a catastrophic asteroid strike). In either case the outcome is devastating, but only one of these is within our control. Still, in order to control it, we need to carefully consider how we think about things such as risk and uncertainty and modify our behaviour accordingly.

We evaluate risks using “probability calculus”: weighing up the odds of various kinds of outcome. In general, high-risk events are treated more seriously than low-risk events. This means, somewhat counter-intuitively, that events that are low risk but are nonetheless capable of completely or near-completely wiping out humanity are deemed low priority. We focus on the risk and not the outcome. Such events are viewed like winning one of the big lotteries: while it is highly unlikely that any specific individual will win, people do win those lotteries.
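To see why the size of the outcome matters as much as the odds, consider a minimal expected-loss sketch in Python (the probabilities and losses are purely illustrative assumptions, not real risk estimates):

```python
# Expected loss = probability x magnitude. Illustrative numbers only.
risks = {
    "mundane (likely, limited damage)":   (0.10, 1_000),            # (annual probability, lives lost)
    "existential (very unlikely, total)": (0.000001, 8_000_000_000),
}

for name, (p, loss) in risks.items():
    print(f"{name}: expected annual loss = {p * loss:,.0f} lives")

# mundane (likely, limited damage):   expected annual loss = 100 lives
# existential (very unlikely, total): expected annual loss = 8,000 lives
```

Despite its tiny probability, the existential event dominates the expected loss; a ranking by probability alone inverts the true priority.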

The events in question are not the kind of things with which we are acquainted, quite obviously — had an event posing an existential risk occurred, we would not be here to talk about the odds! This makes assigning numbers for existential risks difficult, and also makes them hard to seriously envisage — another flaw in our ability to properly consider them. But we need to try, no matter how abstract or low-probability they are. If we are dealing with the risk of extinction of the entire human race, then even unlikely events must be viewed as important.

Humans are very bad when it comes to dealing with probability. As Daniel Kahneman has pointed out in his book Thinking, Fast and Slow, even professional statisticians can make basic mistakes in probabilistic reasoning, getting tripped up by ‘brain bugs’ in experimental situations. Consider the following: Julian is a nerdy-looking, bespectacled American fellow who prefers his own company to that of others, and likes quiet environments. Is he more likely to be a librarian or a farmer? Most people leap to the conclusion that he’s more likely to be a librarian, since these details more closely match a stereotypical librarian. Yet, statistically speaking, he’s far more likely to be a farmer: the number of librarians is dwarfed by the number of farmers, but we overvalue the image in our head and undervalue the base rates.
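A quick sketch with Bayes’ rule makes the base-rate effect concrete (the population counts and stereotype probabilities below are invented purely for illustration):

```python
# Base-rate neglect via Bayes' rule. All numbers are invented.
librarians, farmers = 200_000, 4_000_000   # assumed base rates
p_nerdy_given_librarian = 0.40             # description fits often
p_nerdy_given_farmer = 0.05                # description fits rarely

nerdy_librarians = librarians * p_nerdy_given_librarian   # 80,000
nerdy_farmers = farmers * p_nerdy_given_farmer            # 200,000

p_librarian = nerdy_librarians / (nerdy_librarians + nerdy_farmers)
print(f"P(librarian | nerdy) = {p_librarian:.2f}")  # ~0.29
```

Even with a stereotype that fits librarians eight times better, the sheer number of farmers makes ‘farmer’ the safer bet.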

We’ve seen the effects of the radical underestimation of risks in the finance and business sector, where there is much over-confidence despite relatively little data. This illusion concerning the relevance and strength of small sample sizes (‘the law of small numbers’) combined with illusions concerning the limits of one’s knowledge and one’s ability to control situations to create an environment of excessive risk-taking. Add to that issues of “moral hazard” (to which we’ll return later) and the fact that aggressive personality types have been shown to be more susceptible to such biases, and you have something like a perfect storm for dangerous over-confidence.

These persistent logical tics make getting people to take existential risks seriously a tricky prospect. It’s the same reason it’s so hard to get people to act right now to combat global warming: they know, in theory, that their actions are causing damage, but various biases towards the present produce a temporal myopia that prevents the necessary behaviour modification.

This points to a crucial element of existential risks: they concern future events. Time is an integral component of any discussion of existential risk. The more uncertain or distant the event, the more likely it is to be devalued — and with it, the concerns of any future people that might be affected. There is a kind of psychological event-horizon that separates things we care about from those we don’t. For instance, it won’t surprise anyone with a ‘to do’ list that we work through it in order of, say, simplicity rather than priority. Eliezer Yudkowsky quips that “if the Earth is destroyed, it will probably be by mistake.” I’d say “if the Earth is destroyed, it will probably be through procrastination.”

So we are bad at thinking seriously about events with low probabilities, we are bad at thinking seriously about events that occur further in the future, and we are bad at thinking seriously about events that we struggle to picture — say, humanity-destroying events that have never occurred. Existential risks touch on all three.

I refer to this as a “bad attitude” in the title, but it’s really a case of a bad fit with the modern world. We are still walking around with more or less the same kinds of brains as our very distant ancestors from a couple of hundred thousand years ago, whose daily experience involved a continual struggle for survival. Our brains evolved in an environment in which it made complete sense to prioritise ‘thinking in the here and now,’ and they have not progressed in step with the evolution of scientific understanding and, along with it, our impact on the world. Though you might think of yourself as a rather sophisticated creature, as you sit sipping your latte whilst reading this on a smartphone, your brain is virtually identical to Ug’s. We don’t have time to let our primitive brains catch up — we need new methods to help them along.

This feature of privileging the present (and present payoffs) is known in the trade as hyperbolic delay discounting. We might think of each brain as having a finite balance of time and energy at any one moment, so that it must decide what to spend it on. Many will choose to spend it on actions that seem beneficial now or in the near future, where we can imagine the outcome happening while we are still ‘ourselves’. Even within our own lives, we will often screw over our future selves, taking money and health from them for the benefit of our present self. This might be as simple as having an overly rich dessert now, or drinking too much alcohol, in the certain knowledge that your poor future self will suffer the consequences. But it could be as complex as keeping your office cooler than it really needs to be in the knowledge (though more uncertain) that your future self will suffer the consequences of energy problems and more extreme climatic events. Humans are often scorned for acting on ‘self-interest’. That statement should be modified to indicate that it’s usually only the immediate self that is privileged (at the expense of a future self).
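The standard model of this behaviour values a reward A received after a delay D at V = A / (1 + kD), for some personal discount rate k. A short sketch (with an assumed k and illustrative reward values) shows the signature effect, a preference that flips as both options recede into the future:

```python
# Hyperbolic delay discounting: V = A / (1 + k * D).
# The discount rate k and the reward values are illustrative assumptions.

def discounted_value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

# Choose between a smaller reward at delay D and a larger one at D + 1.
for offset in (0, 10):
    small = discounted_value(50, offset)       # smaller-sooner
    large = discounted_value(60, offset + 1)   # larger-later
    pick = "smaller-sooner" if small > large else "larger-later"
    print(f"delay {offset:2d}: {small:5.2f} vs {large:5.2f} -> {pick}")

# delay  0: 50.00 vs 30.00 -> smaller-sooner
# delay 10:  4.55 vs  5.00 -> larger-later
```

Up close, the present self grabs the smaller immediate payoff; viewed from a distance, the same trade-off looks obviously bad. Exponential discounting, by contrast, never produces such reversals.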

Cartoon by Ben Juers

No wonder we have difficulty in acting now for the future. But if the human attitude is so bad that individuals face their own version of the tragedy of the commons (competing with and taking resources from their own future selves: not just every man for themselves, but every self for themselves!), then what hope does humanity have to avoid self-induced existential risks of the kind related to such cognitive biases?

There are actually already certain methods to battle this tendency: for example, compulsory pension schemes, whereby money is ‘forcibly’ taken and put aside for your future self. How many of us would have the will to do this otherwise? How many times would we eat into this fund for some present reward if it were made too easy to withdraw funds?

The reward aspect (benefit or pleasure now) plays an extremely important role in our attitude towards risk and time, and it holds the key to at least partially counteracting some of the self-induced existential risks stemming from delay discounting. There are two avenues to consider: the ‘soft’ approach and the ‘hard’ approach. The ‘soft’ approach would focus on forging a more active link between people and their future selves through a kind of cognitive behaviour therapy: getting people to be proud of their abstinence, focusing on the positive consequences for their future selves, and recalling those consequences the next time abstinence or self-control is demanded. Alternatively, for those who can’t eliminate their temporal myopia, a programme of immediate reward for positive actions (feedback) could be implemented, in much the same way that redeemable vouchers are sometimes given to abstaining drug addicts as an incentive to alter their negative behaviour.

These are perhaps overly idealistic (and certainly too simplistic as they stand), in which case we might try a ‘hard’ approach. It’s not unlikely that the same mechanism lying behind addictive behaviour also lies behind the irrational behaviours associated with procrastination — that is, the immediate (counterproductive) response (not doing the work one is supposed to be doing) triggers the same kind of reward that taking a drug does for an addict. If we can manipulate this neurocircuitry then we might be able to isolate the mechanism responsible for (some) human self-destructiveness and thereby find a way to reduce a cluster of existential risks. It’s already possible to locate specific genetic markers for susceptibility to instant-gratification thinking, just as we can locate genetic markers associated with addictive behaviours. Experiments have already been performed on mice to knock out specific receptors believed to be implicated in addiction, leading to a reduction in addictive, self-destructive behaviours. If we are willing to drug children on a large scale with attention-focusing drugs, then I see no reason not to do the same with drugs that eliminate these more damaging tendencies.

Such measures may seem rather extreme, but they’re reasonable to consider in cases where we are aware of a self-induced existential risk caused by the aggregated effect of our individual behaviour (guided by cognitive biases of the kind described). In such cases, our not modifying that behaviour to minimise the risk amounts to a kind of murder of future generations should the risk be realised. It is on a par with poisoning a river upstream, without consideration of those downstream.

On the other side of self-induced existential risks are those caused by our being (overly) enthusiastic in trying to bring about a better future. Think artificial intelligence (AI), molecular nanotechnology, transhumanist biotechnological modifications, and so on. These newly emerging technologies are intended to increase humanity’s thriving in a proactive way, and so involve the very opposite of procrastination. But taken to extremes, this attitude can be just as damaging (and more rapid, catching us off guard). Consider the Castle Bravo test in 1954, the detonation of what was then the largest thermonuclear device ever tested. The yield was predicted to be 5 megatons, but turned out to be 15 megatons: the designers had omitted an entire kind of reaction from their calculations, which led to the radically increased blast energy. Since the earliest days of the atomic bomb project there had been worries about a similar ‘runaway’ process igniting the entire atmosphere.

One might object that the bomb was only ever a destructive invention (or as part of the MAD strategy: ‘mutually assured destruction’), whereas these new technologies are creative ventures that might even be able to protect against other existential risks. This is true, but it doesn’t eliminate the potential for large-scale runaway effects of equal or greater destructive force. In the case of AI one can easily imagine (as the statistician I. J. Good did in 1965) an “intelligence explosion” (called ‘the singularity’ by Kurzweil) in which an artificial intelligence makes ever faster and greater improvements to itself, leaving humans behind as mere pond life. Even before any singularity, the behaviour of machines of greater (or even slightly lower) intelligence than humans will not be predictable, and such machines will therefore be capable of causing extreme damage given the networked nature of our world.

Hence, ‘concrastadors,’ eager to advance knowledge and humanity, can also suffer from a failure to look at the far-reaching consequences (and uncertainties) before they act. When one is dealing with novel technologies with new uncertainties (unknown unknowns), miscalculations are more likely — here we can agree with Yudkowsky’s remarks about destroying the world “by mistake.”

Dealing with existential risks is a tightrope walk. Both action and inaction come with existential risks — risks that lie in an almost perfect blind spot of our brains. We need to motivate the masses to act to change the future, while also balancing our exuberance for change with caution in how we assess the likelihood of self-destruction. We are dealing with a very long timeline here. There are many, many balls in the lottery machine, but there are also many draws. An existential risk will eventually come along; what’s important for us here and now is to ensure that the event that does finally wipe out humanity is not self-induced. We have a moral obligation to minimise those risks if we can, much as we have a moral obligation to someone in need of our help in the here and now: future people (selves) will be people (selves) even though they do not yet exist.
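The ‘many draws’ point can be made precise: even a tiny per-draw probability compounds relentlessly over enough draws. A final sketch, assuming a purely illustrative per-century risk:

```python
# Probability of at least one catastrophe over n independent 'draws'.
# The per-century probability is an illustrative assumption only.
p_per_century = 0.001   # assumed 0.1% chance per century

for centuries in (1, 10, 100, 1000):
    p_any = 1 - (1 - p_per_century) ** centuries
    print(f"{centuries:4d} centuries: P(at least one event) = {p_any:.3f}")

#    1 centuries: P(at least one event) = 0.001
#   10 centuries: P(at least one event) = 0.010
#  100 centuries: P(at least one event) = 0.095
# 1000 centuries: P(at least one event) = 0.632
```

Over a long enough timeline, ‘unlikely’ becomes ‘expected’; the only question within our control is which kind of event it turns out to be.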

Dean Rickles is an Associate Professor of History and Philosophy of Science, ARC Future Fellow and Co-Director of the Centre for Time at the University of Sydney. Rickles has published multiple books, including A Brief History of String Theory: From Dual Models to M-Theory (Springer, 2014).
