“On Armageddon” Part 1: A Threat of Utmost Concern

Thoughts on nuclear war, climate change, and AI.

If you somehow feel as if there isn’t enough sheer terror in your life, the Bulletin of the Atomic Scientists will keep your heart perpetually pounding.

Since 1947, the Bulletin has maintained the Doomsday Clock, a symbolic timepiece that measures how close humanity is to the end of life as we know it. As the minute hand approaches midnight, the world comes closer to catastrophe.

Scary stuff.

Well, the Bulletin made an update to the Doomsday Clock a few months ago, and it’s pretty terrifying. The expression of the guy on the left should explain it all:

No, your eyes aren’t tricking you: that poster really does say that we’re just two and a half minutes from midnight — inches away from worldwide doom.

If two and a half seems like an arbitrary number (it is), let’s add some context.

The Doomsday Clock was created in 1947 at the approximate beginning of the Cold War, with its minute hand initially placed at seven minutes from midnight.

In the years following its inception, tensions rose, all but evaporating the clock’s distance from twelve. By 1953, after the U.S. and Soviet Union had each tested their respective thermonuclear devices, the Doomsday Clock had reached 11:58, just two minutes from midnight.

Fortunately, in the early sixties, the Doomsday Clock actually backtracked a bit as tensions appeared to soften — that is, until 1962, when the Cuban Missile Crisis hit. The crisis was one of the most intense moments of the Cold War. For 13 days, a nuclear exchange seemed genuinely possible — so much so that the Joint Chiefs of Staff unanimously agreed that an air strike and invasion of Cuba was the only way forward, and urged President Kennedy to act.

Surely, during the Cuban Missile Crisis — one of the scariest 13-day stretches of American history — we were closer to midnight than we are now, right?

Nope. At the time of the Cuban Missile Crisis, we were a full seven minutes away from midnight. Today, we’re at just two and a half.

After fluctuating in the late '60s and early '70s — a hit from the Vietnam War and a boost from a series of non-proliferation agreements — the Doomsday Clock began to tick towards midnight once again. The Kashmir conflict intensified, the U.S. armed the mujahideen to fight the Soviets in Afghanistan, and in 1983, NATO was so prepared for nuclear war that it conducted a five-day exercise in Western Europe, known as Able Archer 83, that simulated the run-up to a nuclear attack to an eerie degree of accuracy, complete with dummy warheads. The exercise led the U.S.S.R. to believe that NATO might be using it as cover for an actual first strike, prompting the Soviet state to ready its own nuclear weapons for launch in response.

Needless to say, by 1984 — fittingly, the year of Orwell’s classic dystopian novel — the Doomsday Clock had reached a pretty cozy proximity to the number twelve. Surely, we’re better off now than we were then, right?

Again, no. In 1984, we were three minutes away from midnight. Today, that figure is just two and a half.

That’s right. According to the Doomsday Clock, we’re in greater danger of Armageddon today than we were during both the Cuban Missile Crisis and the escalating tensions of the 1980s. That’s not a very reassuring place to be.

Our current situation, represented visually:

We are in a uniquely dangerous position at this very moment. It may be hard to conceptualize, but many human lives — potentially yours, your family’s, and your friends’ — may be in severe danger in the coming years.

Here’s the thing: during the Cuban Missile Crisis and the 1980s, people were deeply concerned about the threat of Doomsday. Today, it appears to be the farthest thing from our minds.

The New York Times graphic below displays which political issues are the most important to the average American (circa February 2017). The big hits appear to be the economy, immigration, terrorism, and vague concepts like “dissatisfaction with the government” and “unifying the country.”

What don’t you see? Climate change, nuclear war, AI, and other existential, Armageddon-inducing threats.

In this survey, issues pertaining to war (not including terrorism) accounted for a mere 4%, the environment took home only 2%, and artificial intelligence didn’t even register on the chart.

But according to the Bulletin of the Atomic Scientists, these are the exact issues that have driven us to 11:57:30 p.m. — the exact issues we should really be focusing on.

A recent press release from the Bulletin left no room for interpretation: the reason that we’re just two and a half minutes from midnight is our failure “to come effectively to grips with humanity’s most pressing existential threats, nuclear weapons and climate change.”

Academic and philosopher Noam Chomsky agrees: “The most important issues to address are the truly existential threats we face: climate change and nuclear war.”

Given our proximity to midnight, we desperately need to talk about these existential threats significantly more than we currently do — and focus more of our political efforts on mitigating them.

So let’s get this vital conversation started.

The theory I’m advancing in this article is that existential threats — the likes of climate change, nuclear war, and superintelligent AI — are vastly underrepresented in the concerns of the American people, and should, in fact, be the most important issues on our minds.

If we don’t agree that solving these existential issues should be our top priority as a species, we will not have the motivation necessary to solve them fast enough. So, before I analyze the specifics of each of these threats, I’m first going to explain why addressing them deserves that priority. I will do so using four arguments: critical timing, expected value, the precautionary principle, and population ethics.

Critical Timing.

I began writing this article as a direct response to several worrisome geopolitical events in the past few months: namely, our boiling tensions with Russia and North Korea, President Trump pulling out of the Paris Agreement, the development of an artificial synapse, and the quadruple-hurricane catastrophe in the Gulf and the Atlantic, which was made more likely by climate change. These dangers are growing in magnitude and risk, and in response, we must increase our efforts to mitigate them.

As we will soon discover, we have the opportunity to prevent (or at least quell) each of these problems, but only if we start now. We are living through a golden, life-or-death window of opportunity that we may not have in the future, and the effects of the decisions we make today will be felt for generations.

We are currently at a climate change tipping point, in which warming temperatures are set to unleash a positive feedback loop of further warming. We are set to reach peak oil in the near future, which could breed deep and prolonged economic and geopolitical chaos if we are not sufficiently prepared. Tensions are boiling to dangerous, Cold War levels between the U.S., Russia, and North Korea, and the situation could easily get out of hand in the coming years. Meanwhile, the race between responsible and irresponsible AI developers is at a critical point, and if the latter edge out the former, billions of lives could be at stake.

What unifies all of these issues? They all need to be addressed immediately, before it’s too late. The decisions we make today will determine how each of them unfolds — and the window for making those decisions may not stay open much longer.

Expected Value.

It is perfectly understandable that the American people are more worried about issues like terrorism and missing planes than these existential threats. The former are current and hard-hitting, while the latter are futuristic and hard to envision.

Evolutionarily, we are geared to focus on the present, with a steep discount rate favoring existing concerns over future ones. That may have worked well in the hunter-gatherer era, when threats to our survival were immediate, but we now live in a society armed with nuclear weapons, supercomputers, and the power to manipulate the chemical makeup of our atmosphere — actions with long, drawn-out consequences. So, if we are to be rational, we must let long-term thinking guide our decisions.

While modern, sensationalized threats may appear to wreak the most havoc, viewing these issues through the lens of expected value — specifically, the expected number of deaths over the next 20 years — makes it apparent that, statistically, existential threats pose a significantly higher risk to human life.

Let’s say that, combined, existential threats — climate change, AI, nuclear war, a worldwide pandemic, a massive asteroid crashing into Earth, you name it — pose a 5% cumulative risk of human extinction in the next 20 years (which roughly lines up with expert projections).

The specific number isn’t too important — the point is, multiplying that 5% risk by a global population of roughly nine billion over the period yields an expected value of 450 million deaths over the next two decades tied to human extinction.

(Note: Factoring in non-extinction-related deaths and animal deaths, this number would be even higher.)

Now, let’s compare this expected value of deaths to that of terrorism.

To be generous, let’s assume that somehow, despite the recent downfall of ISIS, terrorism rates remain at their current level for the next 20 years. Even with this unlikely assumption, the expected value of terrorism-induced deaths over the next two decades would be well under one million. That’s significantly less than 450 million.
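The arithmetic behind these two expected values can be sketched in a few lines. Note that every number here is a rough assumption from the discussion above (the 5% risk, the nine-billion population figure, and an illustrative annual terrorism death toll consistent with “well under one million” over two decades), not a precise estimate:

```python
# Expected-value comparison: extinction-level threats vs. terrorism.
# All inputs are the article's rough assumptions, not precise figures.

extinction_risk_20yr = 0.05            # assumed cumulative probability of extinction
projected_population = 9_000_000_000   # assumed global population over the period

ev_extinction = extinction_risk_20yr * projected_population
print(f"Expected extinction-related deaths: {ev_extinction:,.0f}")  # 450,000,000

annual_terrorism_deaths = 30_000       # illustrative recent-level assumption
ev_terrorism = annual_terrorism_deaths * 20
print(f"Expected terrorism-related deaths: {ev_terrorism:,.0f}")    # 600,000
```

Even with these crude inputs, the extinction-related figure dwarfs the terrorism figure by roughly three orders of magnitude, which is the whole point of the comparison.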

The takeaway is clear: from a numerical standpoint, long-term, existential threats should be of much bigger concern than sensationalized modern threats like terrorism.

The Precautionary Principle.

The precautionary principle holds that phenomena that have a relatively low probability of occurring, but would cause an extremely high level of damage if they did, should be avoided at all costs — even without certainty that they will ever happen.

This perfectly describes why existential threats should be so heavily considered. Is there a greater than 50% chance that an all-out nuclear war will commence in the next two decades? Probably not. But if such an event did occur, it would likely cause more suffering than any event in human history, and that is something that we should avoid at absolutely all costs.

Is there a high chance that climate change will push mankind into all-out extinction and kill off everything that isn’t a jellyfish or a cockroach? I’d doubt it. But if such a catastrophe were to occur, it would be so overwhelmingly devastating that, despite its relatively low chance of happening, we should do everything in our power to reduce its likelihood.

Is it overwhelmingly likely that artificial intelligence will become superintelligent and pose a major threat to human life? I’d guess “no.” But the possibility of it doing so — however improbable — should be a call to action for all of us.

This is a really important realization. Based on the precautionary principle, saying that “[x] existential threat is probably not too likely to happen, therefore we should focus on other things” is a major mental miscalculation. Even if these existential threats are relatively unlikely to occur, the magnitude of their risk is so colossal that we must address them with as much attention as possible.

Population Ethics.

There seems to be something that makes this:

A lot less sad than this:

In both cases, exactly two beings died. But the second scenario feels worse, because all of mankind went extinct forever.

At first glance, it appears that from a utilitarian perspective, both of these hypothetical scenarios would be equally bad. If the same number of people died in each case — i.e., if the amount of suffering caused was equivalent in both situations — why does the second one feel so much worse?

To answer this question, we must enter the philosophical field of population ethics. In an episode of the Waking Up Podcast, philosopher Sam Harris discussed population ethics with William MacAskill and presented what is, from what I’ve read, one of the best answers out there to why extinction is a uniquely bad thing:

“The other side of the ledger is all of the good that never gets done, all of the happiness that never gets lived as a result of that cancellation.”

What Harris is saying is that extinction is not just the death of already-living beings — it’s the death of future lives that will never be lived.

We should be working immensely hard, therefore, to prevent extinction — to ensure our future as a society, a species, and a planet.

Wait — Then Can’t We Just Colonize Mars?

Many would argue that, since extinction is such a big threat, we should set our sights on colonizing Mars to provide a “backup plan” for the Blue Marble. I find this viewpoint to be particularly problematic.

Of course, there are several significant benefits that would result from Elon Musk’s plan to colonize Mars. Like the moon landing, such a mission would add hope and inspiration to people’s daily lives, and developing the technology necessary for such a feat could greatly improve our civilizational capabilities.

But the sociocultural costs may outweigh these benefits. There is a growing contingent of people who see Mars as our best shot at preventing extinction — as a silver bullet for the problems faced by humanity, as opposed to solving those problems here on Earth.

This use of Mars as a deflection from real issues here on earth is surging in popularity. To cite one example, the image below is something that recently showed up in my inbox from the New York Times:

With sentiments like this, it’s no surprise that so many people see colonizing Mars as humanity’s ultimate pathway to a better future, instead of focusing their efforts on creating a livable Planet Earth.

This is an inherently dangerous point of view. While Musk’s plan, if successful, may indeed reduce our chance of all-out extinction, even under incredibly ideal circumstances it would place just “a million people [on Mars] in 40 to 100 years.” That’s 0.01% of Earth’s population, and most likely a quite wealthy 0.01%.

It should go without saying that our goal isn’t to save the richest humans on earth — it’s to save as many beings as possible. And to do that, it’s imperative that we save the beautiful home planet we know and love.

We don’t want this to happen:

To truly solve the existential problems that our world faces, we’re going to need to look a lot deeper than Mars. We’re going to need to examine these existential threats — climate change, nuclear war, and AI — at their roots, and do our best to find their solutions.

[This article is the first part of a four part series. To read Part Two, please click here.]