The fundamental problem with Longtermism

Venkateshan K
9 min read · Sep 6, 2021


The expanding universe ultimately ends in heat death (all life would have ceased well before then). Source: NASA/Wikimedia

Longtermism is the philosophical position that the welfare and value of future generations of humans far outstrip those of the existing population, and that we therefore ought to devote significant time and resources to positively influencing the long-term future. More specifically, the argument draws attention to the fact that the number of humans who will have lived, say, 500,000 years from now vastly exceeds the present population; assuming their lives are a net positive, a potential catastrophic event could extinguish that entire future. If we accept the premise that we should focus our charitable efforts on the areas with the greatest (estimated) benefits, it follows that minimizing extinction risk should get high priority. Furthermore, even a small reduction in that risk would have immense impact simply because of the sheer mass of humanity the future could deliver.

The ascendancy of longtermist thinking in the effective altruism community has been pronounced over the last five years or so. While several thought-provoking arguments have been made in support of this perspective, I believe there is something fundamentally problematic with it.

This is my attempt to characterize the problem and explain why it severely undermines the central claim of longtermism. It is quite possible that there is a strong and convincing response to it, but I am not aware of any such discussion, hence this post.

Standard counter-arguments

To be clear, my objection to longtermism has little in common with the standard counter-arguments, for which its proponents have provided clear and adequate responses.

  1. I don’t have a problem with longtermism being counterintuitive; many abstract ideas and principles are counterintuitive, such as the cardinality of the real numbers and the mathematical properties of infinity.
  2. In much the same way, I don’t accept the objection that the value of people who aren’t yet born does not count, or that it should be heavily discounted in comparison to our own.
  3. I’m also not persuaded by the claim that we don’t have any obligation to generations that may come into existence hundreds of thousands of years later.
  4. Nor do I in general have a problem with using expected value as a measure for assessing an intervention or activity. At least not in the sense that I would dismiss it out of hand because it carries significant risk. Probabilistic outcomes are a fact of life, and while we are all intrinsically risk-averse, the terrain of possibilities we explore should not be constrained by our biases.

Longtermists are the first to acknowledge that the chance of preventing a catastrophic existential risk, such as an all-out nuclear war well into the future, may be rather small: maybe 1% or even less. Or that the probability that we can increase the chance of mankind’s survival 10 million years from now may be around 0.001%, or two orders of magnitude smaller still. The exact number is not very important to the central (and indisputable) claim of the longtermist: even a small positive change would be disproportionately valuable, given that future generations encompass such an immense number of people.
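
To make this arithmetic concrete, here is a minimal sketch of the kind of expected-value calculation the longtermist argument rests on. The population figure and risk-reduction probability below are purely illustrative assumptions, not estimates from any longtermist source.

```python
# Illustrative expected-value arithmetic behind the longtermist claim.
# Both numbers are made-up assumptions chosen only for the example.

future_lives = 1e14        # assumed total number of people in humanity's long-term future
risk_reduction = 1e-5      # assumed reduction in extinction probability (0.001%)

expected_lives_saved = future_lives * risk_reduction
print(f"Expected lives saved: {expected_lives_saved:,.0f}")  # 1,000,000,000
```

Even a minuscule probability, multiplied by an astronomically large future population, yields an enormous expected benefit; that multiplication is the entire engine of the argument.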

Beyond expected value: the probability distribution

While expected value in and of itself is not a problem, it does not convey the full picture. For that, we need to consider the probability distribution, i.e., the likelihood of the various outcomes of whatever interventions we undertake in the interest of longtermism. Naturally, we prefer positive outcomes, such as better average welfare or happiness for the future population, over, say, life under a totalitarian regime. It is when we consider the probabilities of these distinct outcomes that longtermism runs into some crucial difficulties.

To understand the importance of probability distribution, let’s start with a simple example. Imagine for a moment that you are playing a game where you will earn $200 with probability 10% and there is no loss or gain for the remaining 90%. The expected value from such a game is $200*0.1 = $20.

Would you play such a game? Perhaps yes, because even though the chance of winning any money is 1 in 10, the payoff of $200 seems worth the effort or time to you.
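
As a quick sanity check, here is a minimal simulation of this first game (a sketch using only the numbers from the example above):

```python
import random
from collections import Counter

# Game 1: win $200 with probability 10%, otherwise nothing happens.
outcomes = [200 if random.random() < 0.10 else 0 for _ in range(100_000)]

print("average payout:", sum(outcomes) / len(outcomes))  # ~20, matching the expected value
print("outcome counts:", Counter(outcomes))              # roughly 90% of trials pay $0
```

Note that even here the expected value of $20 is an amount you never actually receive on any single play; nine times out of ten you walk away with nothing.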

Now imagine a slight variant of the above where, once again, you earn the same $200 with probability 10%, but in addition you gain $2000 with 1% probability and forfeit $2000 with 1% probability. The remaining 88% of the time there is no gain or loss. The expected value is still the same:

$200*0.1 + $2000*0.01 - $2000*0.01 = $20

How willing are you to play this game now? Maybe you are still inclined to, but very likely you have a very different perspective on it. In the first case, you had almost nothing to lose (aside from time and effort), but now there is a considerable financial loss at stake (and a large potential gain too). So even if you choose to play this game, you are certainly weighing the consequences of those possibilities and not just the expected value.

And in this case it is perfectly rational to do so: despite the expected value being the same, you may refuse to play because the downside is too large.
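
To see why the two games feel so different despite having identical expected values, compare their full distributions. A minimal sketch, using only the payoffs and probabilities from the two examples above:

```python
import random
import statistics

# Game 1: $200 with probability 10%, else $0.
def game_1():
    return 200 if random.random() < 0.10 else 0

# Game 2: $200 with probability 10%, +$2000 with 1%, -$2000 with 1%, else $0.
def game_2():
    r = random.random()
    if r < 0.10:
        return 200
    if r < 0.11:
        return 2000
    if r < 0.12:
        return -2000
    return 0

for name, game in [("Game 1", game_1), ("Game 2", game_2)]:
    outcomes = [game() for _ in range(200_000)]
    print(name, "mean ~", round(statistics.mean(outcomes), 1),
                "std dev ~", round(statistics.stdev(outcomes), 1))

# Both means hover around $20, but Game 2's standard deviation (~$290 vs ~$60)
# reflects the very real chance of a $2,000 loss.
```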

Now imagine yet another variant, quite an unusual one, where you are not even aware of the different outcomes or their probabilities. Here the rules of the game depend on several arbitrary factors, such as the mood of the host, the weather at the venue, the food or drink being served, and the age of the participants, so the risk and reward can swing wildly due to unforeseen events well beyond your control. Yet an Oracle tells you that the expected value of playing the game is $20. Assuming the Oracle is speaking the truth, would you play? Perhaps not.

Distribution of outcomes in the long term

Something very similar to the final variant of the game is happening with longtermism. The reality is that whatever happens several thousand years from now will depend on a plethora of events between now and then. There is a great deal of uncertainty not only about those events but also about their consequences. We might choose to influence some of them, but what is the chance that our actions have the desired effect, given how much variability there is in how the coming epoch may unfold? And even if our present actions did have the desired effect at some intermediate point in time, why do we believe that a million years from now the march of humanity will fall in line with what we desire today?

Chaos theory deals with similar ideas, albeit formulated more mathematically. At its heart is the simple observation that, starting from two very similar initial conditions, the evolution of a system (a set of particles, a mathematical function, etc.) may diverge exponentially over time. For example, if you strike the billiard ball with a little extra force at the start of a game, the table may look radically different half an hour later compared to what it otherwise would have been. To be clear, many popular examples of chaos theory (including the one above) are not technically rigorous, but the idea remains powerful and correct: as time progresses, the consequences of a minor initial change will rapidly diverge when there are many unpredictable forces at play during the intervening period.
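
As an illustration of this sensitivity (a standard textbook example, not one drawn from the longtermism literature), here is a minimal sketch using the logistic map, where two trajectories that start almost identically soon bear no resemblance to each other:

```python
# Sensitive dependence on initial conditions via the logistic map
# x_{n+1} = r * x_n * (1 - x_n), in its chaotic regime (r = 4).

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.300000)   # one initial condition
b = trajectory(0.300001)   # a nearly identical initial condition

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
# The gap starts at one part in a million and grows until the two trajectories
# are completely uncorrelated.
```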

Indeed, people in EA are perhaps familiar with this, which is why they come up with apparent paradoxes such as the argument that one should never leave the house.

One may wonder: even granting that this uncertainty is real, what is the longtermist equivalent of losing $2000 in the game? That is, how could it be that we not only fail to save or improve people’s lives but actually make things worse? There are two ways:

  1. Following the reasoning so far, we simply cannot be sure that a seemingly positive action today will lead to a positive outcome many, many generations later.
  2. Opportunity cost: the resources spent on the long-term future could instead have been used for near-term goals, and there is every chance that the overall long-term impact of addressing those immediate concerns (such as improving access to and the quality of education) could be far greater, even insofar as stopping a catastrophic nuclear war is concerned. Think that’s far-fetched? It is, but no more so than the assumptions embedded in longtermism.

Mind-boggling level of future unpredictability

I want to focus more on the first point above just to drive home that this is not merely some narrow, technical, academic objection but a very real and serious problem.

Imagine that an all-out nuclear war were to happen sometime in the next few years. We can all agree that it would be horrific, of course. It would not only result in the untimely deaths of millions of people who might otherwise have lived happy and meaningful lives of high welfare; it would also decimate future generations.

Nonetheless, if you were to turn your mind to a million years from now and wonder what the effect of a near-term nuclear war might be, can you be certain that it is net negative? I am not so sure.

Contrary to what most people may think, an all-out nuclear war would not kill everyone. Indeed, there is an 80,000 Hours podcast episode exactly on how to prepare for such a scenario. The survivors of such a conflagration (and there will be plenty left), despite unprecedented hardships, will very likely sustain the human race. Not only that, there is a possibility that the generations that emerge from such a calamity will be enlightened ones, assuming they have learned from the foolhardy politics and tribalism that likely led to the event in the first place. In fact, let us take this one step further and assume they are so much more enlightened that they no longer farm animals for food and are aghast that such a morally abhorrent practice was once regarded as acceptable. In doing so, such a future generation may be sparing the lives of over 80 billion land animals (not to mention fish and other marine life) every year compared to us. Extrapolating from such a scenario, is it not reasonable to think that a million years later this lineage of humans may have transcended our present failures precisely because of the catastrophe? Would not such a future be preferable? And besides, in the broad scope of cosmic timescales, what is one nuclear event anyway but a minor blip?

Does this suggest that we should be indifferent, if not complacent, about the possibility of a nuclear war today? Of course not! The immediate horrors of such an event can be predicted with very high certainty. Its consequences, say, 5 years on, while somewhat less predictable in their details, are still overwhelmingly awful. 50 years on, even if things have reached some stability, the nuclear war would still be seen as the worst tragedy to befall mankind. 500 years on, perhaps still so. But how about 5,000 years on? 50,000 years on? Do you see that the actual long-term consequences of the event will depend on other events, actions, decisions and scenarios that humanity will bear witness to during the intervening period? It is entirely possible that future generations will have learned nothing at all and will in general be more benighted than us, condemning each other to horrible torture and abuse. We just can’t be sure. Therefore, both from a practical and a philosophical perspective, there is a strong case for striving to avert an event that we know for certain would be disastrous from the perspective of the next 100 years. Not doing so, in the hope that some radically different world might emerge in the long-term future, does not seem very defensible and is… far-fetched.

What would a persuasive case for longtermism look like?

I can certainly get behind longtermism if it can be argued that specific present-day actions have a positive impact with some non-negligible probability, and if bounds can be established on the probability of unfavorable outcomes.

As a crude example: if a longtermist were to state that the expected number of lives saved by a present-day intervention (or series of interventions) over the next T years (say 100,000) is N (say 1,000,000), that the probability of saving at least M lives (say 10,000) is 25%, and that the probability of causing more deaths (or other harm) is less than 1%, all things considered (i.e., accounting for counterfactuals and opportunity cost), then I’ll put all my objections aside.
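
For concreteness, here is a minimal sketch of what such a claim might look like when written down explicitly. Every field name and number is a hypothetical placeholder, intended only to show the shape of the evidence I am asking for, not an estimate from any actual longtermist analysis.

```python
# A hypothetical, explicitly quantified longtermist claim.
# All values are illustrative placeholders.
claim = {
    "horizon_years": 100_000,            # T: how far into the future the forecast applies
    "expected_lives_saved": 1_000_000,   # N: the expected value of the intervention
    "p_at_least_10k_lives_saved": 0.25,  # P(lives saved >= M), with M = 10,000
    "p_net_harm": 0.01,                  # probability the intervention backfires,
                                         # after counterfactuals and opportunity cost
}
```

A claim in this form would allow the whole distribution, not just its mean, to be scrutinized.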

As far as I know, no longtermist argument provides evidence for a distribution of outcomes along the lines described above.

In the absence of that, the uncertainty of an intervention aimed at the distant future (measured statistically, say, by its standard deviation) is so much greater than its expected value that it is hard to imagine any context (personal, economic, political) in which it would seem like a good idea to make decisions based on it.
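
To give a rough sense of what that means, here is a minimal sketch assuming, purely for illustration, a normally distributed outcome with the $20 mean from the earlier games but a standard deviation a hundred times larger:

```python
import math

# Purely illustrative numbers: mean benefit of 20 units, uncertainty (std dev) of 2000 units.
mean, std = 20.0, 2000.0

# Probability that the realized outcome is actually positive under a normal model:
# P(X > 0) = Phi(mean / std), where Phi is the standard normal CDF.
p_positive = 0.5 * (1 + math.erf((mean / std) / math.sqrt(2)))
print(f"P(outcome > 0) = {p_positive:.3f}")  # ~0.504, barely better than a coin flip
```

When the spread dwarfs the mean, the expected value tells us almost nothing about whether the intervention ultimately helps or harms.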
