Longtermism is repackaged utilitarianism and just as bad

What Fyodor Dostoyevsky has to teach us about allowing present suffering for future harmony

Tim Andersen, Ph.D.
The Infinite Universe
7 min read · Aug 14, 2021


Longtermism, if you aren’t familiar with the term, is the philosophy, promoted by philosopher Nick Bostrom of Oxford University, that our primary ethical obligation as a species is to ensure a post-human future for countless sentient beings. Thus, all moral questions reduce to questions of existential risk: what will ensure that this post-human future comes about?

If you aren’t familiar with Bostrom’s work, he is also responsible for the Bayesian (probabilistic) argument that we are all living in a computer simulation. I don’t think much of this argument either, but at least it didn’t have powerful moral implications. Longtermism does.

Longtermism is closely tied to the ethical movement known as effective altruism, which Bostrom has helped shape. Sadly, effective altruism is a special case of utilitarianism: the idea that right and wrong are determined by whatever does the greatest good for the greatest number.

According to Bostrom’s predictions, humanity, if it survives the present epoch, will go on to a post-human future where conscious beings live lives of plenty and pleasure within elaborate computer simulations. If we successfully colonize our local cluster of the…
