Against Strong Longtermism: A Response to Greaves and MacAskill

Ben Chugg
Published in Curious · Dec 18, 2020
[Header image source: 80,000 Hours]

The new moral philosophy of longtermism has staggering implications if widely adopted. In The Case for Strong Longtermism, Hilary Greaves and Will MacAskill write

The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers. (pg. 1; emphasis mine)

The idea energizing this philosophy is that most of our “moral value” lies thousands, millions, or even billions of years from now, because we can expect many more humans and animals to exist in the future than right now. In the words of Greaves and MacAskill: “If humanity’s saga were a novel we would still be on the very first page.” (pg. 1)

Longtermism is causing many to question why we should be at all concerned with the near-term impact of our actions. Indeed, if you are convinced by this calculus, then all current injustice, death, and suffering are little more than rounding errors in our moral calculations. Why care about parasitic worms in Africa if we can secure utopia for future generations?

Fortunately, the effective altruism (EA) community — where this philosophy is actively being promoted — has yet to take irreversible action based on these ideas. While millions of dollars have been donated to the cause of improving the long-term future (at the time of writing, the Long-Term Future Fund has received just under $4.5 million USD in total), many millions more are still funneled through GiveWell, The Life You Can Save, and Animal Charity Evaluators. Should Greaves and MacAskill prove sufficiently persuasive, however, such “near-term” efforts could vanish: “If society came to adopt these views, much of what we would prioritise in the world today would change.” (pg. 3)

This piece is a criticism of longtermism as expounded in The Case for Strong Longtermism. To date, what public attention this philosophy has received has been mostly positive. Toby Ord — a proponent of longtermism — has appeared on both Sam Harris’s Making Sense podcast and Vox’s The Ezra Klein Show. The Open Philanthropy Project, a multi-billion-dollar charitable fund, has also dedicated a focus area to this cause in the form of “risks from advanced artificial intelligence.” What push-back the idea has received has come mostly in blog form. Most commonly, the criticism revolves around the intractability objection, which, while agreeing that the long-term future should dominate our moral concerns, argues that we cannot have any reliable effect on it. While correct, this objection lets longtermism off far too lightly. It does not criticize longtermism as a moral ideal, but rather treats it as something good but unrealizable.

One recent essay does attempt to refute the two premises on which strong longtermism is founded. It argues (i) that the mathematics underpinning their expected value calculations about the future is fundamentally flawed — indeed, meaningless — and (ii) that we should be biased towards the present because it is the only thing we know how to reliably affect. My criticisms will build on these.

I will focus on two aspects of strong longtermism, henceforth simply longtermism. First, the underlying arguments inoculate themselves against criticism by relying on arbitrary assumptions about the number of future generations. Second, ignoring short-term effects destroys the means by which we make progress — moral, scientific, artistic, and otherwise. In other words, longtermism is a dangerous moral ideal because it robs us of the ability to correct our mistakes.

Since the critique may come across as somewhat harsh, it’s worth spending a moment to frame it.

Motivation

My assailment of longtermism comes from a place of deep sympathy with, and general support of, the ideals of effective altruism. The community has both generated and advocated many great ideas, including evaluating philanthropic efforts based on impact rather than emotional valence, acknowledging that “doing good” is a difficult resource-allocation problem, and advocating an ethical system grounded in impartiality across all sentient beings capable of suffering. Calling attention to farmed animal welfare, rigorously evaluating charities, and encouraging the privileged among us to donate our wealth have all been hugely important initiatives. Throughout its existence, EA has rightly rejected two forms of authority which have traditionally dominated the philanthropic space: emotional and religious authority.

It has, however, succumbed to a third — mathematical authority. Firmly grounded in Bayesian epistemology, the community is losing its ability to step away from the numbers when appropriate, and has forgotten that its favourite tools — expected value calculations, Bayes’ theorem, and mathematical models — are precisely that: tools. They are not in and of themselves a window onto truth, and they are not always applicable. Rather than respect the limits of their scope, however, EA seems to be adopting the dogma captured by the charming epithet “shut up and multiply.”

EA is now at risk of adopting a bad idea, one that, if fully subscribed to, I fear will lead to severe and irreversible damage — not only to the movement, but to the many people whose suffering would be willfully ignored. As will be elaborated on later, rejecting longtermism will not cause a substantial shift in current priorities; many of the prevailing causes will remain unaffected. If, however, longtermism were widely adopted and its logic taken seriously, many of EA’s current priorities would be replaced with vague and arbitrary interventions to improve the course of the long-term future.

Let’s begin by examining the kinds of reasoning used to defend the premises of longtermism.

Irrefutable Reasoning

“For the purposes of this article”, write Greaves and MacAskill,

we will generally make the quantitative assumption that there are, in expectation, at least 1 quadrillion (10¹⁵) people to come — 100,000 times as many people in the future as are alive today. This we [sic] be true if, for example, we assign at least a 1% chance to civilization continuing until the Earth is no longer habitable, using an estimate of 1 billion years’ time for that event and assuming the same per-century population as today, of approximately 10 billion people per century. (pg. 5)

This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground. The assumptions, however, are tremendously easy to change on the fly. Consequently, they’re dangerously impermeable to reason. Just as the astrologer promises us that “struggle is in our future” and can therefore never be refuted, so too can the longtermist simply claim that a staggering number of people will exist in the future, thus rendering any counterargument moot.
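To see just how much work those assumptions do, here is a minimal sketch of the arithmetic in the quoted passage; the alternative parameter values at the end are purely illustrative, not anything Greaves and MacAskill endorse.

```python
# Expected number of future people, following the structure of the quoted
# assumption: P(civilization survives) x habitable centuries x people per century.

def expected_future_people(survival_prob, habitable_years, people_per_century):
    centuries = habitable_years / 100
    return survival_prob * centuries * people_per_century

# The paper's assumption: a 1% chance of surviving until Earth is uninhabitable
# (~1 billion years), with ~10 billion people per century -> 10^15 in expectation.
print(expected_future_people(0.01, 1e9, 1e10))   # 1e15

# Nudge the survival probability or the horizon (illustrative values only)
# and the headline figure swings by orders of magnitude, with nothing to check it against.
print(expected_future_people(0.10, 1e9, 1e10))   # 1e16
print(expected_future_people(0.01, 1e7, 1e10))   # 1e13
```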

Such unfalsifiable claims lead to the following sorts of conclusions:

Suppose that $1bn of well-targeted grants could reduce the probability of existential catastrophe from artificial intelligence by 0.001%. . . . Then the expected good done by [someone] contributing $10,000 to AI [artificial intelligence] safety would be equivalent . . . to one hundred thousand lives saved. (pg. 14)

Of course, it is impossible to know whether $1bn of well-targeted grants could reduce the probability of existential risk, let alone by such a precise amount. The “probability” in this case thus refers to someone’s (entirely subjective) probability estimate — their “credence” — a number with no basis in reality, derived from some ad hoc amalgamation of beliefs. Notice that if one shifted one’s credence from 0.001% to 0.00001%, donating to AI safety would still be more than twice as effective as donating to the Against Malaria Foundation (AMF), using GiveWell’s 2020 estimates.
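To make the arithmetic behind that comparison explicit, here is a minimal sketch following the quoted calculation. The AMF cost-per-life figure in the final comment is an assumed round number standing in for GiveWell’s 2020 estimate, not a quoted one.

```python
# Expected lives saved by a donation, under the longtermist calculation quoted above.
# FUTURE_PEOPLE and the AMF cost-per-life figure are assumptions for illustration.

FUTURE_PEOPLE = 1e15   # the paper's "in expectation" future population
GRANT_SIZE = 1e9       # $1bn of well-targeted grants
DONATION = 1e4         # a $10,000 contribution

def expected_lives(risk_reduction):
    """Expected lives saved by DONATION, given the credence that GRANT_SIZE
    reduces existential risk by `risk_reduction`."""
    lives_from_grant = risk_reduction * FUTURE_PEOPLE
    return lives_from_grant * (DONATION / GRANT_SIZE)

print(expected_lives(0.001 / 100))     # 100,000 lives, matching the quote
print(expected_lives(0.00001 / 100))   # 1,000 lives even at the far smaller credence

# Cost per expected life at the smaller credence: $10,000 / 1,000 = $10,
# versus an assumed ~$4,500 per life for AMF-style interventions.
```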

A reasonable retort here is that all estimates in this space necessarily include a certain amount of uncertainty; that, for example, the difference between GiveWell’s estimates and those for AI risk is a matter of degree, not of kind. This is correct — the differences are a matter of degree. But each of those degrees introduces more subjectivity and arbitrariness into the equation. Our incredulity and skepticism should rise in equal measure.

GiveWell’s estimates use real, tangible, collected data. Other studies may of course conflict with their findings, in which case we’d have work to do. Indeed, such criticism would be useful, for it would force GiveWell to develop more robust estimates. Needless to say, this process is entirely different from assigning arbitrary numbers to events about which we are utterly ignorant. My credence could be that working on AI safety will reduce existential risk by 5%, yours could be 10^-19%, and there is no way to discriminate between them. Appealing to the beliefs of experts in the field does not solve the problem. From what dataless, magical sources are their beliefs derived?

Moreover, should your credence in the effectiveness of AI safety interventions be 10^-19%, I can still make that intervention look arbitrarily good simply by increasing the “expected number of humans” in the future. Indeed, in his book Superintelligence, Nick Bostrom has “estimated” that there could be 10⁶⁴ sentient beings in the future. By those lights, the expected number of lives saved, even with a credence of 10^-19%, is still positively astronomical.
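The arithmetic, again as a minimal sketch using only the numbers mentioned above:

```python
# Expected future lives, using Bostrom's 10^64 figure and a credence of
# 10^-19 percent that an intervention averts extinction.
credence = 1e-19 / 100          # 10^-19 percent, expressed as a probability
future_beings = 1e64            # Bostrom's estimate from Superintelligence
print(credence * future_beings)  # 1e43 -- still astronomically large
```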

As alluded to above, the philosophy validating this reliance on subjective probability estimates is called Bayesian epistemology. It frames the search for knowledge in terms of beliefs (which we quantify with numbers and must update in accordance with Bayes’ rule, else we risk rationality-apostasy!). It has imported valid statistical methods from economics and computer science and erroneously applied them to epistemology, the study of knowledge creation. It is ill-defined, is based on confirmation as opposed to falsification, leads to paradoxes, and relies on probabilistic induction, which is provably false. In other words, it has been refuted, and yet it somehow manages to stick around (ironically, it is precisely this aspect of Bayesianism which is so dubious: its inability to reject any hypothesis).

Bayesian epistemology unhelpfully borrows standard mathematical notation. Thus, subjective credences tend to be compared side-by-side with statistics derived from actual data, and treated as if they were equivalent. But prophecies about when AGI will take over the world — even when cloaked in advanced mathematics — are of an entirely different nature than, say, impact evaluations from randomized controlled trials. They should not be treated as equivalent.

Once one adopts Bayesianism and loses track of the different origins of various predictions, the attempt to compare cause areas becomes a game of “who has the bigger number.” And longtermism will win this game. Every time. Its victory is unavoidable because it abolishes the means by which one can disagree with its conclusions: it can always simply use bigger numbers. But we must remind ourselves that the numbers used in longtermist calculations are not the same as those derived from actual data. We should remember that mathematics is not an oracle of truth. It is a tool, and one that in this case is inappropriately used. Reasoning based solely on beliefs and big numbers is insufficiently constrained — it is not informative and is not in any way tethered to a real data set, or to reality. Just as we discard poor, unfalsifiable justifications in other areas, so too should we dispense with them in moral reasoning.

The Antithesis of Moral Progress

If you wanted to implement a belief structure that justified unimaginable horrors, what sort of views would it espouse? A good starting point would be to disable the critical capacities we use to evaluate the consequences of our actions, most likely by appealing to some vague and distant glorious future lying in wait. And indeed, this tool has been used by many horrific ideologies in the past.

Definitely and beyond all doubt, our future or maximum program is to carry China forward to socialism and communism. Both the name of our Party and our Marxist world outlook unequivocally point to this supreme ideal of the future, a future of incomparable brightness and splendor.

— Mao Tse Tung, “On Coalition Government”. Selected Works, Vol. III, p. 282. (emphasis mine)

Of course, the parallel between longtermism and authoritarianism is a weak one, if only because longtermism has yet to be instantiated. I don’t doubt that longtermism is rooted in deep compassion for those deemed to be ignored by our current moral frameworks and political processes. Indeed, I know it is, because the EA community is filled with the most kind-hearted people I’ve ever met.

Inadvertently, however, longtermism is almost tailor-made to disable the mechanisms by which we make progress.

Progress entails solving problems and generating the knowledge to do so. Because humans are fallible and our ideas are prone to error, our solutions usually have unintended negative consequences. These, in turn, become new problems. We invent pain relief medications, which facilitate an opioid epidemic. We create the internet, which leads to social media addiction. We invent cars, which lead to car accidents. This is not to say we would have been better off not solving these problems (of course we wouldn’t), only that solutions beget new — typically less severe — problems. This is a good thing. It’s the sign of a dynamic, open society focused on implementing good ideas and correcting bad ones.

Moral progress is no different. Abstract reasoning from first principles can be useful, but it will only get you so far. No morality prior to the industrial revolution could have foreseen the need for eight-hour workdays or labour laws. No one 1,000 years ago could have foreseen factory farming, child pornography spread via the internet, or climate change. As society changes, it is crucial that we maintain the ability to constantly adapt and evolve our ethics in order to handle new situations.

The moral philosophy espoused by EA should be one focused on highlighting problems and solving them. On being open to changing our ideas for the better. On correcting our errors.

Longtermism is precisely the opposite. By “ignoring the effects contained in the first 100 (or even 1000) years,” we ignore problems with the status quo and hamstring our efforts to create solutions. If longtermism had been adopted 100 years ago, then problems like factory farming, HIV/AIDS, and measles would have been ignored. Greaves and MacAskill argue that we should have no moral discount factor, i.e., a “zero rate of pure time preference”. I agree — but this is beside the point. While time is morally irrelevant, it is relevant for solving problems. Longtermism asks us to ignore problems now and focus instead on what we believe will be the biggest problems many generations from now. Abiding by this logic would result in the stagnation of knowledge creation and progress.

It is certainly possible to accuse me of taking the phrase “ignoring the effects” too literally. Perhaps longtermists wouldn’t actually ignore the present and its problems, but their concern for it would be merely instrumental. In other words, longtermists may choose to focus on current problems, but the reason to do so is out of concern for the future.

My response is that attention is zero-sum. We are either solving current, pressing problems or wildly conjecturing about what the world will look like in tens, hundreds, and thousands of years. If the focus is on current problems only, then what does the “longtermism” label mean? If, on the other hand, we’re not only focused on the present, then the critique holds to whatever extent we’re guessing about future problems and ignoring current ones. We cannot know what problems the future will hold, for they will depend on the solutions to our current problems, which, by definition, have yet to be discovered. The best we can do is safeguard our ability to make progress and to correct our mistakes.

In sum, given the need for a constantly evolving ethics, one of our most important jobs is to ensure that we can continue criticizing and correcting prevailing moral views. The focus on the long-term future, however, removes the means by which we can obtain feedback about our actions now — the only reliable way to improve our current moral theories. Moral principles, like all ideas, evolve over time according to the pressure exerted on them by criticism. The ability to criticize, then, is paramount to making progress. Disregarding current problems and suffering renders longtermism impermeable to error-correction. Thus, while the longtermist project may arise out of more compassion for sentient beings than many other dogmas, it has the same nullifying effect on our critical capacities.

What now?

We are at an unprecedented time in history: we can do something about the abundance of suffering around us. For most of the human story, our ability to eradicate poverty, cure disease, and save lives was devastatingly limited. We were hostages to our environments, our biology, and our traditions. Finally, however, trusting in our creativity, we have developed powerful ideas on how to improve life. We now know of effective methods to prevent malaria, remove parasitic worms, prevent vitamin deficiencies, and provide surgery for fistula. We have the technology to produce clean meat to reduce animal suffering. We have constructed democratic institutions to protect the vulnerable and reduce conflict. These are all staggering feats of human ingenuity.

Longtermism would have us disavow this tradition of progress. We would stop solving the problems in front of us, only to focus on distant problems obscured by the impenetrable wall of time.

For what it’s worth, should the EA community abandon longtermism, I think many of its current priorities would remain unchanged; long-term causes do not yet dominate its portfolio. Causes such as helping the global poor and reducing suffering from factory farming would of course remain priorities. So too would interventions such as improving institutional decision making and reducing the threat of nuclear war and pandemics. Such causes are important because the problems exist, and they do not require arbitrary assumptions about the number of future people.

My goal is not necessarily to change the current focus of the EA community, but rather to criticize the beginnings of a philosophy which has the potential to upend the values which made it unique in the first place: the combination of compassion with evidence and reason. It is in danger of discarding the latter half of that equation.

Thanks to Daniel Hageman, Vaden Masrani, and Mauricio Baker for their continual feedback and criticism as this piece evolved, and to Luke Freeman, Mira Korb, Isis Kearney, Alex HT, Max Heitmann, and Maximilian Negele for their comments and suggestions on earlier drafts.

Ben Chugg
PhD student at CMU, co-host of the Increments podcast. More havoc at benchugg.com