Why Effective Altruism Is Bad

Lyman Stone
23 min read · May 28, 2024


I have recently found myself enjoying watching a young man online walk backwards into Calvinist Universalism by gradually conceiving of God as an ultimately simple utility monster. If this sentence makes no sense to you, fine, but I am talking about Bentham’s Bulldog. He’s a philosophy student and enthusiast of utilitarianism who appears to be gradually converting himself to some variety of Christianity, a trend I applaud, even as I find his actual philosophical views almost farcically wrongheaded and the resultant theology pretty lame. Nonetheless, he is a fascinating person to read and I have enjoyed spectating.

I am not going to write about his philosophical arguments. Instead we’re going to have some friendly beef.

He writes:

Source: https://benthams.substack.com/p/criticisms-of-effective-altruism

I will not focus on his specific arguments (which are quite brief and boil down to: I did not spot a real argument in the critiques).

Instead, let me offer some novel arguments against effective altruism.

What Is Effective Altruism?

I am a sociologist, the most correct discipline. I decline to accept the notion that effective altruism is a set of ideas, certainly not ideas about effectiveness or altruism. The reason is simple: if I started a movement called “Goodness-ism” and started claiming all good things as mine, that would be a bit silly. Goodness-ism would be an empty concept. “Effective altruism” is like that. Just a pairing of two generally nice words in order to claim ownership of generally nice things.

Instead I will define effective altruism in terms of a social body which is aphilosophical. Effective altruism is simply whatever is done, stated, and believed by people who call themselves and are called by others “effective altruists.” If those words aren’t used, it’s not effective altruism. This is a very consistent standard, it is a simple standard, and it is a standard we can have reasonable discussions about without getting bogged down in rivalrous notions of effectiveness or altruism. “Effective altruism” is just a placeholder name, then, for a social group.

So what is effective altruism as a social group?

To start with, it’s new. Here’s Google Ngram:

The big takeoff is 2009/2010 there. The earlier uses are coincidences of words, rarely actually a reference to the modern EA movement.

Or see Google Trends:

You can see in search terms effective altruism was effectively invented in 2010, then didn’t get real purchase until 2013/14 or so, and has exploded since 2021.

Where is there the most interest in effective altruism?

The only cities where searches for EA-related terms are prevalent enough for Google to report them are in the Bay Area and Boston:

If I aggregate to metro areas it’s a bit better, but even then, here are the metro areas:

The San Francisco area is head and shoulders above the rest, and many California cities show up, as well as a Boston cluster.

We can also identify some effective altruist organizations. Here’s a non-exhaustive list. One central hub of this movement is Open Philanthropy, an organization that helps identify good causes and coordinate donations to them. It was founded in 2014. Here are its published ballpark figures for donations allocated:

So let’s begin to figure out if effective altruism is a good thing, or not.

Is Effective Altruism Associated With Charitable Giving?

Kinda?

We know the spatial distribution of effective altruist ideas. We can also get IRS data on charitable giving. Here’s how EA Google index scores compare to IRS-reported giving as a share of AGI:

So you can see that there is a positive relationship. Reported giving rises from under 1.5% of income in zero-EA areas to 2.2% in the very most-EA areas. I cluster those high-EA areas because there are so few of them. If everywhere in America donated as generously as the most EA-ist parts of America, total charitable giving would be 26% higher.
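To make the mechanics of that counterfactual concrete, here is a minimal back-of-the-envelope sketch. The bucket AGI totals and giving rates below are invented stand-ins, not the actual IRS figures; only the arithmetic is the point.

```python
# Back-of-the-envelope counterfactual: what if every area gave at the
# most-EA areas' rate? All numbers below are hypothetical stand-ins.

agi_by_bucket = [5.0, 3.0, 2.0]      # aggregate AGI per EA-exposure bucket, $ trillions (made up)
giving_rate = [0.015, 0.018, 0.022]  # reported giving as a share of AGI (made up)

actual = sum(a * r for a, r in zip(agi_by_bucket, giving_rate))
counterfactual = sum(agi_by_bucket) * max(giving_rate)  # everyone gives at the top-bucket rate

print(f"{counterfactual / actual - 1:.0%}")  # ~27% with these stand-ins, near the 26% above
```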

That’s not nothing. But there are two problems here. First, it might not actually be effective altruism causing higher donations. Maybe charitable people find EA appealing or interesting as an idea. Or consider me: I dislike EA, I am also very charitable, my search history has lots of EA-terms in it because I want to know where best to donate, so I research this supposedly effective group. Maybe charitable places just get more EA-exposure, but EA isn’t causing charitableness.

We could try to settle this a couple of ways. First, we could ask: did anything else happen in charity-world around 2008–2014?

Answer: yes. In 2010, the “Giving Pledge” was launched by Bill Gates and Warren Buffett, an effort to convince rich people to give lots of money away before they die. This effort predates effective altruism as a social body and includes large numbers of people who still don’t identify as effective altruists. Highly munificent rich people are not synonymous with effective altruism. Bill Gates’ (former) wife is rather infamous for giving to causes EAists regard as highly inefficient uses of funds. Or consider a group like Innovations for Poverty Action: founded in 2002, they specifically research how to identify the most cost-effective ways to spend money or design policy to reduce poverty. IPA reviews are actually a huge source of data EAists rely on to allocate their funding (GiveWell, an EAist-associated group, in particular has endorsed IPA many times). Doubtless many IPA staffers are EAists and EAists think well of IPA, but it’s obvious that a decade before EAism existed there was already a major effort to be “more effectively altruistic.”

I could go on. But the basic issue here is this: EAists like to claim credit for emphasizing “effective” charities. But everybody wants to give to effective charities! And even the kinds of evaluations EAists prefer were neither invented nor even popularized by EAists. Nor was the idea of formal giving pledges an EAist notion in invention or in popularization.

If all that is meant by “effective altruism” is “trying to use money in a rationally goal-oriented way,” then 1) it’s like saying you prefer good over bad, or else 2) it refers to a broad trend towards the metricization of charity and the increasing social normalization of Big Giving by rich people, for which EAists cannot take credit and wherein they are not even necessarily the biggest players.

One way to test this would be to ask: what happened to charitable giving in the San Francisco area as it went from being basically 0% EA-ist (in 2011), to having a thriving EA community with a huge cultural impact, especially among the wealthy?

Charitable giving fell quite a lot. Now okay, I’m cheating a bit here. For the middle tax brackets, the standard deduction rose, and as a result reported giving fell a lot even as actual giving may not have. But the fact that giving was unchanged for the very wealthiest households suggests that the rise of EAism has not been associated with greater generosity among the elite groups most exposed to it.

If we compare to a less-EA-exposed area that is in many other ways rather similar, like Austin, we see this:

Here’s the change over time:

You might think this is a sign of EA boosting giving among more modest earners. That’s unlikely. The likelier story is that Federal tax law changed to reduce the benefits of reporting charitable giving, but California has an income tax and Texas doesn’t, so the marginal effect on incentives to keep records, to file, and on the marginal tax price of giving was larger in Texas than in California. And for the rich people most targeted, there is an identical null effect.

So we can say:

  1. EAism has some spatial linkage with charitableness
  2. But not a temporal one
  3. And a bunch of non-EAist groups have been pushing for some similar kinds of strategies and behaviors at about the same time

That should all make us think that the rise of “effective altruism” as a social movement has had little or no effect on overall charitableness. This is my first major critique.

Effective altruists propagandize like they are uber-givers, but there isn’t actually evidence that these claims are true. Performative rhetoric seems like a likely explanation.

Do Effective Altruists Know What Kinds of Altruism Are Effective?

No.

Next, we can think about what things EAists give to.

You might imagine that a group fixated on “effective altruism” would have a high degree of concentration of giving in a small number of areas. Indeed, EAist groups tend to be hyper-focused on one or two causes, and even big groups like Open Philanthropy or GiveWell often have focus areas of especially intense work.

And yet, the list of causes EAists work on is shockingly broad for a group whose whole appeal is supposed to be re-allocating funds towards their most effective uses. Again, click the link I attached above.

EAists do everything from supporting malaria bednets (seems cool), to preventing blindness-related conditions (makes sense), to distributing vaccines (okay, I’m following), to developing vaccines in partnership with for-profit entities (a bit more oblique but I see where you’re going with it), to institutional/policy interventions (contestable, but there’s a philosophical case I guess), to educational programs in rich countries (sympathetic I guess but hardly the Singer-esque “save the cheapest life” vibe), to promoting kidney transplants (noble to be sure but a huge personal cost for what seems like a modest total number of utils gained), to programs to reduce the pain experienced by shrimp in agriculture (seems… uh… oblique), to lobbying efforts to prevent AI from killing us all (lol), to space flight (what?), to more nebulous “long term risk” (i.e. “pay for PhDs to write white papers”), to other even more alternatively commendable, curious, or crazy causes. My point is not to mock the sillier programs (I’ll do that later). My point is just to question on what basis so broad a range of priorities can reasonably be considered a major gain in efficiency. Is it really the case that EAists have radically shifted our public understandings of the “effectiveness” of certain kinds of “altruism”?

Let’s focus on an area where effective altruists really are emphasizing a good area: malaria. EAists are kind of obsessed with malaria. The reason for this is good: malaria deaths are largely preventable at quite low cost. That’s paradigmatic efficiency.

So here’s a graph of malaria deaths over time:

You can see malaria deaths started declining in the mid-2000s, and especially after 2010. They have plateaued since 2016 or so.

In other words, the decline began before effective altruism, it accelerated before effective altruism, and in the years since 2015 when EAism has become truly influential, progress on malaria stopped.

I’m sorry but I don’t see evidence that having your charity gain the attention of EAists leads to improving effectiveness.

Here’s what you need to understand. Effective altruists devote absolutely enormous amounts of mental energy and research costs to program assessment, measurement of effectiveness. Those studies yield usually-conflicting results with variable effect sizes across time horizons and model specifications, and tons of different programs end up with overlapping effect estimates. That is to say, the areas where EAist style program evaluations are most compelling are areas where we don’t need them: it’s been obvious for a long time how to reduce malaria deaths, program evaluations on that front have been encouraging and marginally useful, but not gamechanging. On the other hand, in more contestable areas, EAist style program evaluations don’t really yield much clarity. It’s very rare that a program evaluation gets published finding vastly larger benefits than you’d guess from simple back-of-the-envelope guesswork, and the smaller estimates are usually because a specific intervention had first-order failure or long-run tapering, not because “actually tuberculosis isn’t that bad” or something like that. Those kinds of precise program-delivery studies are actually not an EAist specialty, but more IPA’s specialty.

My second critique, then, is this: there is no evidence that the toolkit and philosophical approach EAists so loudly proclaim as morally superior actually yields any clarity, or that their involvement in global efforts is net-positive vs. similar-scale donations given through near-peer organizations.

What Makes EA Giving Different?

Okay so what is different about EAist giving? We all agree on malaria! But is malaria all EAists do?

Open Philanthropy’s 2023 report is a nice summary of EA priorities:

The first bullet point is highly conventional charitable activity. My view is that everything in that category we can basically cross off the list of “unique features of effective altruism that might make us prefer it over other putatively benevolent social groups.” Everybody does that stuff. My church does that stuff!

Scientific research is also a major area of existing philanthropic work, though I’m sympathetic to the view that funding here is inefficiently low and EAists may have nudged this some. I’m willing to give them like 50% credit here.

Let’s skip farm animal welfare for a second and look at the next few: Global Aid, “Effective Altruism,” potential AI risks, biosecurity, and global catastrophic risk. These are all definitely disproportionate areas of EAist interest. If you google these topics, you will find a wildly disproportionate number of people who are EAist, or have sex at EAist orgies, or are the friends of people who have sex at EAist orgies. These really are some of the unique social features of EAism.

And they largely amount to subsidizing white collar worker wages. I’m sorry but there’s no other way to slice it: these are all jobs largely aimed at giving money to researchers, PhD-holders, university-adjacent persons, think tanks, etc. That may be fine stuff, but the whole pitch of effective altruism is that it’s supposed to bypass a lot of the conventional nonprofit bureaucracy and its parasitism and just give money to effective charities. But as EAism has matured into a truly unique social movement, it is creating its own bureaucracy of researchers, think tanks, bureaucrats… the very things it critiqued.

You could of course say AI risk is a super big issue. I’m open to that! But surely the solution to AI risk is to invest in some drone-delivered bombs and geospatial data on computing centers! The idea that the primary solution here is going to be blog posts, white papers, podcasts, and even lobbying is just insane. If you are serious about ruinous AI risk, you cannot possibly tell me that the strategy pursued here is optimal vs. say waiting until a time when workers have all gone home and blowing up a bunch of data centers and corporate offices. In particular terrorism as a strategy may be efficient since explosives are rather cheap. To be clear I do not support a strategy of terrorism!!!! But I am questioning why AI-riskers don’t. Logically, they should.

Many of the other priorities in that list are unconventional for charities, but conventional for governments (like aid and biosecurity). Governments frequently invest in these. Perhaps they underinvest, but I’m unclear if we should regard “private philanthropists absorbing conventionally governmental responsibilities” in a positive light, to be honest. I’d like to see the program evaluation showing that this trend has good social impacts in the long run.

So a lot of the key priorities EAists identify are, we can say, basically political, and basically involve transferring money from globally privileged people to other globally privileged people.

But What About Animals?

One last priority must be noted. One of the truly most unique EAist quirks is their fixation on non-human life. You can see this all over Bentham’s Bulldog’s substack. EAists sincerely believe that there is some number of shrimp lives that is worth more than a human life.

It’s important to understand this precisely. Most of us humans regard animal suffering, especially of vertebrate mammals, with sympathy. Psychologically this isn’t a mystery: vertebrate mammals (and many other creatures, but especially vertebrate mammals) share many visible responses to pain and discomfort with us, so we recognize the pain of other creatures. That recognition hacks our empathy mechanisms, which are adapted for intragroup bonding among humans, and causes us to feel for the animal a similar, if perhaps not always as acute, feeling as we would for a suffering human.

It’s important to grasp that this behavior is, in evolutionary terms, an error in our programming. The mechanisms involved are entirely about intra-human dynamics (or, some argue, may also be about recognizing the signs of vulnerable prey animals or enabling better hunting). Yes humans have had domestic animals for quite a long time, but our sympathetic responses are far older than that. We developed accidental sympathies for animals and then we made friends with dogs, not vice versa.

But now we have those sympathies! And most of us humans regard people totally devoid of care for animal suffering as pretty bad people. If somebody tells you they torture dogs for fun, you’d probably think they’re not a very good person. The question is why you would make that inference. The deontological story suggests that you have some revealed or derived rule about animal treatment (perhaps you see animals as persons not to be used as means, or as near-persons), and you observe rule-breaking as a sign that the person is an untrustworthy rule-breaker. The utilitarian story suggests that you observe the animal’s pain and make some judgment of its severity and possible gains from it, and if the possible gains don’t seem high enough to justify the severity, you regard the person as an unreliable assessor of the good and the useful, and so untrustworthy and unreliable. The virtue ethics story says that you observe that the person derives joy and happiness from things that do not give you joy and happiness and in fact give you great displeasure, and so you infer that they have fundamental traits and values incompatible with yours, and so cannot be treated as a trustworthy ally.

All plausible stories. My view though is the virtue ethics position is actually the correct one for most people. Mostly we dislike animal-torturers because animal-torturing causes us a lot of displeasure and discomfort and disgust, and we observe that since it doesn’t cause the animal-torturer that feeling, the animal-torturer is not One Of Us, he inhabits another moral tradition or moral world not reconcilable to ours, and so we are opposed. Virtue ethics works as an account because it explains how human moral communities actually form: mostly around personalities, social groups, kinship, etc, rather than pure ideas. Our moral functions are mostly about recognizing the group.

I’m not saying our morals should be that way, just that they are that way. Humans are group reasoners, and in the vast majority of cases will subordinate their individual moral calculations to leader effects, collective decisionmaking, or peer pressure. Far from treating this as some kind of aberrant irrational behavior at odds with our moral functions, we should take seriously the possibility that this actually is an intrinsic element of our moral function, and that all our moral beliefs are in some sense efforts to look into other peoples’ hearts and guess if they are One Of Us.

And this gets back to animals. Many of us dislike wanton cruelty to animals and favor measures to prevent it, like banning cockfighting or something. But the reason we dislike this cruelty for most of us is not actually about the theoretical notion of the severity of the animal’s pain, but the extremely obvious notion of the torturer’s perverse glee.

If a person shoots a horse in the head and laughs while doing it, we view them with justified suspicion. If a person shoots a horse in the head but is grim and dour as they do it, we are far less likely to make the same inference, and anybody who says otherwise is lying. The reason our inferences are sensitive in this regard is because we are intuitive virtue ethicists. Our affectual dispositions about actions alter the assessments others make of us and, as a result, of unrelated third actions and of potentialities for community.

So if the reason most of us dislike animal cruelty is because of what it reveals about the traits of people, then most of us would think things like, “I favor banning cruelty to pets, but even though shrimp feel some pain, I’m okay with eating shrimp.” Because shrimp-catchers aren’t out there cackling with satisfaction every time a shrimp dies, and we aren’t biting into the shrimp going “Ah, I’m so glad this animal suffered.” We can infer that the shrimp-catcher or the shrimp-eater is not engaging in behavior that reveals him or her to be Not One Of Us. These behaviors are not revealing any particularly unusual lack of sympathy.

But EAists disagree. EAists will make a simple argument. Shrimp suffer. Suffering is bad. Mathematically, even if animal suffering “counts” for even a small fraction of what human suffering “counts” for, factory farms in particular must generate suffering on a scale far exceeding all the suffering of humanity combined. As a result, it makes sense to redirect funds away from human-welfare projects towards shrimp-welfare projects, or cow-welfare, or chicken-welfare, etc.
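For concreteness, the arithmetic behind that argument runs roughly as follows. Every number here is a hypothetical stand-in I am choosing for illustration, not anyone’s published estimate:

```python
# Rough shape of the EAist suffering comparison. All numbers are
# hypothetical stand-ins for illustration, not published estimates.

humans_in_serious_suffering = 8e9 * 0.10  # suppose 10% of humanity is badly off at any moment (made up)
farmed_animals_alive = 1e11               # farmed animals alive at any moment (order-of-magnitude guess)
animal_moral_weight = 0.01                # suppose an animal "counts" for 1% of a human (made up)

human_total = humans_in_serious_suffering * 1.0
animal_total = farmed_animals_alive * animal_moral_weight

print(animal_total / human_total)  # 1.25: even a 1% weight makes the animal side dominate
```

Notice that the conclusion is carried almost entirely by the two made-up inputs; that sensitivity is part of what the welfare-function section below is about.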

Here, the EAist reveals a fundamentally different moral system. Not, mind you, the classical vegetarian one: “The shrimp suffers, therefore I will not eat shrimp,” but rather, “The shrimp suffers, therefore I will donate to alleviate shrimp suffering instead of human suffering.” Remember, we already established that EAism probably doesn’t boost total giving much, so it really is tradeoffs here.

The EAist faces the trolley problem every morning, and tries to weigh the costs and benefits of the two sides. Whereas, the rest of us simply do not accept that there is actually a trolley problem: the moral weight of animal suffering is not in the animal but in the person, and in particular in what their behaviors tell us about how they will treat other people, and most particularly how they will treat us and others like us. The fact that omnivory has been the overwhelming norm in every scaled-up human society ever documented reveals that there is a human universal of making moral behavior mostly about humans. The exceptions, like Jains, have never been majorities of any society, and when they have accrued social scale have usually done so by abandoning the most demanding interpretations of their beliefs. The EAist commitment to animal welfare then is not similar to personal commitments to veganism, which do not implicate human welfare but simply save animal lives. Rather, the EAist commitment to animal welfare is a commitment, indeed an actual practice, of trading off human lives for animal ones.

Thus, my next critique of EAism is that it undertakes a fundamentally immoral task of encouraging people not simply to care for animals more (good!) but to trade off some human lives to get benefits for non-human animals.

(as an aside, if we encountered sentient aliens, my argument would stand the same: you should prefer humans over aliens, though sentience would greatly increase possibility for cooperation so I would hope this preference could be manifested alongside cooperativeness, in the same way I prefer my wife over all other women, yet can do cooperative things like write a white paper with women)

Whither the Soul?

We have discussed some curiosities of EAist giving. But now let us consider an absence. A huge share of charitable giving in America is religious. The argument for religious giving is simple: religious organizations presume to offer eternal benefits for immortal souls, including the souls of the very poor (or in some religions animals!), and so giving to promote those benefits is highly efficient.

If souls exist and are immortal then charities that are able to generate benefits for the immortal state are infinitely effective. The benefits of moving one soul out of hell and into heaven, if you’re into that, are clearly infinite.
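In rough expected-value terms (a sketch, with p and c as stand-in symbols rather than anyone’s estimates):

```latex
% p = probability a donation actually secures an eternal (infinite-utility)
%     benefit for one soul; c = finite cost of the donation.
\frac{\mathbb{E}[U]}{c} \;=\; \frac{p \cdot \infty}{c} \;=\; \infty
\qquad \text{for any } p > 0,\; c < \infty
```

so soul-directed giving dominates every finite-benefit cause, no matter how small p is.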

Effective altruism, then, is either theocratic or atheistic; those are the only options. If you are trying to maximize welfare, then plausibly eternal welfare is by far going to outweigh any current costs. This is Bentham’s Bulldog’s actual response to the problem of evil, by the way: that current suffering will pale in comparison to ultimate, eternal, infinite, final good. I agree with him on that. But he has not yet worked out the next implication:

That the most important task in life is therefore to maximize the welfare of people’s immortal states, even if it means murdering them in their infancy so they don’t experience pain in life.

Now, you can postulate a universalistic system where all are saved and that can seem to get you out of the bind. But even then, if the best thing for somebody is eternity, and if your goal is to maximize net lifetime good-things-happening, shouldn’t you kill infants? Conceive just to abort? Wouldn’t that maximize both total and average utility in eternity?

And of course there is the justice problem: a God concerned with balancing all current evil with future good seems interested in justice (hence the effort to balance or exceed with good!). It’s unclear how such a God will then be reconciled with things like “Unrepentant Hitler going to heaven.” A world where good ultimately outweighs evil, but in particular where in eternity the outcome of temporal evil is identical to the outcome of temporal good, is problematic. The scales become non-transitive. You’ve got to have some fire to make it work, which is why there are essentially no long-lasting universalistic religious traditions. Even no-hell traditions still posit gradations-of-goodness of the next life, because moral instability today matters, and because it’s just as hard to square omnibenevolence with “Unrepentant Hitler going to heaven” as it is to square it with “Innocent baby dying.”

Thus, the effective altruist, if they allow the existence of a soul, would have to stop giving to malaria programs and start paying for missionaries of their chosen religion.

Instead, EAists don’t give to missionaries — at all, AFAIK. Religious charity isn’t their jam.

Because the obvious way out here is to posit that there is no soul. That shrimps and people differ in neurons and cells and pain tolerance but not in some deeper more qualitative way.

So one reason many of us are skeptical of EAism is because baked into its claims is a denial of the spirit: “effectiveness,” if a soul exists, would involve some attention to the eternal, yet EA in fact gives zero attention to the eternal, implying souls don’t exist for EAists.

What Is the EAist Welfare Function?

I wrote a long thread a while back on the ambiguity in implied welfare functions for EAists.

The issue is that EAists are committed to the idea that human welfare is interpersonally comparable, i.e. that it’s possible to say, “A 2% reduction in child abuse odds in Idaho is worth X whereas a 4% decrease in time-to-recovery from influenza in India is worth Y, and Y is greater than X.” There are metrics we can use: life years, dollars, etc. The problem is that not everybody values these things the same in cross section, individuals don’t value them the same longitudinally, and they also don’t even value them the same in expectation under various counterfactuals. I’m not just reiterating the calculation problem in utilitarianism as a philosophical issue; I’m saying EAism rests on the notion that not only can the problem be philosophically overcome, but also that it can be practically overcome, that it already has been overcome, and that they have found the best uses. You can be a good utilitarian without thinking: just get on GiveWell!

This all depends, however, on the presumption of an actually shared welfare function: yet EAists usually don’t actually tell us a welfare function. How many shrimp lives equal a human one? Give me a number. Your value system requires you to have a number there. Let’s come up with a list of 500 bilateral trade relationships of what EAists would be willing to trade for what, and then spot how many of those lists result in non-transitivity!
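To make that exercise concrete, here is a minimal sketch of how you would actually run the test. The three elicited judgments below are invented for illustration, not drawn from any actual EAist:

```python
# Minimal transitivity check over elicited "I'd trade B away to get A"
# judgments. The example judgments are invented for illustration.
from itertools import permutations

prefers = {
    ("1 human life", "10,000 shrimp"): True,   # hypothetical answers
    ("10,000 shrimp", "1 QALY"): True,
    ("1 QALY", "1 human life"): True,          # closes a cycle
}

def is_transitive(prefers):
    """Return False if any triple of elicited judgments forms a cycle."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((c, a)):
            return False
    return True

print(is_transitive(prefers))  # False: no welfare function can rationalize these answers
```

A consistent welfare function assigns every item a number, and numbers cannot cycle; so any cycle in the elicited list is proof that no such function is actually in use.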

Because, in fact, preferences are often idiosyncratic and non-transitive, and because, in fact, many goods and bads are extremely difficult or impossible to compare, EAism ends up not able to add much to the debate. EAists cannot explain to me why Aztec human sacrifice was wrong: are you actually sure the utility gained by the Aztecs was less than the utility lost by their victims? The life-years lost by the victims were pretty low given typical age and life expectancy, whereas the celebrants enjoying the show numbered in the hundreds of thousands.

EAism only works if we presuppose a rather non-EAist welfare function from the beginning: maximize welfare but within a long list of basically deontological rules.

For another example, consider the asymmetry of pleasure and pain. Imagine that future pleasure is only moderately valuable, but current pain is very, very dispreferred. This is a pretty plausible model of the human mind, as it happens. You don’t have to push on that model very hard to argue that the typical human life isn’t worth living: David Benatar actually argues this, and he has also published a cute little article explaining how Peter Singer’s ideas (he’s kind of the godfather of effective altruism) very easily justify coercive sterilization of the poor by the rich. You needn’t take it too much further to come up with a model where we are effectively altruistic by killing some people.
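In symbols, the model is something like this (B, P, and lambda are stand-in quantities for illustration, not estimated parameters):

```latex
% B = lifetime pleasure, P = lifetime pain, lambda = extra weight on pain.
U_{\text{life}} \;=\; B - \lambda P,
\qquad \lambda \gg 1 \;\Rightarrow\; U_{\text{life}} < 0
\;\text{ even when } B > P
```

Pick lambda large enough and the “effective” intervention flips sign.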

My point isn’t that EAists are secretly homicidal. They aren’t! Every EAist I know is quite nice!

My point is that EAism isn’t why they are nice. They are EAist because they are nice people who feel they should do good; that is, they actually have a crypto-welfare-function separate from EA that they decline to formally incorporate, because doing so would reveal that EA doesn’t really work as a system unless you’re already a nice guy.

Who Gets the Credit?

Where Bentham’s Bulldog is correct is that a lot of the critique of EAists consists of personal digs.

This is because EAism as a movement is full of people who didn’t do the reading before class, showed up, had a thought they thought was original, wrote a paper explaining their grand new idea, then got upset when a journal didn’t publish it on the grounds that, like, Aristotle thought of it 2,500 years ago. The other kids in class tend to dislike the kid who thinks he’s smarter than them, especially if, as it happens, he is not only not smarter, but astronomically less reflective.

EAists pretty routinely identify things like IPA, or JPAL, or the Giving Pledge as the kinds of things they are for. Yet they all predate EA, are not coextensive with EA, and rather are, like EAism itself, primarily symptomatic of a more widespread metricization and systematization of charity. This metricization arose from several sources. Rising super-wealth putting more money into private philanthropy allowed the sector to professionalize in new ways. The end of the Cold War opened up tons of new avenues for projects and reduced the sensitivity of projects. Expanded computing power and an explosion in the sheer number of researchers working in rich countries massively increased analytic capacity in numerous fields. Increasing expert awareness that official development aid had highly debatable and dubious effects led to a decades-long debate over alternatives. And the rise of a highly nonreligious group of extremely high-net-worth individuals disproportionately unsympathetic to prevailing Western/American social norms (I’m speaking of techbros; for a counterexample, the Saudis are not quite as into EA as Americans. The Saudis believe in souls, and put tons of money into Islamic education and proselytization. True effective altruism at work!) led to a detachment of charities from much of the historically religious base for their work. I could go on but you get the point.

A wide range of social phenomena have led to a revolution in how charity happens. EAism is one rather late-coming part of that. It is primarily defined not by exceptional cleverness in finding good projects or lack of bureaucracy, but basically by its commitments to future-risks-requiring-white-collar-researchers, its commitment to enough-shrimps-are-worth-more-than-people, its commitment to humans-don’t-have-souls, and its commitment to not-stating-your-welfare-function (unlike e.g. religious charities, which state theirs very clearly). That a group with such idiosyncrasies would turn around and so often claim to be almost single-handedly responsible for revolutionizing charity around the world and saving bajillions of lives — yeah, it’s irritating, especially for Christian-informed charities, who tend to have a norm against excessive “doing good so that others may see.” If EAists are upset they don’t get good-faith responses, then they should stop acting in bad faith: your modestly sized movement can claim credit for not very much, and in its areas of greatest social prevalence has had little observable impact on behavior. The charities you support include many awesome ones and many odd ones, just like every other charitable movement. Admit you’re not special and you’re muddling through like everybody else, and then we can be friends again.
