A year in Effective Altruism: observations and criticisms (and tofu)

Jacob Funnell
16 min read · Aug 14, 2016

Peter Singer accepts the Tofu offering to Him, to satisfy His preferences

In July 2015, I plugged ‘effective altruism Brighton’ into Google.

To my surprise, there was a Meetup group locally. I wandered along and talked about Peter Singer and wrote about him on someone’s arm and went to the beach in the moonlight.

It was a good evening.

Since then, I’ve read lots of books, articles and posts about EA, written hundreds of comments, given communications advice to a number of EA orgs, attended an EA conference, ritually sacrificed tofu to Peter Singer, and made many friends.

A year on, here are my general observations about EA: what’s surprised me, what I’m unsure of, and what I’ve found the broader EA community to be like. I’ve divided it into three parts:

Philosophy covers EA’s philosophy (or philosophies), including whether or not EA is a complete theory of ethics, how EA presents its more controversial conclusions, and the extent to which I think EA is wedded to utilitarianism.

Community covers how I’ve found the EA community, and what it’s been like to be an EA. I try to cover how far the online EA world differs from EA in meatspace.

My next year in EA gives a few thoughts on what I plan to do in the coming year.

PHILOSOPHY

‘Utilitarianism’ is an extremely broad label

Before coming to EA, I always thought that utilitarianism would be a fairly unified thing. Not in the sense of utilitarians agreeing on every moral case, just that utilitarians would generally be more united among themselves about what we should do than they would be with deontologists.

I’m not so confident about that any more. The varieties of utilitarianism are bewildering. Each assigns different weights to a number of values, including:

  • The importance of present vs future beings.
  • The importance of reducing suffering vs increasing well-being.
  • The degree to which other broader principles can be brought into play (e.g. whether you can mix in some rights talk, as Singer does with self-conscious persons).
  • The degree to which conscious experiences are commensurable and comparable.

These utilitarianisms also draw on a variety of different meta-ethics, from Sam Harris-style moral realism (which seems to have a fair bit of traction in EA circles) to thoroughly anti-realist theories, among many others.

These differences in meta-ethics matter most when EAs talk about how obligatory (or not) EA-type acts are. Opinions here vary a great deal, from ‘EA gives us good ideas for things you should do if you happen to care about others’ to ‘EA demonstrates that there’s a strong moral obligation to live your life as a utility-producing, suffering-reducing machine’.

Beyond this, what EAs feed into their utilitarian calculations differs radically, including:

  • The number and type of sentient beings.
  • The nature and measurability of conscious experience.
  • The extent to which we can influence the future, and the methods by which we do this.

It’s as if a team of physicists were all using the equation E = mc², except that some physicists gave c a value of 1,000,000,000,000,000 mph, others gave it a value of 0.00001 mph, and still others thought we shouldn’t really worry about putting mass into the equation at all.

However, though the range of EA opinion really does seem to me to be about this wide, that doesn’t tell you what the distribution of opinion is like. I don’t know how far there’s a broad consensus building among EAs.

For example, how far do most EAs think the far future is a dominant priority? How many EAs are some form of hedonic utilitarian, or lexical threshold negative utilitarian, or whatever? What percentage of EAs think insects can suffer? How many don’t really know or think about any of these questions?

I’d love to see some empirical work on this. We could see whether the movement shows any overall trends over time, and whether there are any patterns among people in the different cause areas. It would let us be much clearer about what EAs actually believe.

EA avoids putting its controversial conclusions up-front

Some effective altruists hold views that would be highly controversial to most people outside EA. EA does a pretty good job of either not mentioning or actively avoiding these conclusions in its public-facing literature.

Examples include the moral imperative to destroy habitat in order to reduce wild animal suffering[1], the need to divert funds away from causes like poverty relief and towards artificial intelligence safety research, or the extreme triviality of aesthetic value[2].

These are the kinds of conclusions that get left out of public-facing media like books, websites, TED talks and articles in the mainstream press.

I really get why this is – telling the wider public that ‘some EAs believe buying a chocolate bar is a case of partial homicide’ is hardly going to get many people on board, unless they already feel really, really bad each time they buy a Snickers. I also get that popularity says nothing about truth — if the general public find a proposition controversial, so what?

However, I still find this double-sidedness of EA an uncomfortable situation. Will MacAskill’s book Doing Good Better is a bestseller, but it doesn’t really hint at these more controversial or wacky-sounding conclusions in EA.

So I sometimes wonder how far I’m actively misleading people if I say ‘EA is about all these obvious things you agree with, like helping others more rather than less, so it’s a wonderful opportunity to help as many people as possible’.

I almost prefer the framing ‘EA is a wonderful and highly challenging intellectual movement that aims to do the most good, a principle which often (but not always) translates to reducing as much misery and suffering as possible. Different EAs hold widely diverging views on how to do this, and have come up with conclusions that you may find enlightening, counter-intuitive, fascinating or deeply disturbing. Whatever you end up thinking about EA in the end, you have to grapple with its arguments about how and why we’re prioritizing our time and resources, both as societies and as individuals. I think it’s one of the best attempts at thinking through ethics with reason and evidence that’s ever existed, and it may be the single best thing you ever learn about — but don’t be surprised if it makes your head spin or makes you feel uncomfortable, sometimes deeply so.’

This would probably not work as a way of marketing EA (!), but it seems to me it would be a whole lot more honest about the huge universe of EA that lies beyond poverty relief and GiveWell – from MIRI and Animal Ethics to the Future of Humanity Institute and Leverage Research.

EA collides with other movements more easily than it endorses them

Most other movements endorse intrinsic values, like equality, inclusion, lack of oppression &c. For example, feminists (usually) value equality between men and women for its own sake.

EA’s utilitarianism works very differently, as nothing really matters intrinsically other than the reduction of suffering and (depending on your variant of utilitarianism) the increase of well-being.

Let’s take a concrete case where these approaches are at loggerheads. A feminist would see equal representation between men and women in a broad social movement as intrinsically valuable, whereas an EA wouldn’t see that as a goal in itself. If a movement is 75% men and 25% women, that’s only a bad thing if the consequences of that imbalance are bad.

This conflict came up in a real world discussion on the EA forum. To paraphrase a prominent EA: ‘we shouldn’t be worried about getting more women in EA for its own sake, as the point is to do the most good’.

This may push a few moral buttons for some readers (it certainly did for me). If pressed on this point, I think most EAs would respond that an indifference to equality feels wrong only because a non-inclusive policy within EA has net negative outcomes.

This is a small example of a wider problem (or difficulty, depending on your view): EA’s explicit rejection of the intrinsic value of equality, fairness, justice &c., and its willingness to run the calculations to see if these values make sense, make it harder for EA to co-exist with other social movements that are more likely to see these values as sacrosanct.

EA is overwhelmingly utilitarian

I don’t think I’ve met any EA who doesn’t identify with some kind of utilitarianism, at least in so far as believing it’s the best theory with which to work out the most ethical actions.

This means that other moral views don’t get much attention within EA. There are a few people within EA (like Carl Shulman and Nick Bostrom) who do seem prepared to give other moral theories (like deontology) weight, even without assenting to them themselves.

I think they’re on the right track. I don’t think that utilitarianism is the only theory worth our time, just that it’s the most plausible one. Of course, when we’re trying to model reality, the most plausible theory is the one we should give assent to. But ethics is about making decisions, rather than trying to model reality, so it may help to think about our ethical theories less as competing scientific paradigms, and more as guides to making decisions.

But how can you act on multiple decision-making processes when they recommend sharply contradictory things, as utilitarianism and deontology can? It’d be like obeying the orders ‘stand still’ and ‘run’ at the same time.

I find Nick Bostrom’s idea of the moral parliament really helpful here. The details of the theory are complex, but the overall idea is simple. Instead of assenting to the most plausible ethical theory and giving it an absolute majority every time we’re faced with a moral choice, we should model our moral thinking as a negotiation between different moral theories to which we give varying credences.

Theories we give a lot of credence to get many seats in our personal moral parliaments, and theories we give little (but non-zero) credence to get few seats. So we may end up with a parliament consisting of 95% utilitarians, but with a few beardy Aristotelian virtue theorists making up the remaining 5%. Each time a moral case is put before us, the different parties of our moral parliament have to decide how they’re going to vote, and that involves a lot of negotiation between the various factions.

This allows for the strongest arguments of a variety of moral theories to have some bearing on our decisions. (Bostrom’s original post outlines how to arrange the parliament’s voting to prevent the parliament constantly going with whatever the largest party thinks.)

For example, Kantians would be particularly outraged if we secretly killed a prisoner so we could harvest their organs for transplant into ten other people, even if the prisoner listened to Limp Bizkit unironically. It’s this kind of situation where Kantian concepts like the ends vs means distinction have their greatest appeal, and where utilitarian justifications have the hardest time specifying why killing someone to harvest their organs is wrong[3].

In Bostrom’s moral parliament, the Kantian party would really dig their heels in to save the prisoner, and their voices would matter — even in the moral parliament of someone who gave an 80% vote share to utilitarians.
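
For readers who like to see the mechanics spelled out, here’s a minimal toy sketch of a moral parliament in Python. The seat shares, the intensity-weighted voting rule and the two options are my own illustrative assumptions, not part of Bostrom’s actual proposal (which involves a more elaborate negotiation procedure); the point is just to show how a small but vehement Kantian bloc can outweigh a large but lukewarm utilitarian majority.

```python
# Toy model of Bostrom's "moral parliament" idea. The numbers, the
# intensity-weighted voting rule and the example options are my own
# illustrative assumptions, not Bostrom's actual proposal.

from dataclasses import dataclass


@dataclass
class Party:
    name: str
    seats: float       # share of the parliament, proportional to credence
    # How strongly the party cares about each option, from -1
    # (vehemently opposed) to +1 (strongly in favour).
    preferences: dict


def vote(parties, options):
    """Pick the option with the highest seat- and intensity-weighted support.

    Weighting votes by intensity is one crude way to stop the largest
    party simply winning every division: a small party that cares
    enormously can outweigh a large party that is nearly indifferent.
    """
    scores = {
        option: sum(p.seats * p.preferences.get(option, 0.0) for p in parties)
        for option in options
    }
    return max(scores, key=scores.get), scores


parliament = [
    Party("Utilitarian", 0.80, {"harvest organs": 0.2, "spare prisoner": 0.1}),
    Party("Kantian",     0.20, {"harvest organs": -1.0, "spare prisoner": 1.0}),
]

choice, tally = vote(parliament, ["harvest organs", "spare prisoner"])
print(choice, tally)
# The Kantians' vehemence carries the day despite holding only 20% of seats:
#   harvest organs: 0.80 * 0.2 + 0.20 * (-1.0) = -0.04
#   spare prisoner: 0.80 * 0.1 + 0.20 * 1.0    =  0.28
```

Giving minority parties leverage through how much they care is just one crude stand-in for the negotiation Bostrom describes; the point is only that a theory you give 20% credence to can still decide the outcome when it cares far more than the majority does.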

So I think the moral parliament model offers some way of introducing a little moral diversity into EA, by allowing people to give some credence to other moral theories without actually assenting to them as the correct moral theory. It could give people who have trouble fully identifying with utilitarianism a way of reconciling their beliefs with EA.

That said, this solution is very limited. The concept of ‘doing the most good’ is basically utilitarian in itself, so I don’t think EA is really compatible with avowedly non-utilitarian thinking. Ultimately, only people who have a good majority of utilitarians in their moral parliaments are going to be able to get on-board with EA. Thankfully, I think utilitarians have a majority in my moral parliament, so I’m alright.

EA shouldn’t be thought of as a complete theory of ethics

A lot of moral theories have given thought to how ethics fits into a wider context — how you should act towards your family and friends, what obligations you owe to society, what the best social virtues are, and so on. (For example, questions like these are at the centre of Confucianism.)

EA doesn’t really have any of this worked out. That’s okay to a degree. It would muddy the waters to mix in discussions about the duties of parents to their children with discussions about randomized controlled trials or the best ways to avert existential risks.

But it needs to be explicitly acknowledged that EA largely ignores these kinds of interpersonal and societal questions, and that this does not mean these areas of ethics are unimportant.

This might be a bit abstract. So here’s a concrete example:

Let’s say you want to call grandma, but you also want to read a book. You know she can get quite lonely, but you also feel that you’ll get roped into a long conversation that you won’t particularly enjoy.

This is a pretty low-key example, but it involves lots of ethical questions: How far do I have obligations to my extended family? How far does anyone else have a claim on my time? Is calling grandma really the best I can do to help — maybe I should do something more practical instead? Is providing help and support to grandparents the role of the individual, or of society?

Imagine an EA bypassing these questions and saying ‘whatever you do for yourself or for your grandma has a lower expected value than giving the same time to working an extra couple of hours and giving the money to the Against Malaria Foundation’.

At this point I’d argue that person has mistaken the language of EA for a complete language of ethics. So how can EA accommodate this?

EA could engage with our interpersonal and societal ethics by working out which familial, interpersonal and societal relationships and norms allow people to do the most good. In other words, EA could try to come up with models of the family and society that support EA goals. I imagine the familiarity of people in EA with game theory and social psychology &c. could help here.

Alternatively — and more realistically — EA could just work alongside a wider view of ethics without explicitly offering any answers itself.

The Giving What We Can 10% pledge seems a good example of this. There, you give 10% of your income to doing the most good. With your obligation/opportunity to be effectively altruistic met, you can now figure out for yourself whether or not you’re obliged to spend your evening calling grandma when you’d rather read a book.

This means you can ponder the wider questions about family and society without adopting an explicitly EA framework. Of course, your thoughts on these questions might be informed by your EA views — a view of society that saw your family as the most important ethical consideration would soon be rejected.

But you would also reject the view that an action is an ethical action if and only if it’s the action you can justify in EA terms. You would reject directly comparing ‘calling grandma’ with ‘buying bednets’. Instead, you’d see EA as describing part of your moral life, rather than literally all of it.

In so doing, you’d take the view that ethics is wider than EA. This is the solution which seems the most plausible to me.

COMMUNITY

People are generally nice and friendly

In my experience, it’s been rare for an EA to be less than civil, and I’ve found lots of EAs to be nice and welcoming. Out of all the encounters I’ve had, I can think of exactly one time when someone was outright rude and condescending towards me without cause. Given how many people I’ve met through EA, that’s pretty impressive.

I was extremely lucky to meet Holly Morgan and Larissa Rowe early on, as they’re both very smart people who genuinely listen to and engage with what you’re saying.

Being around EAs has also helped me level up in life, and I mean levelling up by my lights and given my goals. A standout example of that is Complice, a tool I only learnt about through briefly working at the EAF offices in Berlin, and which is now the basis of everything I do at work.

Meeting EAs made doing EA-type things easier

Just knowing people who donate lots to charity made it easier for me to do the same thing. It’s probably been one of the most valuable things I’ve got out of EA.

Being around EA animal advocates has also helped me reduce the animal products in my life still further (from a vegetarian beginning). This isn’t solely down to effective altruism — my amazing vegan partner and her culinary skillz really helped with this too — but EA has certainly played a part.

EAs can be different in conversation from how they are online

For the first six months or so, reading the EA forum/Facebook group was always pretty intimidating, not least because the benchmark for saying something new and interesting was so high.

There are strong selection effects. You only see the people who are contributing to any given discussion, who are often the people who are most able to. It’s easy to forget that the vast majority of EAs don’t contribute to the discussions online. If you’re not able to add anything, you’re in the majority.

Unfortunately, if you don’t keep this in mind, it’s easy to come away with the impression that everyone is an expert about everything. I have to fight this tendency to see everyone in EA as having all the collective knowledge that the hivemind does, as absurd as that sounds.

I’ve found things to be very different in conversation. In person, many EAs come across as unsure of lots of things. They haven’t read all the non-fiction books you really should read. They’re not aware of everything. The standard for saying something new and interesting is generally lower.

I’ve also found that, in person, some people are more willing to express doubt about fundamental foundations of EA, or speak much more speculatively. I’ve actually come to enjoy talking in person more, as it takes the pressure to be a Very Clever Person off a bit, and leads to people being more willing to admit what they’re confused about.

This leads me onto another key thing:

A sizeable minority of EAs are really unsure about things

Back when I first started meeting EAs, I often felt utterly outgunned. It’s not easy spending your time in an ecosystem where there are people who can argumentatively and empirically outflank you on anything you care about.

I wondered if I was the only person who felt this way. Talking to more people, reading more threads and seeing posts in the EA self-help group made me realize that I wasn’t that unusual. Other people felt unsure or confused, and a bit bewildered when faced with lots of brilliant people confidently making contradictory claims about how you should spend your time, money and resources.

I think EA could use some more resources for guiding people through the practical business of deciding how and where to give. The problem profiles from 80,000 Hours are the best example of that so far, in that they help readers get clearer on which judgement calls they need to make when deciding between different cause areas.

It’s easy to put more effort into ‘knowing all EA things’ than doing the ideal EA actions

The ideal EA action could be earning to give or becoming better at whatever field you could make a uniquely impactful contribution to.

It’s probably not drinking from the firehose of endless EA articles and discussions on Facebook.

But it’s very easy to do this. It’s very easy to think that by spending lots of time keeping up with EA, you’re doing something morally valuable.

Obviously, there’s a degree to which being better informed can greatly magnify your impact, owing to the huge amounts of expected value just waiting to be stumbled across like rare Pokémon. And for some of the academics and researchers in EA, it’s clearly part of how they have the biggest impact.

But that may not be the case for everyone. If we’re serious about improving the world, there has to be some payoff for the effort we put in to finding out about EA. It has to lead to some actions that change the world (and the future of the world) for the better.

So I’m trying to reduce the net amount of thinking/reading I do about EA. It’s not easy. All the cool discussions that happen on Facebook/the EA forum/in person strongly incentivize me to spend lots of time becoming informed so I can contribute to them. But surely contributing to discussions isn’t my end goal as an EA!

So instead, I’m trying to figure out how I can best use my time, money and skills to help EA-approved organizations — even if it means not keeping up with as many of the cutting-edge problems in EA as I have tried to.

MY NEXT YEAR IN EA

I really want to use my skills to help EA organizations, probably through volunteering — and I really want to develop those skills so they become more useful. I think this will give me a bit more focus than just ‘learning about EA’ in general.

I’m going to think harder about what impact I’ll have; I’ve had a bit of a tendency to offer my help any time I can. I need to really think about what an organization’s needs actually are and whether I’m willing and able to give the level of support they need for a given project. (Sometimes the answer will be ‘oh my goodness, yes I am!’; those are the projects I really want to help with.)

I’m also going to rethink the targets of my donations. Even if I end up keeping my donation targets the same, I want to put to use some of what I’ve learnt about the different causes and organizations beyond GiveWell.

I’ll also probably go to the next EA conference in Oxford. And if I get another chance to scrawl on someone’s arm about Peter Singer, I’ll take it.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

[1] E.g. deliberately cutting down an area of virgin rainforest and replacing it with farmland, so long as we’re confident there will be less overall wild animal suffering once it’s been changed. However, note that some EAs (a minority?) reject the view that life in the wild involves net negative welfare.

[2] I thought about linking to these discussions, but I’ve decided that’s probably not a worthwhile thing to do. (If you’re someone I know and trust and you want to know where I saw any of this, I’ll happily say privately.)

The point here is not to single out individual comments or people. I just don’t want to pretend these conversations don’t happen, nor do I want to pretend that EA is fundamentally about relatively noncontroversial causes like bednets and factory farms.

EA’s radicalism comes partly from the willingness of so many EAs to bite bullets and favour reason over intuitions, and I’m sure that will lead to many more conclusions that the broader public will find hard to swallow. In this regard, I think EA is similar to many other movements throughout history.

[3] I’m not denying that utilitarian justifications for not killing the prisoner exist. There are lots of them. But I do think they have the least appeal here. They feel like workarounds compared with the solid Kantian injunction to not do it under any circumstances.
