Some Good, but maybe not The Most Good

Sarah Simpkins · Published in Some Good · 9 min read · Feb 1, 2023

At the very beginning of my first semester as a philosophy student, I had high hopes that I would be able to write on Medium while earning good grades and applying for graduate programs.

That may have been a bit optimistic.

It is currently Week 3 of Semester 2, and I just submitted the last of my applications for graduate school. Without the pressure of juggling applications alongside schoolwork, I’m hoping to write more this semester. I’m also hoping that writing helps me avoid pacing around my room in circles for the next two months, waiting on application decisions.

Fingers crossed…

Outside schedule constraints, there was another reason I waited a bit to come back to this publication.

I’ve thought deeply about my position on the Effective Altruism (EA) movement since writing my initial post here. In that post, I talked about my interest in philosophy and about how EA played a role in sparking that interest. One particular quote from that post has been bothering me ever since:

There is another behavior of effective altruist philosophers, economists and public figures that I appreciate: they’re willing to be wrong. Publicly.

I’m not sure I can add much value to the conversation about EA in the wake of the FTX bankruptcy. However, this publication — like most of what I write on Medium — is a learning project. As evidenced by this quote, I still think there is value in sharing thoughts and ideas before they are ready to be published in academic journals. I needed to write this, to help me understand what I think.

I’ve linked several references at the end of this post if you want to read more on EA generally and on EA post-FTX.

Back to the quote, and the larger question it raises:

What if someone is so wrong that they are not only doing less good than they could be doing, they are doing harm?

Some philosophers (and maybe some economists) would push back on this question by arguing that doing less good than we could be doing and doing harm are actually the same. After all, using our time and resources in a non-optimal way means that someone else doesn’t get to benefit from our time and resources.

In some framings, this argument that not doing the most possible good is equal to doing harm feels intuitive. If I know that a charity where my friend works only helps one person per $10 donated, but another charity helps 10 people per $10 donated, am I not doing some kind of harm by donating to the less efficient charity? What if I don’t know anyone who works at either charity and I just pick the less efficient one at random, accidentally?

This seems like a strong argument that we should all have access to good information about charities. However, this doesn’t seem like a strong argument that not helping people as much as possible and harming people are the same thing. After all, helping one person instead of 10 (less good than the maximum available good) is not the same as going to the 10 people and stealing all their money (active harm).

Despite recent events, real-world scenarios aren’t always that straightforward.

The Risk of Causing Harm

Will MacAskill opens his book Doing Good Better with an example of an invention called the PlayPump. If you haven’t read the book, I recommend reading his full account of this scenario. In summary: a charity replaced existing water pumps in many African villages with a pump that resulted in more work for villagers, which was also more expensive and more difficult to fix. This example is used to show how trying to do good can actually do harm.

In my opinion, the PlayPump example provides a strong argument for testing philanthropic interventions before widely rolling them out. We should ask people what they need most, and what would help them most. We should pilot interventions on a small scale to see if they make sense in context.

In short, we should not assume that we know everything.

At this point, it’s worth asking whether it actually matters that I think not doing the most possible good is different from doing harm, while someone else thinks the two are the same. Up until this point, we seem to be coming to the same conclusions: helping more people is better than helping fewer people, piloting philanthropic interventions to see if they work is good, stealing people’s money is bad.

Ah… but wait.

Would we both believe that stealing people’s money is bad?

If I were a person who believed that not doing the most possible good is the same as doing harm, then I could plausibly believe that stealing 10 people’s money (active harm) is justified if I use that money to do more good than those 10 people were going to do (because those people, by not doing the maximum good with their money, are also doing harm). I’ve seen some consequentialist thought experiments that would work out in favor of stealing the money if enough people were helped as a result.

Most of us intuitively believe that this is a problem. “Two wrongs don’t make a right,” as my grandmother would have said. But philosophers tend to loathe what they call “common sense morality”, so they would not consider my grandmother’s argument very convincing.

This is one of the reasons I started studying philosophy. People doing things in the real world that actually cause harm, in the name of doing good, based on a theoretical moral position or a philosophical thought experiment, scares me. There’s a real risk to real people when philosophical (and economic) theories get taken to weird extremes and then applied in real-world policy, and I want to mitigate that risk as much as possible.

One way to mitigate the risk of causing harm is to draw a line well before we reach weird extremes.

We’ve been arguing about moral philosophy since Socrates, so we may never have universal agreement on it. However, the lack of universal agreement on philosophical theory is not a reason not to help anyone. We do know some things about helping people, even if we don’t know everything.

While I am happy to report that I am still not a nihilist, this brings us back to the problem that worries me in the post-FTX world:

Is the opposite of nihilism any better than nihilism?

Let’s define nihilism simply: we don’t know anything. If this is our definition of nihilism, then the opposite of it would be something like: we know everything.

If alarm bells are now ringing in your mind, that is because they should be.

The Dangers of Maximization

There is a central problem across moral philosophy, which is not unique to philosophy, but we’ll stay on task for now. For lack of a better term (since there probably is a better term that I just haven’t learned yet), let’s call this the problem of drawing a line.

The thing about moral philosophy is that the extreme end positions of many theories are very strange. They often include things that would not work in the real world, would result in a world that real people would not want to live in, and/or would provide the foundation for an excellent science fiction novel.

It is easy to see why some constraints might be needed on a central mantra like: maximize welfare at all costs. Of course there is the age-old problem of figuring out what welfare is exactly, but more importantly, there are some costs that real people in the real world would see as too high.

Do philosophers have any incentive to draw a line to avoid these costs?

Unfortunately, some philosophers may not see any incentive to moderate or constrain their position. After all, if you are a philosopher who cannot be fired because you have tenure, why would you voluntarily give up ground on the moral theory you’ve spent your entire career trying to promote and defend?

If you’re wondering: “Why does it matter if a few Ivory Tower philosophers defend science fiction moral positions that are way too extreme for the real world and counterintuitively seem morally dubious?” then you are asking a good question. Asking why things matter is always important, especially in philosophy. The short answer to this question is that a movement like EA has the funding and power to take its philosophical thought experiments out of the philosophy journals and into the realm of real-world policy.

And this isn’t necessarily a good thing.

We Don’t Know Everything

One of the things that initially interested me about EA, which I highlighted in the quote above, is that EA philosophers seemed more willing to admit that they do not know everything than other philosophers and economists that I listen to and read. This struck me as pragmatic, and as a non-philosopher who spent a lot of time working in the real world before having the opportunity to think about philosophical theory all day, I tend to like pragmatic. But after we admit that we don’t know everything, what then?

From my current position as barely-a-philosopher-in-training, I tend to think we have a better opportunity to do some good in the world, and to avoid doing harm, if we’re willing to back away from maximization. I’m not going to say common sense is infallible, but I’m also not going to say it is useless. Backing off the quest to maximize mitigates the risk of doing harm while trying to do good, by applying some constraint on how far people should go, and also by allowing us to pursue doing good in several different areas in case we are wrong about one. To avoid doing harm on the way to doing good, lines have to be drawn… and they should be drawn well before anyone reaches the at all costs part of a mantra like maximize welfare at all costs.

To be very clear: maximize welfare at all costs (or more generally the ends always justify the means) is an idea that can lead to a lot of harm. This is an example of extreme maximization, and it should not be anyone’s position in the real world. If you are a public-facing EA philosopher whose ideas have the power to impact real-world policy, you also can’t afford to argue for this position, even in a thought experiment.

Another dose of common sense to annoy the philosophers before we close:

With great power comes great responsibility.

I’ve always thought that EA has a huge opportunity to improve the world. EA’s interdisciplinary, international audience certainly has made — and could continue to make — progress on some of the world’s most pressing problems. EA still has the power to do some good. Maybe not the most good, but some good.

Is that not a worthwhile goal?

Some Good

When I first started this publication, I was hoping to title it Good. Unsurprisingly, there was already a publication called Good on Medium, so I had to think of something else. My Southern manners must have combined with my imposter syndrome, because I immediately rejected More Good and The Most Good. They sounded… arrogant. Honestly, I wasn’t sure I could deliver on the promise of those titles.

So I settled on Some Good. At the time, it seemed like a weirdly noncommittal title for a publication. But after a semester of studying philosophy, I’m very happy with it.

Because after a semester of studying philosophy, I think that “we know nothing” is not true.

But “we know everything” is also not true.

I’m hoping to do some good here in the middle, between those extremes.

References & Notes

MacAskill, William. Doing Good Better (2015). https://www.effectivealtruism.org/doing-good-better

The Vox Future Perfect team’s coverage of FTX and EA in the fourth quarter of 2022: 16 total articles, all worth a read

  • I read this article by Dylan Matthews after writing the post above, and it takes a more structured approach to many of the same points I cover here — definitely worth a read

If you want to read more from the EA community, the Effective Altruism Forum is a good place to start
