Utilitarianism is not unfair
Why exploiting minorities is not utilitarian
This is the sixth in a series of articles defending a compatibilist interpretation of utilitarianism, one that can be reconciled with all major moral theories. In the previous article, I explained why some types of suffering deserve more attention than others.
Utilitarianism is the moral philosophy that promotes the greatest happiness for the greatest number. One of the objections critics raise against it, however, is that it justifies unfairness. Fairness or justice can be interpreted in a couple of different ways. One type of unfair situation is one in which a person is treated in a way they don’t deserve, while other people in similar situations receive privileged treatment. This is best illustrated by the Sheriff Scenario, by H. J. McCloskey.
Suppose that a sheriff were faced with the choice either of framing a Negro for a rape that had aroused hostility to the Negroes (a particular Negro generally being believed to be guilty but whom the sheriff knows not to be guilty) — and thus preventing serious anti-Negro riots which would probably lead to some loss of life and increased hatred of each other by whites and Negroes — or of hunting for the guilty person and thereby allowing the anti-Negro riots to occur, while doing the best he can to combat them. In such a case the sheriff, if he were an extreme utilitarian, would appear to be committed to framing the Negro.
— H. J. McCloskey, 1957. An examination of restricted utilitarianism.
Another type of unfair situation is one in which there is extreme inequality. In a utilitarian society, some say, a ruling mob is permitted to torment a minority as long as their suffering is compensated by the pleasures of a sufficiently large majority. This is well illustrated by Ursula K. Le Guin’s short story The Ones Who Walk Away from Omelas, which describes a summer festival in a utopian city called Omelas, where the inhabitants live in a perpetual state of bliss at the expense of a single child who lives in misery. Even if the child is picked randomly, in a process that is “fair” in the sense that nobody is given priority, most of us still feel that such a situation is deeply immoral.
A number of solutions have traditionally been proposed to address this apparent flaw of utilitarianism. The first is rule utilitarianism, already discussed in a previous article. Rule utilitarianism, unlike act utilitarianism, focuses not only on the effects of an individual action, but on the effects of adopting that action as a norm. Framing an innocent person may seem like the utilitarian thing to do when you think of that action in isolation, but if you think about accepting this type of action as a rule, it is clear that it would have very bad long-term consequences.
As I have argued, I prefer to think of the morality of our actions not necessarily in terms of the rules they break or respect, but simply in terms of their long-term consequences. According to my long-termist interpretation of utilitarianism, the best thing to do is always that which most contributes to maximizing the quality of the experiences of sentient beings, from now until the end of time. A society in which the rights of individuals are routinely disrespected in order to maximize the happiness of certain communities is a society in which individual rights don’t really mean much at all, and we all know such societies are very far from utopias.
Another solution is negative utilitarianism, the view that the minimization of suffering should always take priority over the maximization of happiness. Essentially, negative utilitarians consider that negative experiences weigh more than positive ones in the utilitarian calculus. If an action causes a person to suffer at a certain intensity for a certain number of hours, you cannot compensate for this and make the action acceptable by giving the same amount of pleasure at the same intensity to another person, or even to that same person later, because suffering weighs more than pleasure. You would have to give them more pleasure to compensate, and sometimes no amount of pleasure will be enough to offset a sufficiently terrible amount of suffering.
This would mean, for example, that while punching one person to prevent two people from being punched may be justified, punching someone so that a million people get a day at a spa resort is not. Causing suffering to prevent greater suffering is permissible, or perhaps even our duty. Causing suffering to increase the happiness of people who are not suffering, however, is almost always wrong, no matter how many people would benefit. Negative utilitarians may disagree on whether it is always wrong to cause pain to obtain pleasure. Some may say for example that giving somebody a mild shock may be justified if the rest of the world population is guaranteed one hour of bliss. They mostly agree, however, that torturing someone for the rest of their lives is never justified, no matter how many will benefit.
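The asymmetry described above can be made concrete with a toy calculus. The sketch below is purely illustrative: the “units” of pleasure and suffering, the function name, and the linear weight of 10 are all assumptions of mine, not anything the theory itself fixes (as noted later, negative utilitarians disagree on how heavy the weight should be):

```python
# Toy negative-utilitarian calculus. All magnitudes are hypothetical.
SUFFERING_WEIGHT = 10  # assumed: one unit of pain outweighs ten of pleasure

def net_value(pleasure, suffering, w=SUFFERING_WEIGHT):
    """Weighted utility: pleasure counts once, suffering counts w times."""
    return pleasure - w * suffering

# Punching one person (say, 5 pain units) to prevent two identical punches:
punch_one = net_value(pleasure=0, suffering=5)    # -50
do_nothing = net_value(pleasure=0, suffering=10)  # -100
print(punch_one > do_nothing)  # True: causing less suffering to prevent more

# Punching one person so a million people each get a spa day (1 pleasure unit):
spa_day = net_value(pleasure=1_000_000, suffering=5)
print(spa_day > 0)  # True: under a finite weight, a large enough crowd
# eventually tips the balance
```

Notice that the finite multiplier fails to capture the spa-resort intuition: a big enough crowd always wins. That is precisely why stricter negative utilitarians adopt a lexical priority, treating the weight as effectively infinite, rather than a finite multiplier.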
The problem with negative utilitarianism, however, is that for many it seems to have counterintuitive implications. One of these implications is anti-natalism, the view that the asymmetry between pain and pleasure is so deep that it makes it immoral to have children or basically create any form of sentient life. If a die roll could give you either a year of extreme bliss, if the result is greater than one, or a year of absolute agony, if the result is exactly one, would you roll that die? If your answer is no, even if you are allowed to increase the number of sides on the die, then by having children you are forcing a gamble on someone even though you don’t think that gamble is worth taking yourself.
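The die-roll gamble can also be stated in expected-value terms. A minimal sketch, again under an assumed finite suffering weight (the anti-natalist refusal to roll, no matter how many sides the die has, corresponds to treating that weight as effectively infinite):

```python
# Weighted expected value of the procreation die roll: one face out of
# n_sides yields a year of agony, the rest a year of bliss. The
# magnitudes and the weight w are illustrative assumptions.

def gamble_value(n_sides, bliss=1.0, agony=1.0, w=10):
    """Weighted expected value of one roll of an n-sided die."""
    p_agony = 1 / n_sides
    return (1 - p_agony) * bliss - p_agony * w * agony

print(gamble_value(6))    # negative: decline the six-sided gamble
print(gamble_value(100))  # positive: with a finite w, enough sides flip it
```

With any finite weight, adding sides eventually makes the gamble worth taking; refusing it regardless of the number of sides amounts to the deep lexical asymmetry that drives the anti-natalist conclusion.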
Another even more extreme implication is pro-mortalism, which is illustrated by the “benevolent world exploder” scenario, by R. N. Smart. If the imperative to end suffering is so powerful, not only would it be wrong to create new sentient beings, it would be justifiable to euthanize all life on Earth to end all suffering once and for all, even if some people happen to be having a good time and will be prevented from continuing their pleasurable life.
Many negative utilitarians are willing to bite the bullet and embrace anti-natalism. Some even embrace pro-mortalism, although they know that in practice there is no way they could actually succeed in euthanizing all sentient life in the universe. So are anti-natalism and pro-mortalism inevitable entailments that we simply have to come to terms with if we accept negative utilitarianism? Not necessarily. Although negative utilitarians agree that suffering weighs more, there is no consensus on how much more it weighs. Perhaps a universe with some suffering and a lot of pleasure is better than a universe with no experience at all, as long as the total amount of suffering, collectively and individually, is mild enough.
Besides, even if we assume that any amount of suffering, no matter how mild, really does weigh infinitely more than pleasure, that still doesn’t give us reason to embrace anti-natalism or pro-mortalism in practice, even if we accept them in theory. The secret is to look at the big picture and think as a rule utilitarian, not an act utilitarian. As the negative utilitarian philosopher David Pearce argues:
Is the most effective way to minimise, prevent, and ultimately abolish suffering (1) human extinction via radical anti-natalism? Or (2) genetically reprogramming the biosphere?
In other words, even if anti-natalism seems to be a logical entailment of negative utilitarianism, this doesn’t mean it’s a viable strategy in practice. Humans have an instinct to have children. It is a powerful instinct, and it is to some extent a result of our genetics. If individuals with low broodiness (i.e. a low desire to procreate) remove themselves from the gene pool, in a few generations all we will have done is increase the proportion of people who are so broody that they are unlikely to be persuaded by anti-natalist arguments. It might be more effective, therefore, to focus on other strategies to minimize suffering. Pro-mortalism is even more ridiculous when you think about it in pragmatic terms. There’s simply no way we could euthanize all sentient life in the universe, or even on Earth, without risking a catastrophe that would fail to kill us and would instead set us back hundreds of years in our fight against suffering.
Another example of a bizarre and questionable thought experiment is the “utility monster” by the anti-utilitarian philosopher Robert Nozick. According to his original version of the experiment, if a creature feels pleasure with sufficiently greater intensity than the rest of us, we would be morally justified in sacrificing the well-being of everyone else just to please this creature and thereby increase the overall happiness of the universe. A negative utilitarian could respond that this is not true because suffering counts more, but we could easily adapt the scenario to attack negative utilitarianism as well. What if the monster lives in a constant unbearable state of misery that can only be alleviated by eating humans?
Honestly, I would say we should give this monster a quick and painless death as soon as possible. What if the monster is immortal? It is hard to see how this question can be relevant. If you’re allowed to fabricate bizarre thought experiments with no constraints, you can attack any moral theory. Nozick was a minarchist libertarian, against welfare, against taxation, and in favor of a minimal state. His moral axiom was not “maximize happiness” but “maximize freedom”. The only constraint on this axiom is another axiom, the non-aggression principle, according to which nobody has the right to aggress against anyone else.
Sure, freedom is great, and a society where everybody is free and nobody attacks anyone sounds pretty good. But let’s say one day a ship sinks and two men and a child end up stranded in a lifeboat. One man was lucky and managed to grab enough food for days, while the others have none. Let’s say, for the sake of argument, that they all know there is enough food to keep everybody alive until they are rescued, provided they ration it. If the man who has the food refuses to share, however, the others will die. Is it acceptable, then, to attack him and steal part of the food? According to Nozick’s radical libertarianism, it seems it would not be.
Alternatively, what if you stop at an accident by the side of the road and there’s a victim who’s bleeding to death and begs for your help? There’s a hospital a few kilometers away and you could easily take them there. Is it morally permissible not to help? Again, according to extreme libertarianism, it would be permissible to leave the accident victim to die. And I didn’t even need to resort to monsters, sci-fi simulation machines, or surgeons with superhuman skill at predicting the outcome of risky gambles and getting away with murder. Negative utilitarianism may have strange hypothetical implications when you think on a very abstract and theoretical level, but it has virtually zero strange implications in the real world.
The other commonly proposed solution to the problem of fairness is the one offered by John Rawls in A Theory of Justice. In this book, frustrated by the constant amendments philosophers keep adding to utilitarianism in order to make it compatible with notions of justice and other seemingly non-utilitarian moral intuitions, Rawls attempts to find an alternative moral theory with a built-in solution to the problem of injustice. The resulting theory is considered to have revived the social contract tradition, which claims an action is immoral if it violates the social contract we have either explicitly or implicitly accepted by living in a society.
Rawls’ version of contractarianism is illustrated by his “original position” thought experiment. Imagine you are part of a committee of spirits who are in charge of defining the rules governing humanity. Once you manage to agree on a set of rules, you will each be born in a random position in this society. From behind this “veil of ignorance”, you wouldn’t know whether you’d be born as the prince of Sweden, or as an unwanted blind child in a hunger-stricken village in Sub-Saharan Africa. What rules would you choose in this situation? According to Rawls, the rules agreed upon by this committee would be by definition the fairest rules possible. Living according to those rules, therefore, would be what it means to be moral.
Although Rawls presented his theory as an alternative to utilitarianism, does it really conflict with it? Not really. It would conflict with utilitarianism if, given a specific moral question, utilitarianism gave one answer and Rawlsian contractarianism gave another. I don’t believe there is any such question. Furthermore, Rawls claims that his moral system would entail a set of rules that focuses on preventing the quality of life of the worst-off from ever dropping below a minimum threshold.
Justice then requires that any inequalities must benefit all citizens, and particularly must benefit those who will have the least. Equality sets the baseline; from there any inequalities must improve everyone’s situation, and especially the situation of the worst-off. These strong requirements of equality and reciprocal advantage are hallmarks of Rawls’s theory of justice.
— Leif Wenar, 2017. Stanford Encyclopedia of Philosophy.
In what way is this different from negative utilitarianism? As far as I can see, both theories are just different ways of describing the same innate moral intuition: suffering is bad, no matter who is suffering. It is impossible to escape pain and suffering when one talks about morality. On what basis will Rawls’ ghostly committee choose the rules of the society they’re going to be born into, if not the fear of suffering and the desire for pleasure? Rawls’ thought experiment is beautiful because it forces us to adopt an impartial position, but utilitarian philosophers have also emphasized the importance of impartiality in the utilitarian calculus.
So far we have only been considering the ‘Good on the Whole’ of a single individual: but just as this notion is constructed by comparison and integration of the different ‘goods’ that succeed one another in the series of our conscious states, so we have formed the notion of Universal Good by comparison and integration of the goods of all individual human — or sentient — existences. And here again, just as in the former case, by considering the relation of the integrant parts to the whole and to each other, I obtain the self-evident principle that the good of any one individual is of no more importance, from the point of view (if I may say so) of the Universe, than the good of any other;
— Henry Sidgwick, 1874. The Methods of Ethics.
Classical utilitarianism may leave some open questions when it comes to fairness, but that doesn’t mean it tolerates unfairness. It only means it doesn’t draw a sharp boundary between fair enough and intolerably unfair, leaving this question open for different negative utilitarians to answer. The original position scenario is useful because it gives us a helpful heuristic for establishing where exactly the maximum threshold of suffering we should allow in a society lies, answering a question traditionally left open by negative utilitarians. If the goal of utilitarianism is to maximize happiness without letting any minority, no matter how small, suffer too much, then Rawls’ thought experiment is best viewed as a useful tool for achieving a utilitarian purpose, making Rawlsian ethics complementary, not contradictory, to utilitarianism.