Stephen Casper, thestephencasper@gmail.com

Euconoclastic blog series

This is your brain on Eutilitarianism

Eutilitarianism is Euconoclastic

Euconoclastic (adj.) \yu̇-ˈkä-nə,-klast-ic\: iconoclastic in a good and virtuous way.

It’s ridiculous that some people criticize utilitarians for wanting to tile the universe with rat brains on heroin. I highly doubt that rat brains would be the optimal hedonic engine for such a task, nor heroin the drug.

“The ends don’t justify the means!”

“It’s so cold blooded!”

“There’s more to morality than body count!”

“That’s just not how it works!”

“It’s not about what you do — it’s about what you justify!”

“But that’s UTILITARIANISM!”

These are all things that people have told me about utilitarianism at some point or another. It doesn’t really bother me though. Most people may disagree with utilitarianism, but we humans sometimes really miss the mark.

When it comes to the important issues, we’re probably not going to make much progress by following the crowd. Our intuitions evolved to facilitate primitive social coherence in the Great Rift Valley of Africa. What are the odds that default-mode human reckoning is going to guide us toward peaks on the moral landscape? That’s something completely different from what it evolved to do.

If most people agree on something, maybe that should be a red flag. It was Mark Twain who wrote, “Whenever you find yourself on the side of the majority, it’s time to pause and reflect.”

And then we have utilitarianism. Richard Hare writes:

“The most common trick of the opponents of utilitarianism is to take examples of such thinking, usually addressed to fantastic cases, and confront them with what the ordinary man would think. It makes the utilitarian look like a moral monster. The anti-utilitarians have usually confined their own thought about moral reasoning (with fairly infrequent lapses which often go unnoticed) to…the level of everyday moral thinking on ordinary, often stressful, occasions in which information is sparse. So they find it natural to take the side of the ordinary man in a supposed fight with the utilitarian.”

John Stuart Mill described utilitarianism simply as “the greatest happiness principle,” which is very accurate. Utilitarians like things inasmuch as they promote happiness/pleasure/joy and prevent pain/misery/suffering. When I describe utilitarianism, I like to break it down into hedonism, consequentialism, and aggregationism. Hedonism is the theory that pleasure/happiness is the fundamental moral good and pain/suffering is the fundamental moral bad. Consequentialism is the theory that actions should be evaluated by the goodness and badness of their consequences. And aggregationism is the theory that we should maximize moral good for everyone irrespective of distribution.

Learn more about utilitarianism here. It’s a uniquely good resource.

And this other (short) post of mine might be good to read first.

Why am I on board? Let me talk about the parts individually: hedonism, consequentialism, and aggregationism. Along the way, we’ll jump haphazardly from principle to argument to thought experiment, crash-course-style, through this mess of philosophy that I hope you will find dense, thought-provoking, and very euconoclastic.

Hedonism

The Hedonic Thesis

While most people aren’t comfortable prima facie with the idea that all that matters is happiness and suffering, only a true fanatic would say they don’t matter at all. Still though, most people say that other things are good too, like truth, beauty, trust, virtue, following rules, [insert thing people tend to like here], etc. But I think that given the right frame of mind, these people might realize that they are really hedonists deep down. We value what we value because we think it’s good, positive, or desirable, and we ascribe bad, negative, or undesirable judgements to the opposites of those things. Given this, answering the question of value means answering the question of what is fundamentally good. I think the only answer to that question that can make any sense at all is happiness. Answer me this: have you ever not liked being happy? (I mean to include happiness of all kinds.) Has anything other than happiness ever made you feel good? Has anything you have ever found desirable not positively affected how you feel in some way?

What other than the quality of conscious experience could we intelligibly value? Whether it’s friendship, law, trust, beauty, virtue, or whatever else, if we have real reason to value it, it’s because it tends to affect our experience positively. And I’m not sure which, but I either observe, declare, or define that all purposiveness should be oriented toward some goal involving quality of experience.

What else would make any sense?

Naturalistic Hedonism

Hedonism fits well with our nature: at a certain level of biological abstraction, we’re just Darwinian sacks of self-replicating molecules, and we have minds that reward us for being in heuristically advantageous evolutionary states and penalize us for being in heuristically bad ones. Good feelings are just evolution rewarding us, and bad feelings are evolution punishing us. So in a sense, to be a hedonist is to ground value in the quality of experience and in our nature. To value anything else is to invent a value in a way that I think could only be seen as substantively more arbitrary. And it usually involves declaring that value’s sanctity and sophistically speaking of it reverently enough to get laypeople to nod their heads.

And maybe you want to have an alternate value system — that’s fine. But there’s one thing you have to recognize if you do: to the extent that you aren’t a hedonist, you do not care about people’s wellbeing.

Experientialism

Experientialism is the idea that in order for something to be good or bad for a person, it needs to affect their experience. For example, could it harm a dead person to, say, slander them (assuming death is like nothingness)? I’d say of course not. One might protest, but I think they would be making the mistake of projecting an outside viewpoint onto the dead person. Thinking badly of someone from the outside doesn’t harm them on the inside if they never know or find out. To them, the slander is literally not real because it was never known to them and never could affect them in any way. Note that this isn’t just a moral perspective but also a metaphysical one.

Deus Ex Machina, Literally

The philosopher Robert Nozick asked us to imagine a machine that could make someone feel extremely happy at no risk to them. He then tried to use this to argue against hedonism. He saw it as obvious that even if a person’s real life would be less happy than being hooked up to a happy machine, it would be better for them to live their real life instead. This position hinges on ascribing some sort of value to the “realness” of an experience, but how could we reasonably go about doing that? To the person in the machine, their experiences are literal reality, and what is unknown to them is literally not real to them. So let’s not conflate negative judgements about leaving one’s life, or stigmas about easy pleasures, with genuine happiness. Also consider the opposite case: if artificial happiness isn’t so desirable, then it would probably follow that artificial pain isn’t so bad. But imagine this machine were turned to a pain setting and used as a torture device. If someone were hooked up to this pain machine, would we be so audacious as to say that their pain wasn’t morally undesirable because it was artificial? What type of value worth valuing would this appeal to?

It’s Okay if Everyone Hates You

Next, there’s a famous thought experiment about a person who has coworkers, a family, and people they consider to be good friends, and this person is happy. But let’s say that everyone they know secretly loathes them. Should we feel bad for them? Is the goodness of their life negated at all by the fact that people hate them? I’d definitely say not, and I think the problem with people who disagree is that they hear about this person who has a good life, and then they conflate the good life and the bad perceptions in their mind such that the heuristic, system 1 brain spits out the impression that this is bad for the person. Then confirmation biases give them an urge to affirm the impression (just my hypothesis, though. I won’t pretend to be a psychologist.) But this fails to recognize that the intrinsic goodness of this person’s happiness isn’t dampened by things they don’t know about and that don’t affect them. And as long as moral good pertains to wellbeing, the factors of the goodness of this person’s life stop where this person’s experience stops.

I’m not Condoning Bestiality, but…

This is, to say the least, an interesting thought experiment. Allow me to explain it using an amazing excerpt from Fred Feldman. He asks us to imagine someone named Porky, a human who:

“…spends all his time in the pigsty, engaging in the most obscene sexual activities imaginable … Porky derives great pleasure from these activities and the feelings they stimulate. Let us imagine that Porky happily carries on like this for many years. Imagine also that Porky has no human friends, has no other sources of pleasure, and has no interesting knowledge. Let us also imagine that Porky somehow avoids pains — he is never injured by the pigs, he does not come down with any barnyard diseases, he does not suffer from loneliness or boredom.”

And get this — some people make the argument that Porky’s life isn’t good because there’s no variety in his activities, to which Chris Heathwood replies:

“We can stipulate that Porky does all different things with the pigs, that he does these things on all different farms with all new scenery, that he eventually moves on to other animals, that he eventually starts supplementing the experiences with bondage equipment and drugs (all the while managing never to get bored, addicted, or filled with despair).”

I hope you laughed at this uncomfortably like I did. But now to the point — does Porky have a good life? I think so. It may disgust me, but why would that negate his happiness exactly? Chris Heathwood explains more:

“A life filled with only ‘base pleasures’…still rank[s] high in terms of welfare, but we are inclined to judge it unfavorably because it ranks poorly on other scales on which a life can be measured, such as the scales that measure dignity, or virtue, or achievement.”

Some would argue that Porky experiences “lower” pleasures and that his life isn’t all that good. That’s plausible. I definitely believe that not all pleasures are equal, but I do believe that all pleasures are commensurable. Many people, however, don’t. One might think that no amount of a lower pleasure such as Porky’s could outweigh even a single unit of a higher-order pleasure (maybe stuff like love, friendship, appreciating beauty, or whatever). But if so, consider the opposite case with pain. If base pleasures can’t outweigh higher pleasures, this would suggest that base pains can’t outweigh them either. But would no amount of a base pain like physical torture be so bad as to outweigh a higher pleasure?

Why I’d Like to Stick a Wire in Your Head

Another objection to hedonism is made by preferentialists, who think that instead of happiness being good, satisfied preferences are good. I think that this is a reasonable position, but I still find myself on the hedonistic side of this nuanced dispute. Preferentialists really split hairs. They only meaningfully diverge from hedonists when a preference and a pleasure conflict. I’m not convinced that there’s a useful distinction to be made here at all, but here are three potential examples of conflicts to mull over. First, imagine a person who loathes country music yet finds themselves tapping their foot and humming along to a country song one day. Second, imagine a person who wants to harm themselves and does so, causing themselves lots of pain. Or third, imagine a very anti-drug person who accidentally takes a powerful drug that makes them very happy. Let’s assume that in cases like these, there is a genuine divergence of preference and pleasure (like I said, an assumption I’m not sure is useful, but let’s run with it). The hedonic thesis still gives us a compelling argument: pleasure is the fundamental good, and preference may be a very good correlate, but it isn’t the same thing (supposedly). I see why people might look at these situations and think that preferences ought to dominate our judgements, but I urge caution here. I think that the preferentialist view on these situations might be the result of missing the point. The question to ask is whether there is morally desirable happiness or pain despite preference in these situations — not whether the moral value of these situations is defined entirely by the pleasures despite preferences. For example, the country music hater might feel disgust upon noticing that they hummed to the song, and the whole experience might be net-negative for them, but this does not imply that the bit of pleasure that went along with the song is completely morally irrelevant. Assuming a genuine divergence of preference and pleasure here, I believe that the little bit of pleasure still counts for something.

So um, yeah. If I could stick a wire into the back of your head that would make you sit in a chair all your life, drooling, but constantly experiencing orgasmic euphoria, I probably would — even if you really didn’t want me to, and I don’t even think I’d feel bad about it.

There Are Many Types of Hedonotrons

A final thought on hedonism: teleologically, biologically, chemically, etc., nonhuman animals also feel pleasures and pains (probably — I admittedly have sympathies for criteria that reserve moral status for agents that can pass the Turing Test, but I won’t discuss that now). That said, I probably couldn’t see myself being convinced that a fish’s pains or pleasures are similar to a human’s. However, if we’re hedonists, the conclusion that nonhuman animals matter at least somewhat is compelling, and this has profound implications for how we live, develop, eat, and manage the biosphere.

Consequentialism

The Consequentialist Thesis

To be a consequentialist is to say that the whole point of morality is not to pay homage to some imaginary set of rules or standards; it’s to make the world tangibly better. I’m not trying to trick anyone into consequentialism by saying this. If you agree with this, you are a consequentialist.

Why You Should Clean Up Litter

A good way to think about consequentialism is in terms of what’s called the act/omission distinction. Let’s say that a person litters in a park and then a second person walks past the litter and fails to pick it up. Which action is worse — littering or failing to pick up litter? The intuitive response to this is that littering is worse, but I disagree. From a pragmatic standpoint, these actions are exactly the same: the result was litter on the ground compared to a litterless counterfactual. The amount of harm this litter will cause isn’t affected in any way by what anyone thought or intended.

I think the intuitive response is a mistake because it doesn’t answer the question that was asked. The question was which action is worse, not which person is worse, but because our brains seem to have a tendency to conflate different aspects of a moral situation (again, I’m not a psychologist, but I think this is a good framework for understanding it), we’re inclined to say the littering action is worse. This makes perfect sense if you think about the purpose and origin of our moral intuitions. Morality evolved as a set of behaviors that allow selfish individuals to reap the benefits of cooperation, and it makes social sense to criticize littering more than failing to pick it up because litterers are the root of the social problem, and it wouldn’t conform to notions of social fairness to expect some people to pick up the litter of others. And while I think it would only be reasonable to say that the litterer deserves harsher social judgement than the bystander, the actions are still exactly the same as far as we’re concerned.

The Obligatory Section on Kantianism

Now allow me to introduce deontology — the natural enemy of consequentialism. A deontological theory is one that says that some types of actions are categorically permissible and some are categorically impermissible. And to talk about deontology, we need to talk about Immanuel Kant. Kant is one of the most influential philosophers ever, which I think is unfortunate because he was a very conservative Christian, a racist, a sexist, a bigot, and a sophist. To sum things up (this explanation is not controversial): Kant believes that instead of happiness, the only intrinsic good is a good will; that to act morally and rationally one must act in accordance with universal laws; that these laws have no exceptions; that these laws involve treating persons as ends and not means; and that these laws are universally and rationally derivable from first principles. That’s a lot to unpack, but let me condense it down to the key takeaways: Kant believes that in order to make sure our intentions are pure, we need to act according to rules that 1) ought to be able to apply to everyone always and 2) never use people as a means.

There are some big problems here. First, Kant gives no reason for us to value a “good will.” Have a look at Kant’s reasoning, annotated by yours truly. This is a little long, but please indulge me.

“There is nothing it is possible to think of anywhere in the world, or indeed anything at all outside it, that can be held to be good without limitation, excepting only a good will. Understanding, wit, the power of judgment, and like talents of the mind, whatever they might be called, or courage, resoluteness, persistence in an intention, as qualities of temperament, are without doubt in some respects good and to be wished for; but they can also become extremely evil and harmful [This begs the question. Also, what does he mean by “evil and harmful”? He’s already moralizing with no foundation.], if the will that is to make use of these gifts of nature, and whose peculiar constitution is therefore called character, is not good [This is an assertion, not an argument. There is no reasoning here.]. It is the same with gifts of fortune. Power, wealth, honor, even health and that entire well-being and contentment with one’s condition, under the name of happiness, make for courage and thereby often also for arrogance, where there is not a good will to correct their influence on the mind, and thereby on the entire principle of action, and make them universally purposive [more question begging]; not to mention that a rational impartial spectator can never take satisfaction even in the sight of the uninterrupted welfare of a being, if it is adorned with no trait of a pure and good will [He’s saying that being happy when others are happy means that good will is the only fundamental value. This doesn’t follow.]; and so the good will appears to constitute the indispensable condition even of the worthiness to be happy. [Here, Kant appeals to happiness in his argument that the only thing of value is something other than happiness.] Some qualities are even conducive to this good will itself and can make its work much easier, but still have despite this no inner unconditioned worth, yet always presuppose a good will, [The only way that his use of “always” could be legitimate is if he’s arguing a tautology that already assumes what he’s trying to show.] which limits the esteem that one otherwise rightly has for them, and does not permit them to be held absolutely good. Moderation in affects and passions, self-control, and sober reflection not only are good for many aims, but seem even to constitute a part of the inner worth of a person; yet they lack much in order to be declared good without limitation (however unconditionally they were praised by the ancients) [Kant seems to forget that he’s formulating a framework for judging actions, not judging people.]. For without the principles of a good will they can become extremely evil [question begging], and the cold-bloodedness [rhetoricizing] of a villain makes him not only far more dangerous but also immediately more abominable in our eyes than he would have been held without it. [Again, he’s conflating judging people and judging actions.] The good will is good not through what it effects or accomplishes, not through its efficacy for attaining any intended end, but only through its willing, i.e., good in itself, and considered for itself, without comparison, it is to be estimated far higher than anything that could be brought about by it in favor of any inclination, or indeed, if you prefer, of the sum of all inclinations [Not an argument — just an assertion again. And consider this: what would you, with your good intentions, rather have happen? Someone with a good intention makes a mistake and does something horrifically bad, or someone with a bad intention makes a mistake and does something extremely good?]. Even if through the peculiar disfavor of fate, or through the meager endowment of a stepmotherly nature, this will were entirely lacking in the resources to carry out its aim, if with its greatest effort nothing of it were accomplished, and only the good will were left over (to be sure, not a mere wish, but as the summoning up of all the means insofar as they are in our control): then it would shine like a jewel for itself, as something that has its full worth in itself. Utility or fruitlessness can neither add to nor subtract anything from this worth [To the extent that you disagree with this last sentence, you’re not a deontologist]. It would be only the setting, as it were, to make it easier to handle in common traffic, or to draw the attention of those who are still not sufficiently connoisseurs, but not to recommend it to connoisseurs and determine its worth.”

Make no mistake: this is from the first few pages of Kant’s book, Groundwork for the Metaphysics of Morals, and it is the axiological foundation of his ethical philosophy. Kant’s morality and legacy are built upon this single excerpt, as are mountains of theory and ideology.

Second, let me touch on this Kantian notion of treating persons as ends and never as means. That sure sounds nice — it seems bad to use people. But think about it more. By this, Kant means that we can never violate or use a person as a means to anything, regardless of circumstance, even if that end would involve protecting that very person. By “treating persons as ends,” Kantians really mean (and I’m not arguing in bad faith here — this is not disputed) never treating a person as a means for any reason ever (if you’re wondering what precisely deontologists mean by “means,” that’s a great question that they tend to struggle to answer). But how exactly are we to respect people, their freedom, and their rational agenthood by just never treating them as means? Would it not make more sense to treat the consequences pertaining to their welfare and the welfare of others as ends? Here’s a canonical example. Imagine there’s a murderer at your door who wants to kill your friend, who you know is hiding in the closet. The murderer asks you where the friend is. I think it would be not only immoral but insane to tell this murderer the truth, but Kantians hold fast and say that you must never lie because that would result in something they call a “contradiction in conception” (which I won’t take the time to explain here).

It’s well understood that Kantianism mandates telling a murderer where your friend is, but I don’t think it’s well appreciated. Kantians don’t seem to mind that telling the truth would cause your friend to die and result in their agency and person being violated. Kantians may profess to care about this person very much, but I struggle to see how any attitude that leads someone to prioritize keeping a lie from coming out of their mouth above the life of another person could be construed as “caring.” To emphasize just how much Kantians don’t care about consequences, here’s another example. Let’s say that some ultra-advanced aliens, for whatever reason, come to you and honestly give you two options. First, you can lie to your friend about something important, and if you do so, the aliens will rid the world of poverty, disease, hunger, war, and existential risks. Second, you can refuse to lie to your friend, and if so, these aliens will torture and murder you, your friend, and everyone else on the planet. This is a more extreme example than the murderer at your door, but it’s exactly the same type of problem. Should you refuse to lie? Should you cause that much pain, death, and suffering for the sake of keeping your hands clean and your sacrosanct intentions pure, or should you lie once and effect an immense amount of happiness? Let this serve to emphasize that to the extent that someone is a deontologist, they do not care about any consequences affecting any person at all — only about keeping their actions inside a set of arbitrary bounds.

Harvesting Organs

This is really just one of many variants of the trolley problem.

Sometimes when people are arguing against consequentialism, they’ll present an example about a situation in a hospital. Let’s say there are 5 people who are each dying of a different type of organ failure. Suppose that you could let them die, or you could, with certainty of success, take some random person, kill them, and use their organs to save the 5. Most people would say this is wrong, but I think they’re guilty of just using their system 1 and either choosing based on speculation about what would happen if this rule were used widely (subverting the premise), or tangling up the question at hand with uncertainty about the surgery’s success or stigmas around killing (missing the point). But this is just using feelings instead of thinking to judge the situation. I think we need to step back, think functionally, keep our values in sight, do the math, and realize that 5 is greater than 1 and always will be, no matter what intuition-manipulating situations we disguise this tradeoff in. Also, it’s worth considering: even if you react negatively to the 5-for-1 organ transplant situation, how would you react to a billion-for-1 transplant situation?

Guess for Success

Another argument people use against consequentialism is that it’s hard to predict things. This argument is flawed. It misses the point entirely and even fails on its own grounds. First, just assume that we could never predict anything ever. This would not imply that consequentialism is wrong; it would only imply that consequentialism is impractical. It doesn’t do anything to show that consequences don’t matter. Second, this is a bad argument in any case. Prediction with limited information can never really be perfect, but that doesn’t mean we can’t do better than random, even when considering the extended consequences of an action ad infinitum. Grant me two things: first, that consequences in the near future are tractably predictable. For example, I might know with near-certainty that I could make my friend happier if I helped them with a task they might have difficulty with otherwise. And second, grant me that good consequences are generally more likely to lead to more good ones than to bad ones. For example, lifting someone out of poverty makes them less likely to strain social services or commit crimes and more likely to help others. Given these two things, we can model the chain reaction of the extended consequences of a positive action as a random branching/walking process that begins with a positive seed and whose branches are more likely to sprout off on the side of the line they started on. This will have a positive bias. And if this theory doesn’t satisfy you, you could also just consider the arc of human progress in the past few thousand years. Thanks to technology and development, haven’t things generally and predictably been getting better?
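If it helps to see that branching model concretely, here is a minimal Python sketch (entirely my own illustration; the bias, decay, and fanout parameters are made-up assumptions for demonstration, not estimates of anything real):

import random

def simulate_chain(bias=0.6, decay=0.5, depth=8, fanout=2):
    # One positive "seed" action with value +1.
    total, frontier = 1.0, [1.0]
    for _ in range(depth):
        next_frontier = []
        for parent in frontier:
            parent_sign = 1 if parent >= 0 else -1
            for _ in range(fanout):
                # A follow-on consequence is more likely (bias > 0.5) to land
                # on the same side (good/bad) as its parent, and its
                # magnitude shrinks by `decay` each generation.
                sign = parent_sign if random.random() < bias else -parent_sign
                child = sign * abs(parent) * decay
                total += child
                next_frontier.append(child)
        frontier = next_frontier
    return total

runs = [simulate_chain() for _ in range(2000)]
print(sum(runs) / len(runs))  # positive on average: the seed's goodness persists

With any bias above 0.5, the average total comes out positive: even though individual chains can turn bad, the seed’s goodness propagates in expectation.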

Don’t be a Little Bitch

And one other argument that people will sometimes use against consequentialism is that it’s too demanding. There’s always going to be more litter to pick up, and we’re never going to run out of moral obligation, so consequentialism must be a bad theory. But this isn’t much of an argument against consequentialism — it’s brooding over the idea of having too many moral obligations. Who ever said this morality thing had to be easy? If someone’s afraid of concluding that they have obligations, I think they’re doing this whole ethics thing wrong. Morality isn’t about inventing a set of standards that we can easily follow in order to feel good about ourselves. It’s about doing the right thing.

Hitler, Lynchings, and Drunk Driving

Now let’s take a quick tangent. No discussion about consequentialism would be complete without talking about punishment. There are two main philosophical approaches to it. The first is the consequentialist one, saying that punishment is only good inasmuch as it is useful as a way to prevent people from doing bad things. The second is retributivist, saying that punishments are needed to right wrongs and balance out the scales of justice. Needless to say, I’m on the consequentialist side. See my discussion on hedonism and the consequentialist thesis for the argument there. But I think it’s strange that some people think punishment and suffering are morally good even when they aren’t instrumental in any way. Sure, it makes intuitive sense, but I think that’s only because humans are social animals that need structure and punishment to form functioning societies, and we heuristically see punishment as simply the right reaction to violators of certain standards. I think the retributivist approach is really just contrived nonsense about made-up universal scales of justice, meant to rationalize the social heuristic of punishment.

Consider some illustrative cases. First, let’s imagine that Hitler is reincarnated and is as evil as ever, but he’s stuck alone on a planet far away where he will never have contact with anyone ever again. And let’s even say that nobody knows about this but you. Would you rather that Hitler be happy or sad? If you’d rather that Hitler be sad, you’ll have a tough time justifying why suffering for suffering’s sake is good. What end would this serve? Why would paying homage to some sort of scales of justice be anything that we could intelligibly value? How could we even begin to justify mongering constructs of such scales? Such scales objectively do not exist.

Next, to test your thoughts on punishment, let’s imagine that you are a town sheriff dealing with an angry mob on the verge of a riot. You have two options. First, you could find an innocent scapegoat, blame them for whatever the mob is mad about, and allow them to be lynched. If you do this, the mob will disband and nothing else bad will happen. Or second, you could refuse to do so, in which case the mob will riot and, let’s say with certainty, will randomly kill multiple people (pick a number from 2 to a zillion). All else equal, what do you do? Again, I’m sure it’s obvious where I stand, and mine is at least a very coherent position. What’s hard is trying to justify the opposite. Why would killing one be worse than allowing multiple people to die? I think that most people who disagree with me would have some sort of justification involving how it’s categorically wrong to punish an innocent person. But why would it be comparatively less wrong to just let even more people die? I understand that intuitions here can be strong, but I think that anyone who would let the mob kill multiple people needs to think more about what side of the act/omission-distinction debate it’s most reasonable to take.

And for one last illustrative example, let’s say we have two people. One is a cold-blooded killer, and one is a generally nice person who is careless and prone to drunk driving. Let’s say that the expected number of people that the cold-blooded killer will kill over the course of their life is 1. And let’s suppose that the expected number for the careless person is more than 1 (again, pick any number you’d like). And let’s also suppose that if either of these people were put into jail for life, they would never kill anyone. Conditioning on this information, and keeping all else equal, who would you sooner have spend their life in jail? I’m not going to beat a dead horse here — I think it’s a mistake to put the cold-blooded killer in jail.

Aggregationism

The Aggregationist Thesis

Now onto the third pillar: aggregationism. The aggregationist thesis, I’d say, is that what matters is happiness itself and not anything about the distribution of happiness. I have another blog post in the works touching on aggregation a great deal (I have since posted it — see here). But let me discuss it in a more topical way here. Let’s say that we accept hedonism and consequentialism. We don’t yet quite have utilitarianism, because we may value happiness and want to effect it, but we may want to do so according to some special distribution. Utilitarianism would say to just maximize welfare, but what I will broadly call egalitarianism would say to sometimes prefer distributions that reduce disparity at the cost of a lower overall total.

Why I’d Like to Gamble Against John Rawls

Why would anyone ever want to do this? Well, John Rawls is the go-to egalitarian philosopher. He argues that in a society, welfare should be distributed across all people in a way that would be agreed to by everyone were they made ignorant of their social circumstance. This is called the veil of ignorance, or original position, argument. He says that the only rational thing to do would be to say that society should be roughly equal and that any inequalities that exist must exist because they are of advantage to the least well-off. For example, it’s okay for doctors to have lots of education because they use it to help people.

John Rawls is highly influential, but I’m not going to sugarcoat the fact that his argument is just really bad. Take this as an example. Imagine that we had a rational, self-interested agent, and we presented them with two buttons. Button A, if pressed, would give them 100 units of happiness with 99% probability and 0 units with 1% probability. Button B would give them 10 units of happiness with 100% probability. They are only allowed to press one button, once. What would they do? Surely, anyone rational would understand the concept of expected value, do the easy math, and recognize that the expected (average) value of pressing A is 89 units of happiness higher than pressing B, and they would take the gamble accordingly. Now what if this person were behind a veil of ignorance, and instead of pressing a button, they had to choose a society? Would they choose one with 99 people who are each 100 units of happy and one person with no happiness, or would they choose a society with 100 people who are each 10 units of happy? Rawls’ mistake is egregious. His argument supports utilitarianism, not egalitarianism. Objectivity is good, and a central tenet of utilitarian reckoning — not an excuse to be irrationally risk-averse.
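Here is the easy math, spelled out as a few lines of Python (a toy sketch of my own; the point is just that, behind the veil, your expected welfare equals the society’s average welfare):

# Expected value of each button for a rational, self-interested agent.
ev_a = 0.99 * 100 + 0.01 * 0  # = 99.0
ev_b = 1.00 * 10              # = 10.0
print(ev_a - ev_b)            # 89.0: A beats B by 89 units in expectation

# The same arithmetic, reframed as choosing a society from behind the veil:
society_a = [100] * 99 + [0]  # 99 people at 100 units, one person at 0
society_b = [10] * 100        # 100 people at 10 units each
print(sum(society_a) / len(society_a))  # 99.0, your expected welfare in society A
print(sum(society_b) / len(society_b))  # 10.0, your expected welfare in society B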

I think egalitarian intuitions are explainably wrong. Humans seem to naturally have some sort of propensity toward fairness. So do monkeys. Our sense of morality stems from our cooperative nature as a species. Built into cooperative social norms must be a sense of fairness and some quid pro quo. Nonegalitarian proposals undermine this, so it should make sense that we have an intuitive ethical sense of equality. But if we’re going to try to make this whole ethics thing anything more than a useless process of rationalizing intuitions, we need to be in the habit of putting away biases and thinking clearly.

Feed the Sensual Monster

Some people will argue for egalitarianism and against aggregationism with the example of a utility monster. Imagine someone who was somehow so evil and hedonically acute that they gained more happiness from the death of another person than that person would lose. Should we just feed this monster and let everyone else die? (Neglect the expected value of progeny here.) According to our intuitions, probably no. But according to expected value theory and utility maximization, yes. Seems awful? I don’t care. This type of hypothetical situation is so strange and unlike the real world that our intuitions can only be expected to fail us miserably. Utility monster stigma isn’t a reason — it’s a conclusion masquerading as one. To moralize against it, we’d need something better than “This would…uhhh…clearly be bad.”

The Not-at-All-Repugnant Conclusion

And now for the “repugnant conclusion”: something very euconoclastic. The example goes as follows: imagine any great world full of a lot of happy people. Let’s say, for example, that this world has 10 billion people who are all 100 units of happy (super happy). If aggregationist utilitarianism is right, then any such society would be considered worse than a society where everyone were less happy (say 50 units each) but there were many more people — enough to compensate (say 30 billion people). This can be extrapolated to the point of saying that worlds filled with ridiculously large numbers of people whose lives are barely worth living are better than any sort of world resembling the one we started with. I’m puzzled why people (or at least Derek Parfit, who coined the phrase) find this so repugnant. It’s just saying that the number of lives matters in addition to how good they are. I speculate that concluding in the other direction would be just as unpalatable to most people’s intuitions as the repugnant conclusion. The opposite of the repugnant conclusion is the idea that a world with a single person who is 101 units of happy would be better than the one we started with. In the middle is a set of complicated objective functions that couldn’t hope to be anything more than completely arbitrary (Derek Parfit tried and admittedly failed to justify such a theory). I think that calling the world with the single 101-units-happy person better is a fine conclusion, though not my preferred one (see the next section). But I think people’s intuition might not like that either. I don’t want to sound like a broken record, but intuitions about scenarios (especially extreme ones that involve numbers bigger than our minds evolved to grapple with) are practically just ways for bad philosophers to sneak their cognitive biases into sophistical conclusions, and they are awful reasons for adopting moral principles. Shut up and multiply!
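For concreteness, here is the arithmetic behind the example in a few lines of Python (my own toy numbers, taken straight from the paragraph above):

# Total happiness in each world from the example.
world_a = 10e9 * 100  # 10 billion people at 100 units each -> 1.0e12
world_b = 30e9 * 50   # 30 billion people at 50 units each  -> 1.5e12
print(world_b > world_a)  # True: the bigger, less happy world has more total happiness

# The extrapolated "repugnant" case: enough barely-worth-living lives
# (say 1 unit each) can always exceed any fixed total.
world_z = 2e12 * 1
print(world_z > world_a)  # True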

Average or Aggregate?

One theory competing with aggregationist utilitarianism is average utilitarianism. It says that what matters isn’t the sum total of happiness minus pain, but the average happiness of everyone, because average happiness best reflects what exactly is experienced. Here’s an example (though I’m not using this as an argument. I’m just using it to illustrate the point of view): imagine again a world with 10 billion people who were 100 units of happy each. The average utilitarian would, by appealing to the higher average, say that a world in which 100 people were each 101 units of happy would be better. And I think this is a perfectly reasonable viewpoint, but I ultimately stay on the aggregationist utilitarian side of the line because, first, I just like the appeal to a greater aggregate amount of happiness and the aggregationist thesis (there’s admittedly not much of a deeper justification for me here), and second, I don’t see any good reasons to ascribe any moral significance to anyone’s self (except maybe mine because of solipsism, but that’s not my focus here). Concepts of selves in general are just philosophically problematic. I really think it’s a hard sell to work with some notion of selves. Theories of consciousness are notoriously difficult, but whatever theory of consciousness involving information, integration, conceptualization, computation, feedback, etc., we use, I don’t see how we could coherently say that there’s some tractable concept of a self somewhere — even at the organismal level.
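To put the two scoring rules side by side, here is another toy sketch of my own (both worlds here are uniform, so the average is just the per-person number):

# Each world is (population, happiness per person); everyone is equally happy.
big_world   = (10_000_000_000, 100)
small_world = (100, 101)

def total_score(pop, h):    # what the aggregationist/total view maximizes
    return pop * h

def average_score(pop, h):  # what the average view maximizes (uniform worlds)
    return h

print(total_score(*big_world), total_score(*small_world))      # 1000000000000 vs 10100
print(average_score(*big_world), average_score(*small_world))  # 100 vs 101

The total view prefers the big world by a factor of about a hundred million; the average view prefers the tiny one by a single unit per person.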

Separation, Augmentation, Fission, and Fusion

Consider some puzzling cases. First, consider separation. We see remarkable separation of mental faculties when connections between a person’s brain hemispheres are severed. See here and here to learn more. What happens to that self?

Second, consider augmentation. Imagine that connections between my brain and a computer were built that enhanced my mental faculties somehow. What would happen to my self?

Third, consider fission. Imagine a person who has their brain hemispheres split, replicated, and reassembled into two brains (presumably while conscious). What happens to that self?

Fourth, consider fusion. Imagine two people’s brains somehow fusing together into a single composite (or just two people’s brain hemispheres being recombined). What happens to their selves?

I think that in at least some of those four cases — separation, augmentation, fission, and fusion — you found thinking about selves puzzling. And maybe you, like me, are convinced that experiencing each of these cases would probably have some phenomenological valence. I think that there would be an experience associated with each of these things. But would some self really be experiencing it? I don’t think so. I don’t think selves experience things. Instead, I think that integrated experiences create illusions of selves. And this might seem a little weird, but we shouldn’t be afraid of this conclusion. And if it’s right, average utilitarianism suddenly becomes much more difficult to justify. Aggregationist utilitarianism, however, is just as coherent as before. It doesn’t care about any self, just about summing over anything hedonic.

Total or Prior Existence?

And there’s one last thing about aggregationism I want to touch on: total versus prior existence utilitarianism. A total utilitarian view says to straightforwardly maximize happiness. A prior existence view says that we don’t need to try to maximize the number of future persons. (Prior existence utilitarianism is similar to average utilitarianism, but it doesn’t say that killing the saddest people in society is necessarily good.) A prior existence utilitarian says that we need not bring lives into the world for the sole purpose of making them happy, because those people don’t exist and so can’t be harmed by never being created. For example, the prior existence utilitarian would say that it’s not morally good to euthanize one person in order to bring two equally happy people into the world, and the total utilitarian says it is. The prior existence view intuits well to most people until they consider the opposite case. If it’s not a good thing to actualize potential lives in order to make them happy because those individuals do not presently exist, then it would also seem to not be a bad thing to actualize potential lives which we know would be miserable and not worth living, also because they don’t exist. See the problem? (A problem in light of the hedonic thesis — not our intuitions!) The total utilitarian position may not be intuitive, but we shouldn’t be scared of conclusions like these. (And total utilitarianism does not imply that it’s a moral obligation to have a bunch of kids. There are complex counterfactuals to consider.)

Act Versus Rule Utilitarianism

This doesn’t fit anywhere else, so I’ll talk about it here. Sometimes people make a distinction between two things called act and rule utilitarianism, and I don’t think they should. Think of act utilitarianism as just utilitarianism, and think of rule utilitarianism as utilitarianism with an emphasis on the fact that it’s useful to follow rules and heuristics a lot. No act utilitarian disputes that heuristics for acting (like not killing people) can be useful, and no rule utilitarian treats rules of thumb as sacrosanct. If they did, they’d be a deontologist by definition. There is no philosophical debate to be had here. Any time that an act utilitarian and a rule utilitarian disagree, it’s a technical disagreement, not one of principle.

Wrapping This All Up

I feel like that gets most of my thoughts down on utilitarianism and its three components: hedonism, consequentialism, and aggregationism. There’s one more thing which I’d consider a major part of the debate which I haven’t talked about: the debate between negative utilitarianism and standard utilitarianism. Negative utilitarians come in three flavors: ones who think that only negative feelings matter, ones who think that nothing positive can outweigh a negative feeling if it exceeds a certain threshold, and ones who think that positive and negative feelings are simply asymmetric, with more of the scale lying on the negative side than the positive one (there are also positive utilitarian counterparts of each of these). I find threshold and asymmetric negative utilitarianism interesting, but I’ll leave discussing them for another blog post (I’ve since written that post, which can be found here).

Euconoclastic (adj.) \yu̇-ˈkä-nə,-klast-ic\: iconoclastic in a good and virtuous way. Find me at stephencasper.com.
