Stephen Casper, thestephencasper@gmail.com

Euconoclastic blog series

Put on your moral-philosopher cap and suppose that we all agreed that a good ethical system should place value on people’s welfare. For now, a precise definition of welfare isn’t needed, but let’s suppose that it has a lot to do with happiness and lack of suffering. That seems reasonable and workable enough, but when it comes to acting off of this hedonistic type of value system (or a similar one), we also need some sort of concept for how to best distribute welfare between persons.

So how should we think of the value of a social arrangement as a function of welfare’s distribution? When we formulate a function like this to optimize, it should reflect goodness and badness: desirability and undesirability. Our welfarist assumption gets us a long way in defining that, but there still might be more to morality than just maximizing whatever we think is good over everyone. Many people value equality to some degree, so here, I want to look at the idea that equality between persons, for equality’s sake, might be good.

A good default moral principle here would be that egalitarian considerations don’t matter at all and that we should only care about aggregate welfare: aka — utilitarianism. We can write the utilitarian distributive objective function as such:
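
$$\text{maximize} \;\; \sum_{i} H_i$$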

Where everything is everything (it doesn’t matter what units we use: nations, communities, people, brain regions, neurons, synapses, something else, etc., because we only care about net positivity), and $H_i$ is the net happiness of each unit $i$ we sum over.

But let’s consider egalitarianism as an alternative. I’ll use egalitarianism broadly to refer to any objective that tends toward less utility and a more even distribution than utilitarianism. And here, I intentionally mean to use “even” in a colloquial, vague sense. Given this, a potential “egalitarian” function might be to make every person as equal as possible:
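
One way to cash that out, taking $H_p$ to be person $p$’s net happiness, $\bar{H}$ to be the average across persons, and “as equal as possible” to mean minimizing the spread, would be:

$$\text{minimize} \;\; \sum_{p} \left(H_p - \bar{H}\right)^2$$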

Another egalitarian view would be to maximize the product of all nations’ average happiness (assuming they’re all positive):
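
With $\bar{H}_n$ denoting nation $n$’s average happiness, that would be something like:

$$\text{maximize} \;\; \prod_{n} \bar{H}_n$$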

Another would be to make things best for the synapse that is worst off:
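
That is, writing $H_s$ for the happiness attributable to synapse $s$:

$$\text{maximize} \;\; \min_{s} H_s$$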

Or perhaps we could aim to maximize the least happy first quartile of all brain stems:
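
For instance, with $Q$ denoting the least happy quarter of all brain stems and $H_b$ the happiness of brain stem $b$, that could be:

$$\text{maximize} \;\; \frac{1}{|Q|} \sum_{b \in Q} H_b$$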

We could go on. But when we consider egalitarian functions in general, how do they compare to the utilitarian one? One thing we can immediately notice is that a utilitarian objective will, by definition, lead us to states with an equal or greater amount of total welfare than any egalitarian one will. I think it’s fair to consider this a legitimate advantage of a utilitarian objective over an egalitarian one. In a sense, saying so is circular, but once we recognize that welfare is in some sense desirable, we can appeal to the goodness of what we define to be good (such as happiness itself) rather than to the maximization function. Could we at least think of more good as a tie-breaker? Even if we held egalitarian sympathies, wouldn’t it be reasonable to say that a Pareto optimal system has some sort of advantage over a Pareto suboptimal one? I think the answer is a solid yes, so it’s probably fair to say that the utilitarian objective has an advantage over egalitarian ones in this respect. Because of this (and its simplicity), I propose that we treat utilitarianism’s aggregative objective as the default principle. If we are to reject it in favor of an egalitarian one, we’d better find a coherent reason to trade utility for equality.

Human beings tend to like evenness. This shouldn’t be surprising. Our sense of morality stems from our cooperative nature as a species, and built into cooperative social norms is a sense of fairness and some quid pro quo. Nonegalitarian proposals undermine this, so it should make sense that we have an intuitive ethical sense of equality (also see the second-to-last paragraph of this post). But rational morality is under no obligation to appease our primal instincts. We need to be in the habit of putting away first-glance biases and using our heads.

And we might find that egalitarianism, while intuitively appealing, is awfully arbitrary and lacks real reasons to prefer it…

What is probably the most influential argument in favor of egalitarianism comes from the philosopher John Rawls. He posited that society ought to be organized the way a self-interested, rational agent would organize it from behind a “veil of ignorance” in the “original position,” not knowing which position in society they would occupy. Rawls asserted that a person in the original position would want a society arranged so that it is best for whoever is worst off, with any inequalities working to the benefit of those at the bottom of the disparities.

Rawls’ ideas were hugely influential, but also problematic. They do little to appeal to sound values and instead just hack people’s egalitarian intuitions. Imagine that we had a rational, self-interested agent, and we presented them with two buttons. Button A, if pressed, would give them 100 units of happiness with 99% probability and 0 units with 1% probability. Button B would give them 10 units of happiness with 100% probability. They are only allowed to press one button once. What would they do? Surely, any rational agent would understand the concept of expected value, do the easy math, recognize that the expected (average) value of pressing A is 89 units of happiness higher than that of pressing B, and take the gamble accordingly. But what if this person were in the original position, and instead of pressing a button, they had to choose a society? Would they choose one with 99 people who each have 100 units of happiness and one person with none, or a society with 100 people who each have 10 units of happiness? Rawls’ mistake is egregious. His own thought experiment argues for utilitarianism, not egalitarianism! Objectivity is good, and a central tenet of utilitarian reckoning, not an excuse to be irrationally risk-averse.
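
To spell out the arithmetic: pressing A is worth 0.99 × 100 + 0.01 × 0 = 99 expected units of happiness, while pressing B is worth 10, a gap of 89. The society choice is the same calculation per capita: the unequal society offers (99 × 100 + 1 × 0) / 100 = 99 expected units per person behind the veil, versus 10 in the equal one.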

Beyond Rawlsianism, there’s another problem with egalitarianism that I’d like to discuss, and I believe this argument is fairly novel; I’ve never come across anyone discussing or writing about it in a similar way before.

I alluded to this above: a question that egalitarians need to face is what they choose to equalize around. It might seem obvious that the fundamental unit of morality is the individual (person, animal, or potentially machine). But not so fast. Why not equalize around synapses, neurons, brain regions, subnetworks, brain hemispheres, social circles, nations, or civilizations? I speculate that to most people, the individual seems to be the right unit to equalize around, and at first glance, a plausible justification for this could be that consciousness seems to emerge at the individual level. But this leads to some problems.

First, if we try to equalize around individuals, we have to treat each individual with any moral status at all as being of equal worth. Consider a bipolar person, an average person, a severely mentally disabled person, a chimpanzee, an octopus, a chicken, a fish, a flea, a sponge, a tree, a bacterium, a virus, a protein, and a paperclip. Of those, pick whichever ones you think have any moral status at all. Would you consider all of them perfectly equal? How could we justify treating moral status as just some binary thing with a well-defined ingroup and outgroup without some ultra-arbitrary criterion? If you think this is a good idea, think of mentally disabled people, braindead people, regular fetuses, anencephalic fetuses, human-animal hybrids, higher animals, lower animals, other organisms, all varieties of intelligent machines, etc., until you find something in your grey area. What should we do about such beings? If we think that everything falls neatly into either a category of full moral status or zero moral status, we can be egalitarian. But how could we draw such a cut-and-dried line without literally just making up criteria? I think because of this, it’s much more plausible to say that moral status comes in degrees. Pretending that there exist lines in the sand in the space of all possible individuals is not going to cut it.

Second, the idea that consciousness resides at the level of the individual (or, more precisely, their nervous system) is doubtful. Theories of consciousness are notoriously difficult and not really possible to verify scientifically. But whatever theory of consciousness we might use, involving information, integration, computation, feedback, etc., I don’t see how we could coherently say that there’s something essential about the “completeness” of an organism’s nervous system that makes it conscious or not. From a materialist perspective, it seems that consciousness must be an organizational or functional property of matter, and that has nothing to do with what we socially consider to be an individual. So yeah, I think certain thinking modules probably have their own level of consciousness.

Yet, at least for me, there seems to be a sort of oneness to my mind — that my consciousness isn’t separable at a level lower than me or my nervous system. But I think the impression of inseparable selves might be an illusion. Consider some puzzling cases.

First, separation (see more here and here). We see remarkable separation of mental faculties when connections between a person’s brain hemispheres are severed. What happens to that self?

Second, augmentation. Imagine that connections between someone’s brain and another computer were built that enhanced their mental faculties somehow. What would happen to that self?

Third, fission. Imagine a person has their brain hemispheres split, replicated, and reassembled into two brains (presumably while conscious). What happens to that self?

Fourth, fusion. Imagine two people’s brains somehow fusing together into a single composite (or just two people’s brain hemispheres being recombined). What happens to their selves?

I think that in at least some of those four cases (separation, augmentation, fission, and fusion), you’ll find it puzzling to say how to think of the selves involved. And maybe you, like me, are convinced that experiencing each of these cases would probably come with some phenomenological valence. But would some self really be experiencing it? I don’t think so. I don’t think selves experience things. Instead, I think that integrated experiences create illusions of selves. And if so, egalitarianism suddenly seems much harder to work with. How could an egalitarian go about finding the right hedonic unit to equalize around amidst the zoo of different computational modules that do or could exist? Utilitarianism, though, is unaffected by this criticism. It doesn’t care about any self, just about summing over anything hedonic.

But how important is it for the egalitarian to answer the question of what to equalize around? Very. If we equalize around the wrong unit, we can find that the value — or even the sign — of our objective function can come out differently because of Simpson’s Paradox (which is only nominally a “paradox”).
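
As a toy sketch with made-up numbers: suppose societies X and Y each contain the same two nations. In nation 1, X’s citizens average 8 units of happiness and Y’s average 7; in nation 2, X’s average 3 and Y’s average 2. Comparing around nations, X looks better across the board. But if nation 1 holds 10 of X’s 100 people and 90 of Y’s 100, and nation 2 holds the rest, then per person X averages only (10 × 8 + 90 × 3) / 100 = 3.5 while Y averages (90 × 7 + 10 × 2) / 100 = 6.5. The same pair of distributions gets opposite verdicts depending on the unit we aggregate around.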

Absent a way to redeem the traditional veil-of-ignorance justification, the egalitarian paradigm stands on shaky ground. And considering the absence of a justifiable unit to equalize around, I think that egalitarianism can be exposed as a surprisingly bizarre conclusion propped up by mistaken intuitions.

Next, I’d like to quickly give my take on a common argument against aggregation: imagine a utility monster — someone who was somehow so cruel and hedonically acute that they gained more happiness from the death of another person than that person would lose. Should we just feed this monster and let everyone else die? (Neglect the expected value of progeny here.) According to a normal person’s intuitions, probably no. But according to expected value theory and utility maximization, yes. Seems awful? So what? This type of hypothetical situation is so strange and unlike the real world that our intuitions can only be expected to fail us miserably. This isn’t a reason — utility monster stigma is a conclusion masquerading as one. To use this argument, we’d need to come up with something better than “This would…uhhh…clearly be bad.”

And before ending, I need to make what is probably the most important point I’ll make. As a formal theory of morality, egalitarianism may be bad, but that doesn’t mean that egalitarianism and utilitarianism look very different in practice. Remember diminishing returns: my millionth dollar gives me much less marginal utility than my thousandth dollar. Because of this, there’s a profound convergence between wanting to promote equality and wanting to promote utility. In the real world, the pursuit of more overall good and the pursuit of more equality happen to look very similar.
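
For instance, if my utility in money were roughly logarithmic, u(w) = ln(w) (a common toy assumption, not essential to the point), then the marginal utility of a dollar is u′(w) = 1/w: my millionth dollar is worth about a thousandth of what my thousandth dollar was worth to me. A utilitarian who can move a dollar from someone with a million to someone with a thousand will therefore move it, which is exactly what an egalitarian would recommend.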

Nonetheless, if and when achieving equality and maximizing utility do conflict (such as in rare cases where one person must be sacrificed to greatly help many), we should “shut up and multiply.”

Euconoclastic (adj.) \yu̇-ˈkä-nə,-klast-ic\: iconoclastic in a good and virtuous way. Find me at stephencasper.com.
