One of the biggest advances in human thought over the past century or two is the recognition that many of our values and structures are much more arbitrary than we think.
We have come to recognize that different cultures often have different agreed-upon norms just as they have different strengths and weaknesses. There’s nothing objectively more sensible about shaking hands as a greeting instead of cheek-kisses or bowing; different cultures just agree more or less randomly on each of them.
But what about when you’re talking about things beyond just everyday customs? What about the objectivity of decisions of morality?
The most obvious problem with statements like “there is no objective morality” is that almost no one actually acts on that belief in the way they claim to. If someone did, most of us would say they’re somewhere between “kind of a jerk” and “a complete monster.”
An extreme example: Suppose I come to your house, burn it to the ground, kill your loved ones, and give you a horrible disease to go with your burns. Philosophical quibbles aside, just about everyone agrees this is a bad thing to do. Some extreme people might say my behavior is justified if you’ve done something horribly wrong, but no one really advocates doing this to innocent people for no reason (or even tolerates it).
In other words, we treat “don’t burn houses down” as more or less an objective moral principle, even while many of us actively claim not to have objective moral principles.
This seems a bit silly.
On the other hand, we’ve found that old norms that were almost as universal as “don’t burn houses down” have ended up being arbitrary, even stupid. For example, it was once common to justify slavery not only as tolerable, but as a moral good. A not-insignificant number of pro-slavery advocates said essentially, “Aren’t we helping show these savages how to live properly [that is, like us]?” With the exception of a few people who I doubt are going to listen to anything I say anyway, I think most would agree that these pro-slavery advocates were very, very wrong.
“But wait,” says the utilitarian or the virtue-ethicist or the natural-law advocate or the Christian or the Jew or the Muslim or the Buddhist or whoever. “I do have an objective principle that lets me judge whether actions are good or bad!”
And I hear you. I’m a utilitarian through and through. But I’m not sure I can justify my utilitarian beliefs themselves to any objective standard.
I can provide arguments for why utilitarian principles produce good outcomes, but then I have to justify why a world that people like to live in is “better” than a world people don’t like to live in.
Most people, when asked to justify such a thing, will say, “Obviously it’s better to be happy than sad.” And they’re right: It is obvious to pretty much everyone.
Some people who are particularly inclined to logical justification, however, tend to start sputtering at this point, constructing elaborate philosophical frameworks full of complex jargon to justify themselves. They spend lots of time with people tied to train tracks in front of trolleys or easily murdered hospital patients.
But once you replace the jargon with its meanings, those frameworks usually come down to “obviously it’s better to be happy than sad” — but in a form more marketable to philosophy journals. And, in a sense, they’re also right to do this: It is important to dig into the mechanics and details of why we believe what we do, particularly in a world where we’re interested in training machines to implement moral behavior on their own. These mechanics are useful for handling difficult edge cases and generalizing to future situations we may not have encountered yet.
How do we resolve this conflict? Well, I’m not sure we do, but at a minimum we can put a name to it.
Instead of the traditional split between objective (dictated by principles outside an individual or their specific circumstances) and subjective (dictated by an individual’s whims and feelings), I propose adding contingent as a third option.
Contingent “facts” are arbitrary in the sense that they could have been different in a world not so different from our own. But they have a kind of objective truth because we live in this world and not in some other one. Contingent moral principles are arbitrary in the sense that an action’s rightness or wrongness might have been different in another world. But the fact that we live in this world makes the morality of such an action far clearer: In our world, it’s either firmly moral or firmly immoral.
For example, is it objectively wrong for me (an American) not to leave a tip for my waiter at a restaurant? Well, no, not really, because other cultures have different norms around tipping, and I don’t think I can prove that they’re wrong to have those norms. But, at the same time, I recognize that my waiter’s pay is structured around the assumption that I’ll tip, and that my beliefs about their performance are encoded in my decision to tip or not.
In other words, “You should tip your waiter” is a contingent moral principle in American culture. It’s not objective from a large enough perspective, but within the local society I live in, it is a sensible moral principle. Other examples include “you should say ‘hi’ to someone you know when you enter a room” or “you should not bother a stranger on the bus.” Other cultures might have different norms around these things: “don’t bother someone who’s possibly busy” and “it’s generally okay to strike up public conversations.” Those norms have a real effect on the morality of my actions (at least from a utilitarian perspective) because they affect how my actions impact the people around me.
Similarly, does steak taste better than chalk? Well, yes, obviously it does. And yet this isn’t quite objective. Steak tastes better than chalk to us (well, to most of us) because we’re humans with particular evolved tastes. But many animals will intentionally lick various forms of stone for salt or minerals, and my guess is that doing so is somewhat pleasurable for them (otherwise they wouldn’t do it spontaneously). Many of those same animals would outright refuse to eat meat. In other words, “steak tastes better than chalk” is a contingent truth with respect to human beings. It makes sense to treat it as more or less objective with respect to humanity, but it breaks down once you try to go beyond that.
Here’s a more controversial example. Is the statement “white people are stupid” different from “black people are stupid”? I would argue they are different — but contingently, not objectively, so. They’re different because the specific history of the world we live in creates an asymmetry, not because the statements themselves are inherently different. If, in another world, history had played out precisely in reverse (with black slave-owners and white slaves, etc.), the morality of these two statements would switch as well. But because we don’t live in that world, there is a real asymmetry between the two — or at least I would argue there is.
In short, “X is contingent with respect to Y,” where X is a statement and Y is a circumstance or limited setting, means “X is true if you limit yourself to the setting Y.” In other words, X could have been false, but it happened to not actually be false within a particular setting. Y could be a culture, a history, a planet, etc.; the point is that Y is essentially arbitrary but X is not arbitrary once Y is specified.
(Alternate phrasing for those who like statistics: “X is contingent with respect to Y” means the probability of X given Y is close to 1, but the probability of X absent prior information is more moderate. In many cases, the statement isn’t even sensible without Y specified: It’s not really sensible to say something “tastes good” without saying to whom it tastes good.)
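That statistical phrasing can be written compactly. A sketch in my own notation, not from any formal source:

```latex
% "X is contingent with respect to Y":
P(X \mid Y) \approx 1,
\qquad \text{while } P(X) \text{ (with no information about } Y\text{) is far from both } 0 \text{ and } 1.
```

The second clause is what separates contingency from plain objectivity: an objective truth would have probability near 1 even without conditioning on the setting.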
The existence of contingency creates a spectrum between objective facts and subjective judgments. “Objective” comes to mean “contingent on a scale much bigger than the one we’re looking at” and “subjective” comes to mean “contingent on a scale much smaller than the one we’re looking at.” In most cases, there’s not a bright line between the categories, but they’re useful anyway in the same way that “certain” is useful shorthand for “probability very very close to 1.”
Interestingly, physics already provides an example of this sort of thing in the form of something called “symmetry breaking.”
Consider a ball sitting on top of a tall hill. Nothing about this system is asymmetric: Both the ball and the hill are symmetric all around, and there’s no particular reason for the ball to fall one way or another. Yet the system is unstable, and so the ball will roll in one direction or another, essentially at random.
Picture three diagrams: a ball at the bottom of a single symmetric well, and then a double well with the ball settled in the left valley or the right one. The first diagram corresponds to objective truths: the ball will always, in all possible settings, end up at the bottom of the well, and there’s no difference between the left and right sides. The second and third diagrams correspond to contingent truths: once the ball rolls to one side, that side is distinguished. The underlying laws remain symmetric, but the system does not, because its current state has become asymmetric.
It’s true that the ball could have rolled to either side, but the ball did roll to the right. It’s true that the statement “but it could have rolled to the left” contains some value, but only if we remember that it didn’t roll left.
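The ball-on-a-hill picture is easy to simulate. Here’s a toy model of my own (not from any physics source): gradient descent on the symmetric double-well potential V(x) = x⁴ − x², started from a tiny random nudge off the unstable peak at x = 0. The governing law is perfectly left-right symmetric, yet every run ends up committed to one side.

```python
import random

def settle(seed, steps=5000, lr=0.01):
    """Roll a ball down the symmetric double-well V(x) = x^4 - x^2.

    The potential is symmetric about x = 0, but x = 0 is an unstable
    peak: any tiny nudge grows until the ball settles into one of the
    two minima at x = +/- 1/sqrt(2).
    """
    random.seed(seed)
    x = random.uniform(-1e-6, 1e-6)  # tiny, direction-agnostic perturbation
    for _ in range(steps):
        x -= lr * (4 * x**3 - 2 * x)  # step downhill along -dV/dx
    return x

# Each run lands at roughly +0.7071 or -0.7071; which side is
# contingent on the seed, but *some* side is guaranteed.
results = [settle(seed) for seed in range(10)]
print([round(x, 4) for x in results])
```

The side the ball picks is the contingent fact; the shape of the potential (and that the ball must end up in one of the two minima) is the objective one.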
What do we do with this?
Let’s start with a general observation: Most things are contingent if you zoom out enough. Cultural customs tend to be contingent on the level of a culture; big moral laws like “don’t kill your kids” tend to be (or at least appear) contingent on the level of a species. Zoom out far enough and you’re basically left with the laws of physics (which might be contingent on the level of our particular universe) or the laws of mathematics (which may be truly objective, though it’s hard to say).
But here’s a contingent fact: Human beings have a particular scope they live in. We live on Earth, not some other planet. We’re a particular species with a particular evolutionary history. We exist in a universe with specific physical laws. We (most of us) like the taste of steak, and we don’t like the taste of chalk.
So, when you’re operating in a human context, anything contingent on a scale greater than humanity is effectively objective. On the other hand, things contingent on scales smaller than humanity are effectively subjective.
This gives us an escape from the logical traps around morality and local norms. It’s true that the universe, which is at a much larger scale than humanity, doesn’t dictate moral laws for us. But moral laws govern humans and our immediate surroundings, and as a result, they can be effectively objective for our purposes. We can learn about them by studying the contingent context we live in.
This context allows you to say “burning down my house and killing my family is bad.” There can be some detail to exactly why it’s bad: A utilitarian says it causes suffering, a virtue ethicist says good people don’t burn down houses, a Christian says thou shalt not kill, and so on. But these are just ways of formalizing what we already know to help us with harder decisions. The fact that they all come to the same answer in this and many other settings isn’t a coincidence — it’s a consequence of all of them existing within the same human context and therefore trying to formalize the same contingent truths within that context.
In other words: The universe might not dictate human values — but humans do dictate them as the species we actually are, and within that context, our values aren’t arbitrary. We like steak because we need protein in our diets, and we don’t murder each other because we’ve collectively agreed to structure our feelings to remove the urge to do so. Note that these aren’t really objective: We need protein because we have a particular evolutionary history, and we don’t murder each other because we have a particular socio-evolutionary history. These things could have been different, but they aren’t.
The universe doesn’t dictate our values. But our values are still useful, and the fact that they’re contingent on a universal scale doesn’t make them not meaningful on a human scale. There is nothing wrong with pursuing things beyond humanity, but we should remember that most of the time we’re dealing with humanity and that’s the context people care about most.
One important question remains: How contingent are things?
For example, the fact that fish have fins seems to be contingent. After all, it depends on their (incredibly complex and highly random) evolutionary history, right? And yet, dolphins — who are related to fish only at a great distance — have fins too. In other words, fins seem to have some advantage that is contingent on a much larger scale than simply the evolutionary history of fish. (Note that this doesn’t necessarily make them objective. Perhaps fins are easily evolved based on the common body plan of vertebrates, rather than being the best way to smoothly travel through water.)
This sort of thing happens all the time, and many apparently contingent things turn out to be more objective than they look. Game theory, for example, tells us that certain strategies tend to win and other strategies tend to lose, and so, in a context where only “winners” propagate (in other words, in a context where evolutionary principles apply), certain strategies become effectively objective.
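Here’s a toy version of that game-theoretic point, again a sketch of my own: standard replicator dynamics on the classic Hawk-Dove game (payoffs shifted by a constant to stay positive, which doesn’t move the equilibrium). Whatever strategy mix the population starts with, propagation of winners drags it to the same stable mix, so the “right” frequency of hawks is effectively objective even though no one decreed it.

```python
def replicator_step(p, payoff):
    """One generation of discrete replicator dynamics.

    p: fraction of the population playing Hawk.
    payoff[i][j]: payoff to strategy i against strategy j
    (0 = Hawk, 1 = Dove). Strategies propagate in proportion
    to their average payoff -- only "winners" spread.
    """
    f_hawk = p * payoff[0][0] + (1 - p) * payoff[0][1]
    f_dove = p * payoff[1][0] + (1 - p) * payoff[1][1]
    f_avg = p * f_hawk + (1 - p) * f_dove
    return p * f_hawk / f_avg

# Hawk-Dove with resource V = 2 and fight cost C = 4, all payoffs
# shifted by +2 so they stay positive.
payoff = [[1, 4],   # Hawk vs Hawk, Hawk vs Dove
          [2, 3]]   # Dove vs Hawk, Dove vs Dove

for start in (0.01, 0.5, 0.99):
    p = start
    for _ in range(300):
        p = replicator_step(p, payoff)
    print(f"start={start:.2f} -> hawks={p:.4f}")  # converges to V/C = 0.5
```

The particular payoffs are contingent (a different world could price fights differently), but once they’re fixed, the equilibrium they imply is not: every starting population is pulled to the same place.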
On the other hand, some things that look objective turn out to be remarkably contingent, or even subjective, on the scale of human contexts. Language, for example, has common structure in some respects, like the existence of nouns. In other words, it is contingent on at least a human scale in those respects, probably because of how the human brain is wired. But in other respects, like pronunciation, language can vary greatly over the course of even a few miles: In those respects, it’s contingent on a small scale.
These are big questions, questions well beyond the ability of any one person to answer. Whole scientific fields exist around them, and that’s why this is an article about a way of thinking — not an article full of answers, but one to help you ask a new sort of question.
For myself, I’m particularly interested in the question of how contingent morality turns out to be, and that’s why I’m writing this article first. It’s one part of a framework I’ll need. But the framework is useful on its own, and I hope it’ll inspire some useful thought outside of what’s to follow.