Morality Doesn’t Exist

Adam Zerner · Published in I. M. H. O. · Sep 28, 2013

Morality doesn’t exist. It doesn’t have a meaning that is widely accepted. Everyone has a different opinion about what is “moral”. When a word doesn’t have a precise meaning, the thing it names doesn’t really exist. How can you say that morality exists if there isn’t anything you could reliably map the word to?

Let me explain.

“Should” Requires an Axiom

It seems pretty well accepted that morality is a description of the way that you “should” act. People often debate how you “should” act, but they don’t debate that morality is the way that you “should” act.

In order to determine how we “should” act, let’s examine the word “should”.

What does a parent mean when they say, “you should get good grades”? They mean that you should get good grades SO you can get a good job SO you can make money SO you can live comfortably SO you can be happy. When people use the word “should”, they’re saying that you should do A in order to get B (or something like that).

B is called an axiom. Every time you use the word “should”, there is always an underlying axiom. Most of the time, the axiom is implied. When a parent says “you should get good grades”, it’s implied that the reason for doing so is that it’ll make you happy in the long run. People don’t usually explicitly state their axioms, but they’re always there.

This is the way the word “should” works. You have an axiom, and you say that you should do A, in order to achieve that axiom.

Axioms are Arbitrary

There’s no “correct” axiom; they’re arbitrary. You could use whatever axioms you want. For example, I could say, “you should kill people… if you want to go to jail”. Given the axiom that you want to go to jail, you should kill people.

So then… what are people arguing about when they argue about how people “should” act? I think that they’re arguing about what rules will lead to the happiest society. (Obviously there’s some variation; not everyone argues about this specifically. However, I think that a lot of people do, and I think that it makes sense to argue about this.)

You might think that this is a trivial point to make, but it’s not. First of all, people often argue endlessly without making any progress, because they each have different axioms in mind. Secondly, the question of which rules will lead to the happiest society is much more objective than the question of “what is moral”. Objective questions are easier to answer. You could make progress in finding out which rules make society happiest. You can’t really make progress in finding out “what is moral”.

I said that “it makes sense to argue about the rules that’ll lead to the happiest society”. This raises two questions:

  1. Why happiness?
  2. Why the happiest society?

Why Happiness?

First, let me explain what I mean by happiness.

Imagine that you are a blank slate. To start off, you experience a certain conscious state; call it mind-state A. Then you experience mind-state B. You can now compare the two and say which one was more desirable. Imagine that you now experience a third… mind-state C. You could now rank A, B, and C according to desirability. Extending this logic, imagine that you experience every possible mind-state. Now you could make a spectrum of the desirability of every possible mind-state.

Happiness = the desirability of your mind-state.

People might say that they care about other things independently of their happiness (other people, the world, science, progress). I’m not sure how to say this exactly but… they don’t.

Imagine a husband who claims to care about his wife. Now imagine that regardless of what happens to the wife, good or bad, the husband is completely unaffected. His mind-state is the same. It is completely independent of the wife. Does this person truly care about his wife? It seems that in order to care about something, your mind-state has to be altered by it.

So… what about a husband who is affected emotionally by what happens to his wife? What does he really care about? Well, we’ve established that his caring about his wife is conditional on his wife causing changes in his mind-state. If his wife didn’t impact his mind-state, he wouldn’t care about her. Furthermore, the extent to which he cares about her is directly related to how much she causes changes in his mind-state. So… if the husband’s caring about the wife is directly caused by the impact she has on his mind-state, what is he really caring about?

I think it’s been established that the wife is just an intermediate. She only matters because of her impact on his happiness. To say that she matters independent of that would have to mean that he cares about her independent of what happens to his utility, and that isn’t true (remember what happened when the wife had no impact on his mind-state…). You can’t say that you truly and genuinely care about X if the only reason you do so is because of its impact on Y. If that is the case, then it is actually Y that you care about.

Think about what it would actually mean to say that you care about something independent of your mind-state. Saying that would mean that it has some intrinsic value to you. So maybe you’d say that you’d want to preserve this thing that you care about, even if it means that you end up in the yellow section of the mind-state spectrum instead of the more desirable orange section. But how can you say that you “want” this? By definition, you “want” the orange section more than the yellow one. You could say that you “value” the state of yellow + thing you care about over the state of orange, but you can’t say that the former state will be preferable. By definition, it isn’t.

This is meant to establish that what we actually care about is our mind-states. Our happiness. Going back to the original question of why we should talk about making rules that give people happiness: because happiness is what people actually want.

Why Society?

The way I think about it is like this: for the most part, people all care about each other and want each other to be happy. However, you care more about your friends and family than about random people in your town. And you care more about random people in your town than you do about random people in China. So why not talk about the question of what rules will lead to the happiest society, with the people you care about weighted more heavily?

Imagine that there was an apocalypse and everyone in the world had to start over and create new rules for society. There you are, gathered with your family, your town, and the people of China. You might prefer rules that preferentially treat your family > town > China. However, you can imagine that everyone else is thinking the same thing. Thus, you have to compromise and make rules that treat people equally.

It may be preferable to you to have rules that give special treatment to the people you care about. However, because that’s unrealistic, you have no expectation of it. Think about how ridiculous it would be if you were in a discussion with someone about taxes and you said that you don’t like the government’s tax policies because they don’t exempt you and your friends from taxes.

When you talk with someone about what “should” be done (politics, morality, economics…), you’re really in a situation similar to rewriting society’s rules after the apocalypse. You might prefer certain rules, but you know that other people have preferences too. Thus, knowing that you have to compromise and make rules that treat people equally, you talk about what “should” be done to accomplish this. I think the best way to do this is via the Veil of Ignorance.

Conclusion

It may seem uncomfortable to admit that there is no good and bad. No right and wrong. No should and shouldn’t. No “morality”. However, I think people implicitly have axioms of happiness for all these things. Something might be “good” because it leads to a happier life for you. Something might be “right” because it leads to a happier outcome for society. Once you realize that the axiom of happiness is implicit in your usage of these words, hopefully you’ll realize that not much has changed; you just understand and can use these words more precisely now.
