Moral Reasoning and the Scientific Method

Morgan
Published in CAUSE Community
9 min read · Jul 28, 2023

At the core of activism is a desire for unity. More specifically, people come to feel so strongly about issues that they feel an overwhelming urge to act (to convince others of their views, and to effect systemic change) because different people adopt different moral frameworks. They have different opinions about how the world should be.

While this seems like an obvious statement, it is helpful in an era of unprecedented division to focus on what the process of moral reasoning actually looks like. I am a computer scientist, and so my instinct is to pin precise definitions to this nebulous notion and, in doing so, prove that it has certain properties. To me, this makes moral reasoning itself easier to reason about, and I hope it might do the same for others as well.

Another reason for this exercise is, I feel, to encourage empathy. Some issues, particularly those inspiring activism, have a tendency to be so divisive and to be argued so passionately that those on the opposite side see each other as wrong-headed or irrational. We can feel that their minds must operate fundamentally differently from our own. In this way, we often neglect the empathetic aspect of rhetoric, with each side simply becoming entrenched in the views they already held, and no progress being made. Here, it can be helpful to take a step back and try to see the wood for the trees — to see, in the abstract, where differences of moral opinion come from.

What is a Moral Framework?

In the endeavour to precisely describe the process of moral reasoning, it will be useful first to define the goal of that process. In essence, a person’s journey of moral reasoning is the process by which they arrive at their favourite moral framework. Here we will define a moral framework to be some sort of decision procedure for determining, given how the world currently is, what actions should be taken. In mathematical parlance, we would call this a function from world states to actions.
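To make that definition concrete, here is a minimal sketch in Python (my language of choice as a computer scientist). The types WorldState and Action are hypothetical stand-ins; nothing here depends on what they actually contain:

```python
from typing import Callable

# Hypothetical stand-ins: the argument works whatever these contain.
class WorldState: ...
class Action: ...

# A moral framework, in this formulation, is simply a function that
# takes how the world currently is and returns what should be done.
MoralFramework = Callable[[WorldState], Action]
```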

Some moral frameworks have names. Loosely speaking, Utilitarianism is the name given to the moral framework corresponding to the decision procedure “perform whichever action maximises the total happiness, or utility”. Deontology is the name given to a different decision procedure, guided by the notion that there are certain unchanging rules which you must always follow; for example, that lying is wrong under any circumstances.

There are, of course, pros and cons to both of these frameworks, and I largely won’t be discussing them here. Instead, the point I’ll make is that every conceivable way of determining desired actions based on a state of the world is a moral framework, whether or not it has a name. Together they form a continuum. The goal of moral reasoning, therefore, is simply to find the one you like best.

This raises a further question: What does it mean to like one moral framework better than another one? Does the goal of finding your favourite one even make sense? These sure are some juicy questions, and we’ll explore them in the next section.

Moral Frameworks: A Total Order

The goal of this section is to define what we mean when we say that we want to find the “best” moral framework. One thing to note before we get started is that we don’t necessarily need to come up with some objective metric of “goodness” just yet. This is lucky for us, as that is a task that has kept ethicists and philosophers busy for thousands of years. Instead, we will consider an arbitrary such metric, and discuss properties that any such metric must have, whatever it might end up being.

A natural-seeming way to decide which moral framework is best would be to rank them all and pick the highest-ranking one. So suppose we took a random handful of moral frameworks: would it be possible to rank them? The answer may seem obvious (of course we can rank things in order of preference), but not every set can be ranked in this way. For example, we can’t rank the possible moves in a game of Rock Paper Scissors to find which one is best. Mathematicians call a set which can be ranked a “total order”, and luckily they provide us with some useful tests for knowing when a set is a total order.

Technically speaking, in order to hope to define a total order, you need both a set and a relation between elements of that set. In our case, this relation is the notion of one moral framework being “at least as good as” another one, for some (as yet ill-defined) definition of “good”.

In order to prove that we have a total order, we need to demonstrate the following properties of our set of moral frameworks and our “at least as good as” relation. Firstly, is the relation reflexive? In other words, is every moral framework at least as good as itself? This one is an obvious yes. Way to go, us 🙂.

Next, is the relation transitive? That is, if we like framework A at least as much as we like framework B, and we like framework B at least as much as we like framework C, does that necessarily mean that we like framework A at least as much as we like framework C? (Notably, this is the test that the Rock Paper Scissors example fails.) This one is a little trickier to justify, but I’d say that for the most part, this one is another yes.
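The Rock Paper Scissors failure is easy to check mechanically. A minimal sketch, where the “beats” relation is just the usual rules of the game:

```python
# Rock Paper Scissors: the "x beats y" pairs from the usual rules.
BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def beats(x: str, y: str) -> bool:
    return (x, y) in BEATS

# Transitivity would require: rock beats scissors, and scissors beats
# paper, therefore rock beats paper. But paper beats rock instead.
assert beats("rock", "scissors") and beats("scissors", "paper")
assert not beats("rock", "paper")  # the chain breaks, so no ranking exists
```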

The third property our relation needs to have is antisymmetry. This one is yet more subtle. In plain English, antisymmetry in this case would mean that if we like framework A at least as much as framework B, and we like framework B at least as much as framework A, then they must be the very same framework. To put it even more simply, there are no two distinct moral frameworks that we like equally. To me, there seems to be no obvious reason why this should be true in general, and you can probably see how this might cause issues for our hopes of creating a ranking: how can we rank one moral framework above another if we like them both equally? However, we can cheat a little bit by saying that the elements of our set are not the individual moral frameworks, but are instead sets of moral frameworks between which we’re indifferent. In other words, we want to create a ranking where tied positions are allowed.
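That trick is what mathematicians call taking a quotient. A minimal sketch, assuming purely for illustration that our preferences happen to be summarised by numeric scores, with equal scores meaning “indifferent between”:

```python
from collections import defaultdict

# Hypothetical scores over four abstract frameworks; equal scores
# mean we are indifferent between those frameworks.
scores = {"A": 3, "B": 2, "C": 3, "D": 1}

# Group frameworks into indifference classes: these classes, not the
# individual frameworks, are the elements we actually rank.
classes = defaultdict(set)
for framework, score in scores.items():
    classes[score].add(framework)

ranking = [classes[s] for s in sorted(classes, reverse=True)]
print(ranking)  # e.g. [{'A', 'C'}, {'B'}, {'D'}], a ranking with a tie
```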

The fourth and final question we have to ask in order to determine whether we have a total order is whether the relation is total: can any two moral frameworks be compared to one another? Can every framework be put somewhere in the ranking? This one again seems like a yes. It’s difficult to even interpret what it might mean for this to be false: to compare two moral frameworks and neither prefer one, nor the other, nor be indifferent between them.
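All four tests can be checked mechanically for any finite set and relation. Here is a sketch, reusing the illustrative scores from above as a stand-in for “at least as good as”. Note how antisymmetry fails on the raw frameworks (A and C tie), which is exactly why the indifference-class trick was needed:

```python
from itertools import product

frameworks = ["A", "B", "C", "D"]
scores = {"A": 3, "B": 2, "C": 3, "D": 1}  # illustrative, as before

def at_least_as_good(x: str, y: str) -> bool:
    return scores[x] >= scores[y]

reflexive = all(at_least_as_good(x, x) for x in frameworks)
transitive = all(
    at_least_as_good(x, z)
    for x, y, z in product(frameworks, repeat=3)
    if at_least_as_good(x, y) and at_least_as_good(y, z)
)
antisymmetric = all(
    x == y
    for x, y in product(frameworks, repeat=2)
    if at_least_as_good(x, y) and at_least_as_good(y, x)
)
total = all(
    at_least_as_good(x, y) or at_least_as_good(y, x)
    for x, y in product(frameworks, repeat=2)
)

# Antisymmetry fails because A and C tie; grouping them into one
# indifference class (the previous trick) restores it.
print(reflexive, transitive, antisymmetric, total)  # True True False True
```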

With these four tests passed, we have a total order. We can assign numbers to moral frameworks, and the goal of moral reasoning can be precisely defined as simply figuring out which numbers to assign to which frameworks, and finding the framework with the highest number. It’s a maximisation problem. Of course humans aren’t literally assigning numbers to the moral frameworks they consider and comparing the numbers, but what we’ve shown is that any reasonable definition of moral reasoning is equivalent to this process, and that’s useful. In particular, it’s useful because this numerical process is one which mathematics is well-equipped to reason about.
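In code, the whole endeavour compresses to a single line, with the entire philosophical difficulty hidden inside one name. A sketch, where “goodness” is the still-undefined metric whose existence the total order guarantees (the numbers are purely illustrative):

```python
frameworks = ["A", "B", "C"]
goodness = {"A": 7, "B": 5, "C": 6}  # purely illustrative numbers

# Moral reasoning, reduced to its mathematical skeleton: a maximisation.
best = max(frameworks, key=goodness.get)
print(best)  # A
```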

I do understand the irony of this section: having spoken about taking a step back and seeing the wood for the trees, we are now deep in mathematical terminology and rigorous proofs of abstract properties. It’s worth again taking a step back and remembering why those questions are important. Namely, we have now shown that it makes sense to ask how we go about assigning those numbers. We have shown that if we come up with a method of comparing two moral frameworks and saying that one is better than the other, we will have a way of finding the best one.

What’s the Difference Between Science and Mathematics?

It is common to believe that scientific reasoning and mathematical reasoning are very similar processes — there’s often a lot of maths involved in doing science, and mathematics is said to be an unreasonably effective tool for the natural sciences. However, in this section I will argue that when you strip them down to their fundamentals, maths and science represent opposite processes.

With mathematical reasoning (and arguably, logical reasoning in general), we start with a set of axioms. These are things we take to be true without question, not because we don’t care to find out whether they are in fact true, but because trying to do so would be missing the point. We know that they are true because we define them to be. Mathematics gives us the tools to use these assumptions to work out what else must be true as a consequence. If we had chosen some other axioms, we would be able to derive a different set of subsequent truths. The choice of axioms is, almost by definition, arbitrary.

This can seem unsettling at first. We all know “2+2=4” is a true mathematical statement, and seemingly not because of any choices or assumptions we’ve made, but because it just is, right? Well, not quite. Underlying modern mathematics is a set of axioms called the Zermelo–Fraenkel axioms of set theory. These particular axioms turn out to be useful precisely because they allow us to define notions such as addition which map closely onto phenomena in the real world, and which conform to our intuitions. However, it’s important to remember that, however useful these axioms are, we could easily pick a different set of axioms and derive a different set of true consequences.
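As a small illustration of this direction of travel, here is a snippet in the Lean theorem prover (a toy of my choosing; nothing in the argument depends on it). Lean builds the natural numbers and addition out of its foundational rules, so “2+2=4” is derived rather than assumed:

```lean
-- "2 + 2 = 4" is not a primitive fact here: the natural numbers and
-- addition are defined from the foundations, and the statement then
-- holds simply by unfolding those definitions.
example : 2 + 2 = 4 := rfl
```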

The scientific method, in some respects, is the complete opposite. In science we do not have the freedom to decide the underlying truths of the universe. Instead, the only information we have to go on is our observations, which are the consequences of the underlying truths. We observe true things about the world, and it is, broadly speaking, the goal of science to figure out what the axioms of the universe must be such that we would make those observations.

In this way, science is like maths in reverse.

Having constructed this dichotomy, it’s (hopefully) interesting to consider where moral reasoning fits in. When comparing moral frameworks, are we mathematicians, extrapolating from our core fundamental beliefs, or are we scientists, trying to find a coherent ruleset which justifies our observed intuitions of right and wrong?

For some people, particularly those who base their moral compass on religion, it might look more like mathematics, with the word of a God or Gods providing solid axioms.

For me personally, moral reasoning looks a lot more like science. If someone suggests a moral framework, Utilitarianism for example, I find myself making arguments consisting of counterexamples; observations such as “Utilitarianism can lead to the phenomenon of a happiness pump, which goes against my intuition of right and wrong, and so, for me, this is a flaw of Utilitarianism”. Furthermore, when I like a moral framework, it is not because it follows logically from a fixed set of beliefs I hold sacrosanct, but because following it, at least in the scenarios I’ve considered, tends to result in what I intuitively believe are good outcomes.

So Where Do We Go From Here?

This entire post has been incredibly abstract. All of the subtleties and nuances of individual moral questions and positions have been largely ignored, and that’s on purpose. The reason for this choice is to make it clear that the line of reasoning we’ve followed is universal. The actual process of moral reasoning is fundamentally identical, regardless of the specific political or philosophical assumptions going into it. It is, in my opinion, an excellent example of what everybody craves when trying to convince others of their opinion: common ground. It is by recognising that our ideological opponents are performing the same underlying procedure as us that we can more precisely pinpoint where disagreement stems from. Thereby we can better empathise with alternative positions, and advocate more convincingly for our causes, and I think that’s pretty neat.


Morgan (she/her) is a software engineer and Computer Science graduate from King's College, Cambridge.