On Morality

A Derivation of Objective Morality

Does morality exist without God or gods? Is there any objective notion of right and wrong in a purely naturalistic universe?

Many people contend that there is no objective moral truth, that morality is relative to a culture, for example. They say we cannot pass moral judgment on another group’s behavior because there is no true and universal right or wrong. This is called moral relativism. It is an appealing and perhaps intuitive view for today’s liberals and secularists. It seems to promote diversity, inclusion and acceptance of all people by recognizing everyone’s personal truths.

Yet moral relativism, if true and taken to its logical end, implies that anyone is justified (or at least not “wrong”) in oppressing and subjugating other people. For if morality is relative, who is to say it was wrong for the America of old to engage in slavery? Why should people fight for LGBT rights when, for many religious Americans, those rights are in direct conflict with their moral positions? Is it just a matter of moral colonialism, where we have to fight other competing moralities and hope the more “modern” ones win out? But then why should we bother, if it’s all relative? Can we say we’ve made any moral progress at all in the course of human history?

Clearly, moral relativism has problems. The alternative, of course, is objective morality. If there is an objective morality, a way of judging actions as right or wrong that is independent of humanity, then we can indeed have logically consistent notions of moral progress. We can say that society has, in fact, made moral progress from a past when slavery wasn’t given a second thought. Moreover, we can feel justified in pursuing moral positions knowing that what we are doing is the objectively right thing to do, and not just what feels right for me.

But what would objective morality look like? If humans have to invent objective morality, then it’s not really objective, is it? If it exists, it must be truly fundamental, something independent of any uniquely human characteristics. It can’t, for example, depend on human empathy, since empathy varies considerably from person to person.

The Philosophy of Morality

There have been previous attempts by philosophers at describing an objective morality. Immanuel Kant, a German philosopher (1724–1804), said that a moral rule is one that could be rationally universally willed. For instance, a moral rule might be “do not lie” and so long as such a rule could be rationally (i.e. without self-contradiction) willed for all humanity to obey, then it is moral. However, the rule “lie whenever it helps you” would be immoral since it would lead to irrational self-contradiction when universally applied. As soon as you conceive of a world in which everyone lies to reach their goals, everyone else you encounter will likewise be lying and thus there will be no reliably true statements, and your rule is then self-defeating.

As appealing as this theory is, since it is based on reason alone, it unfortunately suffers from some major problems, namely that it doesn’t at all account for the outcomes of actions. To Kant, lying is always wrong in all circumstances because lying, when universally willed, would be irrational. Yet, we can all imagine exceptions to a rule against lying: if a serial killer knocks on your door wondering where your neighbor is, should you lie and tell him he went left when he really went right? Clearly the outcome of lying in this case would be much better than telling the truth and getting your neighbor killed, but to Kant, outcomes do not determine moral judgments.

The other main philosophical approach to morality, in direct opposition to Kant, is consequentialism, an umbrella term for any moral theory that determines the morality of actions based on the consequences of those actions. Here, actions are judged based on how good or bad the outcomes are. Take the previous lying-to-a-murderer scenario. For a consequentialist, the moral action in this case is clear: lie to the murderer! The outcome is far better than your neighbor being murdered.

But consequentialism suffers from many of the issues that Kant’s theory addresses, namely our intuitions about intentions. If someone intends to kill you but his gun malfunctions and you get away, most people agree that his intention is still morally objectionable regardless of whether any bad outcome actually occurred. We certainly don’t want this attempted murderer to remain free in society, since he is likely to succeed at some point. Additionally, consequentialism struggles to provide a satisfactory method of judging what exactly constitutes a good versus a bad outcome. It’s easy to assume murder is a bad outcome, but why? What is the rational basis of that determination?

So is there any way to reconcile these disparate theories, each of which captures an important side of morality (intentions and consequences), while remaining firmly based on reason and rationality? I believe there is.

Inspiration From Nature

Virtually every conception of morality is based on the idea of every person following some set of rules, a code of conduct. For most secular philosophers, these rules are purely based on rationality, whether it’s Kant’s demand for rationally universalizable willing or a consequentialist’s demand for good outcomes.

Even bee colonies could be said to have a sort of morality. Each bee has a moral code of conduct that supports the proper functioning and flourishing of the bee hive. If a single bee had a faulty morality where it killed the queen and stole all the honey, that colony would fail and the perpetrator’s own genes would be wiped out. Of course this morality only extends to bees of the same colony since they all have a lot of genes in common. In modernity, humans are not necessarily concerned with reproductive success and maximal gene propagation, although we’re still subject to the emotional baggage of our evolutionary past (e.g. empathy, xenophobia).

Yet objective morality could still be some internal code that each individual in a community uses to support the proper functioning of that community. And throughout history, humans have expanded the inclusiveness of their communities so that moral rules that once applied only to white males, for instance, now apply to everyone. However, this hypothesis has some issues: what does “proper functioning” mean? This looks like the same problem consequentialism has in defining what “good” means.

And objective morality can’t be any form of utilitarianism, in which the right thing to do is whatever maximizes happiness, since that is human-centric and therefore not universally applicable to all conceivable sentient life forms (it also suffers from many other philosophical issues). And again, it suffers from the problem of defining an objective notion of “happiness.”

But rational self-interest in the form of a cooperative system of individuals (like a bee colony) seems like it has merit as a moral theory. Although morality can’t be about survival, reproduction or what merely allows us to be happiest, perhaps it can be about the very nature of being a rational human being: we make decisions, we have projects and goals. Maybe for some people their goal is just to survive; for others it may just be happiness. Whatever one’s particular goals are, morality might be a code of conduct that allows all of us to have the best chance at accomplishing our goals. Indeed, this is roughly my project here: to derive a moral theory, grounded in a rational self-interest in maximizing our freedom, that preserves our intuitions about good and bad intentions and about the consequences of acting on those intentions.

Foundations of Morality

Before we can ascertain the existence of morality, we need to define some terms.

Let’s start with a definition of what exactly morality is.

Morality is the determination of whether the intentions and actions taken by agents with respect to other agents are right or wrong.

There are some things we need to unpack in this definition and I need to further define what I mean by “agent.” For now, just take agent to mean a human being with rational decision-making capacity. Let’s get an intuition for this definition. Imagine a single human inhabiting a planet with absolutely no other life forms; no other humans, no plants, no animals. Can this person do anything morally right or wrong? Let’s say this person is immortal to simplify further and rule out any possibility of suicide; in this world, I cannot think of any action, no matter how apparently disturbing it might be for the invisible onlooker, that could be described as immoral. Clearly, morality necessarily depends on actions that affect other people. Likewise, we can imagine this same lifeless planet with two people, but at opposite ends that never meet or are aware of each other’s existence. As long as they never affect each other in any way, nothing they do can be judged as moral or immoral. Now let’s tackle exactly what an agent is.

An agent is an independent being with the capacity for rational decision-making and the ability to act on those decisions in ways that affect the universe.

In essence, an agent is defined to be a being with the properties of sentience and intelligence that humans have. AIs would also qualify as agents. Notice that I have a clause about affecting the universe in there (here I take universe to mean all of objective reality). This is because we don’t want to concern ourselves with imagined intelligent beings that merely receive sensory information but have no ability to move or cause any consequences in the universe whatsoever. A moral agent is then an agent that can not only affect the universe, but can affect other agents in some way. The intuition here is that we don’t morally judge a shark for killing a human because sharks (and other non-human animals) don’t have agency, i.e. the characteristics of being an agent, and thus they are not morally culpable beings [*1].

Now that we have set up our discussion of morality, we need to start solving the real problem at hand: deducing a theory that allows us to determine the moral rightness or wrongness of actions and intentions. We’ll begin with an important principle of how self-interested rational agents ought to behave.

The Autonomy Principle: It is rational for an agent to behave in such a way that maximizes its autonomy [*3].

Here I am using autonomy to mean degrees of freedom (DoF), and I will use these terms interchangeably for the most part. Degrees of freedom are essentially the number of possible options you have in any given state of the universe. For instance, Pac-Man, who lives in a 2D universe, has at most 4 degrees of freedom (the options to move up, down, left or right). In any particular state of the game, he may have as few as one option (e.g. if he’s stuck in a corner).

[Image: a game state in which Pac-Man has 2 DoF, forward or backward.]
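
To make the counting concrete, here is a minimal sketch (in Python, with an invented grid layout) of how DoF could be tallied as the number of open neighboring cells in a given game state:

```python
# Minimal sketch: count Pac-Man's degrees of freedom (DoF) as the number of
# open neighboring cells in a toy 2D grid. The layout below is invented
# purely for illustration ("#" is a wall, "." is an open cell).

grid = [
    "#####",
    "#...#",
    "#...#",
    "#...#",
    "#####",
]

def degrees_of_freedom(grid, row, col):
    """Return how many of the four moves (up, down, left, right) are open."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    dof = 0
    for dr, dc in moves:
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[r]) and grid[r][c] == ".":
            dof += 1
    return dof

print(degrees_of_freedom(grid, 2, 2))  # center of the open area -> 4 DoF
print(degrees_of_freedom(grid, 1, 1))  # corner -> 2 DoF
```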

I think it is uncontroversial that Pac-Man would be better off if he had more autonomy. He is more likely to get eaten by a ghost if he is stuck in a corner with only 1 DoF. Given that the objective of the game is to collect all the pac-dots, Pac-Man must occasionally enter game states where he has a minimal number of DoF, but in general, Pac-Man is more likely to succeed if he maintains a higher average DoF. Moreover, if Pac-Man eats a power-pellet (one of the bigger dots), he temporarily increases his DoF by making the ghosts edible. But let’s reason through why exactly it is rational to maximize DoF.

  1. An agent at any given state of the universe has some non-zero number of degrees of freedom (because having zero DoF is incompatible with being an agent).
  2. An agent always has some policy that determines how it behaves given the state of the universe. The particular policy may cause a resulting increase, decrease, or maintenance of the agent’s DoF.
  3. Let’s assume an agent’s policy says to always take actions that decrease its DoF. Such an agent will eventually decrease its DoF to zero, resulting in a self-defeating state (illustrated by the simulation sketch following this list).
  4. Thus, it is irrational for an agent to act in such a way that always decreases its DoF.
  5. Given the capricious nature of the universe and the second law of thermodynamics, the universe will tend to decrease the DoF of any given agent over time.
  6. Thus, the only rational policy is to maximize DoF in any given state of the universe so as to minimize the probability of catastrophic loss of agency.
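
Here is a toy simulation of steps 3, 5 and 6. The specific numbers (a starting DoF of 10, a 30% chance per step that the environment removes a DoF) are invented purely for illustration; the point is only that a DoF-decreasing policy collapses to zero, while a DoF-maximizing policy keeps a buffer against the universe’s erosion:

```python
import random

# Toy simulation: an agent whose policy tends to decrease its DoF versus one
# that tries to maximize it, in an environment that randomly erodes DoF
# (standing in for the capricious universe of step 5). All numbers are invented.

def steps_until_agency_lost(policy_delta, start_dof=10, decay_prob=0.3,
                            max_steps=50, seed=0):
    """Return how many steps the agent keeps a non-zero DoF (capped at max_steps)."""
    rng = random.Random(seed)
    dof = start_dof
    for step in range(max_steps):
        dof += policy_delta              # effect of the agent's own policy
        if rng.random() < decay_prob:    # occasional external loss of a DoF
            dof -= 1
        dof = min(dof, start_dof)        # assume DoF cannot exceed some maximum
        if dof <= 0:
            return step + 1              # agency lost here (the self-defeating state)
    return max_steps

print("DoF-decreasing policy keeps agency for", steps_until_agency_lost(-1), "steps")
print("DoF-maximizing policy keeps agency for", steps_until_agency_lost(+1), "steps")
```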

Of course humans don’t live in a 2D Pac-Man world, and degrees of freedom are less easily quantified as a finite number of options that we possess in any particular state of the universe. This is why I’ll tend to use the term autonomy in the context of “real life” and degrees of freedom when it is easier to quantify, as in some of our thought experiments like Pac-Man. We can, however, recognize that Bill Gates, as one of the richest people in the world, has a greater number of DoF (more autonomy) than a slave. It is impractical to try to quantify the difference in DoF, but if you can accept that there is indeed a big difference in autonomy between these two people, we can make progress in understanding morality.

I want to emphasize that in our discussion, we are only concerned with moral agents and moral decisions. While it may be irrational for an agent to make some decision that decreases its autonomy (e.g. suicide), this is not a moral decision since it, in most cases, will not affect other agents.

Intentions and Reality Models

An essential component of this theory is to address the morality of intentions. I noted earlier that intentions and actual actions appear to be subject to moral judgment independently. As I will argue here, however, this is not the case. All moral judgments are based on intentions. The consequences of any action will in fact depend on the context of the intention under which it was taken.

So far we have yet to conclude what exactly constitutes moral rightness or wrongness (that is, in fact, our ultimate goal), but for this discussion of intentions, just assume there are some basic moral principles that most people in developed modern societies accept (e.g. don’t unnecessarily kill people, don’t steal things, etc.).

Imagine you find yourself in some horror film situation in which a man is stuck on a platform about to be dropped into a giant cauldron of acid. A timer is ticking down to his certain demise unless you act to stop it. You find two big buttons, one labeled “Abort” and the other “Drop,” and in a frantic attempt to save him, you press the “Abort” button, since you have every reason to believe this button will abort the murderous contraption our poor victim has found himself in. Unbeknownst to you, however, the unseen villain has anticipated your arrival and purposely mislabeled those buttons. The “Abort” button is actually wired to the dropping mechanism, and the poor victim falls into the acid and suffers a horrible death.

Are you morally responsible for his death? That is, was your action in pressing the button morally wrong? We can recognize that you pressing the button did indeed cause his death (even if he would have died anyway), but intuitively, your action, despite the consequences, was not morally wrong; in fact it seems quite good. You intended to save his life; it is not your fault that the outcome was contrary to your intention, and you did your best. Unlike in the case of the attempted murderer whose gun malfunctioned, we recognize society would be better off with more people like you, more people with good intentions.

Intentions are central to moral judgments. If you’re just walking along your normal route to work and you accidentally step on a loaded, volatile handgun that then fires off a round into a nearby house killing an old lady, we wouldn’t hold you morally accountable for that. We call events like this accidents.

Thinking about scenarios like these leads to an important conclusion: that all of our (morally relevant) actions are based on intentions, and that all intentions are based on models of reality. Every one of us necessarily operates within a model of reality. A model of reality is our approximation of how the universe works, and encompasses our understanding of causality and expectations about what will happen (or is likely to happen) when certain actions are taken. My reality model says that when I press the black key labeled “K” on my MacBook Air with this word document open, a letter “K” will be printed to the screen. My reality model is sophisticated enough to account for the rare instances in which this is not true. For instance, my cursor may not be focused on the document window, so pressing “K” will not result in the expected outcome. But when this happens, I use my model of reality (the knowledge contained therein) to troubleshoot the problem.

Scientists and engineers create models of reality for a living. We have equations that model the motion of fluids, and supercomputers that run models of the climate to give us weather forecasts. But every lucid human being is also a reality modeler. Every bit of new knowledge you acquire about the universe is integrated into your model of reality to make it better, so that your expectations about the universe will be more accurate and your intentions will more reliably lead to their intended outcomes.

No one has access to the complete state of the universe. No one receives all the sensory information necessary to predict the outcomes of their actions with certainty. Our only recourse is to approximate the universe based on our limited knowledge.

In our horror film thought experiment, you thought that the “Abort” button would cause his life to be saved. That is, in your reality model, pressing the “Abort” button would save his life. Here we’ve stumbled upon another important conclusion.

An intention (an intended action) is a definite action taken in an agent’s reality model.

An intention is an action imagined within your reality model, together with the outcomes you imagine it producing. You imagined that pressing “Abort” would save his life.

The Reality Model Principle: When an action taken within an agent’s reality model is instantiated in objective reality, it becomes a morally accountable action.

Alternatively, if in your reality model pressing the “Abort” button kills the man (this is an intention), then when you instantiate that intention in the real world (i.e. actually press the physical button), you have committed a morally accountable action, and in this case it would be morally wrong even though the physical action taken was the same in both cases.
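
A small sketch may help make the distinction concrete. It evaluates the same physical action once against the agent’s reality model and once against objective reality; the button labels and outcome strings are just the thought experiment’s details rendered as toy data:

```python
# Sketch: the same physical action ("press Abort") is judged by the outcome the
# agent's reality model predicts, not by the outcome objective reality produces.
# The mappings below are invented toy data mirroring the thought experiment.

your_reality_model = {"Abort": "victim is saved", "Drop": "victim is killed"}
objective_reality  = {"Abort": "victim is killed",  # the villain swapped the wiring
                      "Drop": "victim is saved"}

def judge_intention(action, reality_model):
    """An intention is the action as imagined within the agent's reality model."""
    predicted_outcome = reality_model[action]
    return "morally good" if predicted_outcome == "victim is saved" else "morally wrong"

action = "Abort"
print("Outcome you intended:", your_reality_model[action])   # victim is saved
print("Outcome that occurred:", objective_reality[action])   # victim is killed
print("Judgment of your intention:", judge_intention(action, your_reality_model))
```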

Our understanding of reality models, intentions and instantiated intentions (actions) gives us huge insight into how to make moral appraisals. Unless you’re an omnipotent and omniscient God (or Pac-Man), you can only have an approximation of reality as your model. The universe is too complex and our capacity too limited to fully understand the consequences of all our actions. But for all but the most trivial of actions, we usually have a fairly accurate reality model of their immediate results, and thus most of the time we are fully accountable for our actions.

Moreover, understanding the reality model principle helps us interpret historical moral progress. Moral progress is our collective improvement of our reality models, leading to more rational decisions. Slavery used to be acceptable to millions of people. In their reality models, slaves were fundamentally different, sub-human, and thus not subject to the same treatment as their overlords. As time progressed and our understanding of humanity improved, some people began to realize that there is no fundamental difference between slaves and masters. They had to update their reality models, and as soon as they did, they became morally responsible if they continued to maintain slavery.

The reality model principle tells us how we should continue to make moral progress. We should learn as much as possible about the universe, about objective reality, so as to have the most accurate approximations of it as possible. Science and the philosophical field of epistemology now seem quite relevant to morality and moral progress.

The Role of Culture

Some of you may still feel strongly that morality is culturally-dependent, that one’s cultural environment defines what is rational. This is absolutely true. As we’ve noted, an agent immersed in a particular culture will have a culturally-derived reality model, and thus what is rational within his or her reality model may or may not be rational with respect to objective reality (it depends on the accuracy of the reality model). Some cultures with elaborate mythologies and supernatural beliefs will likely have very inaccurate reality models, but as long as they are behaving rationally with respect to their reality models, we cannot judge them as being morally accountable (or responsible). We can, however, judge the objective morality of their behavior.

Take, for example, a culture that amputates the limbs of every other child born. Let’s say that this culture does so because they believe that their god commands this based on their interpretation of some holy text from their ancestors. The members of this culture all have a reality model in which amputating the limbs of half of all newborns is rational. We cannot say that they are being knowingly immoral (with respect to their reality model), but we can say that their behavior is objectively wrong with respect to objective reality. That is, we can say they are not morally responsible for their irrational (immoral) actions. Equivalently, we say their intentions are moral but their instantiated intentions (actions) are objectively immoral. Remember, an action that affects other people is a morally relevant action, and irrational actions are immoral.

If we have a more accurate reality model and can thus make the determination that their behavior is immoral, then we have a moral impetus to try to help them improve their reality models. Depending on your perspective, this may be called education or proselytizing. In any case, the important concept is that there are two types of moral judgements: moral accountability (or responsibility), which is whether or not an agent is behaving rationally with respect to their reality model, and judgements with respect to objective reality (i.e. is some behavior rational given a perfectly accurate reality model).

Applications

We have now developed enough of the pieces to complete this moral theory, which is both descriptive and prescriptive. Namely, our last missing piece is what actually determines the rightness or wrongness of intentions and actions. So far we’ve stuck with some intuitive notions to let us build out other aspects of the theory, but we’re now faced with deriving a logical and precise notion of moral judgment. We’ll return to the notion of degrees of freedom (DoF) and autonomy.

I’m going to start by simply laying out a succinct version of this theory, including the final conclusions, and then I’ll work backwards in deriving it.

The Autonomy Theory of Morality:
It is moral for an agent to intend to act in such a way that maximizes the net autonomy of all agents that its intention will affect. It is immoral for an agent to intend to act in such a way that decreases the net autonomy of all agents that its intention will affect.

Here’s an example applying this theory: Person A kills Person B because Person A doesn’t like Person B’s pants. Is this objectively immoral? Yes, because Person A just decreased Person B’s autonomy (to zero) while doing nothing for Person A’s own autonomy. That’s easy. Let’s try the lie-to-a-murderer example. A man named John lies to a murderer named Freddie regarding the whereabouts of his potential victim, named William. Is this lie immoral? No, because this action (lying) maximizes the autonomy of all agents involved. The murderer would otherwise kill an innocent person, reducing the victim’s autonomy to zero, while gaining nothing in autonomy himself. Let’s make this more concrete and say that in this example, each person has just 3 degrees of freedom (turn left, turn right, kill person).

Initial Degrees of Freedom tally:
John: 3, Freddie: 3, William: 3, Total = 9

Now let’s update the degrees of freedom for each alternative scenario.

Scenario A: John tells the truth and Freddie kills William.
John: 3, Freddie: 3, William: 0, Total = 6

Scenario B: John lies to Freddie, Freddie is arrested by the police and put in jail. William survives.
John: 3, Freddie: 2 (left, right), William: 3, Total = 8

8 DoF > 6 DoF, therefore Scenario B (lying to the murderer) is objectively morally superior to Scenario A.
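
For readers who prefer it spelled out, here is a minimal sketch of the same tally in code, using the 3-DoF assumption from the example above:

```python
# Sketch: tally net degrees of freedom (DoF) for the two scenarios above,
# using the assumed 3 DoF per person.

scenarios = {
    "A (truth, William killed)": {"John": 3, "Freddie": 3, "William": 0},
    "B (lie, Freddie jailed)":   {"John": 3, "Freddie": 2, "William": 3},
}

def net_dof(dof_by_agent):
    """Net autonomy of everyone the intention affects."""
    return sum(dof_by_agent.values())

for name, outcome in scenarios.items():
    print(f"Scenario {name}: total DoF = {net_dof(outcome)}")

best = max(scenarios, key=lambda name: net_dof(scenarios[name]))
print("Higher net autonomy (more moral under this theory): Scenario", best)
```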

We don’t have to do moral calculations like this for every decision, of course; in the real world, calculating exact DoF is nearly impossible. But as long as our reality models have an approximate understanding of the differences in overall autonomy within a system of interacting agents, we can make the best (moral) decisions.

It may seem that a difference of 2 DoF in this murder scenario is disturbingly small; we might expect to see a much more dramatic difference in the moral calculus of murder. In this particular example, we only looked at the ΔDoF (change in DoF) after a single event. If we integrate the ΔDoF over a longer period of time, the expected ΔDoF would be much greater. A known murderer is likely to engage in future autonomy-reducing behaviors, so having him in jail would prevent these future losses of autonomy. And of course, in our greatly simplified example, each person only has 3 DoF; in reality, everyone generally has many more DoF than that, so murder would result in a substantial DoF decrease (to zero).
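
To see roughly how integrating over time widens the gap, consider a back-of-the-envelope sketch. The numbers (a 20% chance per year that a free murderer kills again, a 50-year horizon, 3 DoF per victim) are invented solely to illustrate the point:

```python
# Back-of-the-envelope sketch: expected cumulative DoF lost over time.
# All numbers are invented: 3 DoF per victim, a 20% chance per year that a
# free murderer kills again, and a 50-year horizon.

VICTIM_DOF = 3
P_KILL_PER_YEAR = 0.2
YEARS = 50

# Scenario A: Freddie stays free after killing William.
expected_loss_a = VICTIM_DOF + P_KILL_PER_YEAR * YEARS * VICTIM_DOF

# Scenario B: Freddie is jailed; he loses 1 DoF (the "kill" option) and there
# are no further victims.
expected_loss_b = 1

print("Expected DoF lost over 50 years, Scenario A:", expected_loss_a)  # 33.0
print("Expected DoF lost over 50 years, Scenario B:", expected_loss_b)  # 1
```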

Importantly, the magnitude of the ΔDoF with respect to some action taken does reflect some notion of the magnitude of the rightness or wrongness of that action. In virtually all judicial systems, murder is punished far more severely than petty theft because we believe that murder is “more wrong” than theft, and certainly has a much worse consequence. Similarly, the ΔDoF after a murder is significantly greater than after a lie or theft. Moral intentions are accompanied by a ΔDoF vector with a magnitude and a direction (increase or decrease in DoF).

Again, this theory only concerns itself with intentions. Yet it still accounts for better and worse consequences so long as agents have accurate reality models.

I’ve described the succinct version of this objective moral theory and even applied it to a classic moral dilemma situation. But we still don’t have a good sense for why it is rational (and therefore moral) for an agent to maximize autonomy of other agents. We’re going to solve that now, but to do so will require some unintuitive thinking.

The True Nature of Personhood

It is useful to think of a single human intellect as an instance of software running on the hardware of biological neurons. An artificial intelligence would certainly be software running on either traditional silicon-based computer hardware or something more advanced. Intelligence is just information processing. In theory, then, I could download my intellect, that is, everything that constitutes me as a “self,” and upload it to different hardware. Now there would be two of me. Which one of them is actually me? The first instinct is that the original, biologically based version of me is the real me, yet from an information processing perspective it would be impossible to differentiate the two, and both would perceive themselves as the real me. A very similar philosophical conundrum is the thought experiment in which I am replicated atom for atom at a different location. Which version would be me?

What makes a rational agent, say a human being, depends entirely on the physical composition of his or her brain and the activity therein. If you lose just one brain cell, do you still consider yourself the same person? Yes, and still yes if you lose quite a few. Some people suffer rather large losses of brain cells due to a stroke or other neurological injury, yet we consider them to be the same person as before (but with perhaps some motor or sensory deficits) and they certainly experience themselves as being the same “self.” Certainly if a murderer in prison suffers a stroke with consequent loss of neurons, we don’t let him go free because he is a different person now, a different self.

If you instead consider the self to be the information processing itself, i.e. the pattern of electrical activity, rather than the physical composition of neurons, this too isn’t philosophically sound. If a person takes a mind-altering drug, suffers an electric shock, etc., their neural activity is significantly altered, and yet they are still considered the same person, the same “self.”

As we continue with these types of thought experiments, we discover there is no conceivable threshold of neurological difference across time and space that can define an individual agent (“self”). There is no logically coherent way to differentiate between myself and yourself. We can have a notion of similarity or relatedness between agents, but it is impossible to define a hard boundary between agents.

The strong conclusion is that there is no “me”, there is no “you,” there is no “self.” Self is an illusion. The perception of self is merely what it feels like to be an independent information processor aware of its own information processing. There is no unique cosmic “self” that transcends biology or physics. More mathematically, there is no property that could conceivably define a “self” that persists through time and space. Despite our continued experience of our “self” through time, our brains continually change in physical shape, composition and activity.

The Agent Equivalency Principle: There is no mathematically invariant property between rational agents. [*2] It follows then that all beings that qualify as agents are equivalent with respect to the properties that make them agents.

Corollary: Since all agents are equivalent with respect to their agency, all agents are morally equivalent.

This leads to an interesting conclusion: since there is no real difference between agents in terms of what makes them morally relevant beings, the only thing that really exists as a uniquely definable mathematical structure is agency itself.

Let’s briefly consider the objective theory of morality in the context of the teleportation paradox. If we could exactly copy a human, perhaps named John, atom for atom to another location, certainly both would claim to be the real John. So John 1 and John 2, recently cloned, are atomically identical. If John 1 intends to kill John 2 in his reality model (an imagined action), John 2 must necessarily be intending the same thing, and thus they would kill each other in John 1’s reality model. It is a self-defeating intention. Thus it is irrational for John 1 to intend to kill John 2; and since this decision to kill John 2 would affect another agent, it is a morally relevant action. An irrational moral intention is immoral. Suicide under this model is irrational but not immoral since it doesn’t affect other agents.

We’ve established under this moral theory that it would be irrational and immoral for one of two identical copies of an agent to kill the other. But what about less hypothetical situations of one agent intending to kill another non-identical agent? In this case, there’s no reason to expect both non-identical agents are intending to kill the other simultaneously (they have different brain compositions and activity); it’s less obviously self-defeating.

However, we’ve already stated through The Agent Equivalency Principle that there is no meaningful difference between any agents with respect to what makes them agents (one of those properties being rationality). Thus whether any two agents are atom-for-atom identical or not, as long as they are both agents, it would be irrational (and thus immoral) for one to kill the other in most circumstances. This is precisely because they are morally equivalent; any pair of agents is equivalent, so intending to kill another agent would be self-defeating since you are the other agent. Or, less abstractly, you could be any other agent that you interact with, hence it would be irrational and self-defeating to kill another agent.

Any perceived “self” is not responsible for its instantiation and is equally likely to have been (or will be) instantiated in any suitable “hardware” (with whatever degrees of freedom that hardware provides). Any perceived self should behave such that any other agents could also be itself.

Putting it all together: all agents are rational by definition, rationality is an objective property, and from the Autonomy Principle agents ought to behave in such a way as to maximize their autonomy (degrees of freedom); therefore, any interaction between agents should be mutually beneficial in terms of autonomy, given that all agents must behave in the same way under identical circumstances. In no ordinary case would it be rational for one agent to kill another, steal from another, etc., since if that were rational, all other agents would behave in the same way, which would be self-defeating (reducing autonomy for everyone).

As a consequence of the Agent Equivalency Principle, moral intentions must take into account impact on future agents. Agents that exist now are no more privileged than those that will exist in a hundred years. Humans living today, for example, are morally obligated to prevent catastrophic climate change that would severely diminish the autonomy of future generations. Of course, the practical limits of our reality models define the scope of moral judgments over long time spans.

Conclusion

An agent is not defined by a specific and exact composition of brain matter or the electrochemical activity therein; it is defined by its self-awareness, rationality, and higher-order thinking (the properties of agency). Human beings and future AIs have these characteristics, and all agents are equivalent and indistinguishable mathematically and philosophically. The perception of a unique self is an illusion. All rational agents should behave in ways that are non-self-defeating, and hence in ways that maximize autonomy when their actions affect other agents.

Although this has been described here in somewhat technical terms, this derivation of an objective morality need not be so abstracted from our daily lives. The key points are that intentions matter, accurate reality models are important, and all of us are equal with respect to agency and morality. A useful heuristic when assessing the moral value of an intended action is to imagine yourself as equally likely to be instantiated as any person in the collection of individuals that your intended action would potentially affect. If your intended action would lead to a probable decrease in your autonomy when you could be anyone involved, then your intended action is probably irrational, and therefore immoral.
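
This heuristic amounts to a simple expected-value calculation. The sketch below assumes you are equally likely to be any of the affected agents and reuses the invented 3-DoF numbers from the earlier murder example:

```python
# Sketch of the heuristic: compute the expected DoF of a randomly chosen
# affected agent under each intended action, assuming you are equally likely
# to be any of them. Numbers reuse the invented 3-DoF murder example.

def expected_dof(dof_by_agent):
    """Expected DoF if you could be any affected agent with equal probability."""
    return sum(dof_by_agent.values()) / len(dof_by_agent)

outcome_if_truth = {"John": 3, "Freddie": 3, "William": 0}  # Freddie kills William
outcome_if_lie   = {"John": 3, "Freddie": 2, "William": 3}  # Freddie is jailed

print("Expected DoF if John tells the truth:", round(expected_dof(outcome_if_truth), 2))  # 2.0
print("Expected DoF if John lies:           ", round(expected_dof(outcome_if_lie), 2))    # 2.67
# Lying gives the higher expected autonomy for "whoever you might be," so it is
# the more moral intention under this heuristic.
```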

Notably, there are no moral absolutes or rules in this theory. Unlike Kantianism, the morality of intentions and actions depends on the particular state of the universe, the options available to the agent in that state, and the expected future states of the universe caused by actions. By analyzing and averaging over many types of states, we can develop moral heuristics such as “do not lie” or “do not kill” that apply as rules of thumb, but there are almost always states in which lying or killing is the most rational and moral action.

Whether or not this essay has convinced you that what I’ve presented here is a correct derivation of objective morality, I think we will make progress on this front, and we will one day have a complete, rational derivation of morality. And if objective morality is proven to exist, then all rational, intelligent beings in the universe should be capable of discovering it as well.

*Footnotes:

  1. This definition of agency could be made more rigorous with an understanding of the theory of computation and information processing. Human beings are, irrespective of processing speed or memory capacity, Universal Turing Machines. That is, with the help of some tools (e.g. paper and pencil), we are capable of computing any Turing-computable function. Non-human animals do not have this property; they do not have the capacity to run any programs other than the one they are genetically determined to run. Thus any being with the property of being Turing-complete is an agent; this includes human beings, and potentially aliens (if they exist) and artificial intelligences. Of course there are some other special characteristics of intelligence that we don’t yet fully understand that contribute to agency.
  2. This is a familiar concept in the mathematical field of topology, which deals with the study of continuous deformations of mathematical structures and their invariants. If we represent any two agents as a topological space, there is no mathematical invariant between them. One can always find a way to continuously transform one into the other.
  3. Here’s a slightly different formulation:
    a) The state of the universe changes through time, and these changes external to an agent may affect an agent’s autonomy (increase, decrease, or maintain) without influence from the agent.
    b) An agent may influence, to some degree, the external state of the universe, forcing it into a new state, which may change its autonomy (increase, decrease, or maintain).
    c) If an agent were to behave in such a way that does not maximize its autonomy through time in any given state of the universe, its autonomy would tend toward zero, degrading its agency.
    d) Thus an agent must behave in such a way as to maximize its autonomy; otherwise it will eventually cease to be an agent.
