Cracking the Moral Code

Feroze Shah
2 min read · May 13, 2018

Photo by Blake Connally on Unsplash

Much of the discussion on morality and Artificial Intelligence has centered on how best to “code” morality into machines that will eventually be tasked with making decisions that affect human lives. But implicit in this framing is the assumption that morality is something that can be crystallized, or at the very least approximated and modeled, in a consistent way.

In many ways, this exercise of choosing a “morality” for machines rests on an unrealistically simple understanding of the complexity of our own moral convictions. But the inability of machines to match us in this regard is not so much the result of a higher form of intelligence they have yet to achieve as of the fact that they do not share our limitations.

Human inconsistency in applying “rules” to moral dilemmas is well established. Our varying reactions to the trolley problem when its framing is modified are perhaps the most widely cited examples of this seemingly irrational and inconsistent behavior. But there is, in fact, method to our madness. Much of our moral interpretation, and the inconsistency that comes with it, is built on context, emotion and flawed memory.

We consistently update our moral reactions to current events by reframing them in the context of very specific past incidents that we feel are fair parallels. More importantly, we need flexible moral codes to account for our other human traits that cannot follow rational paths. For instance, in order to love someone, it is important that we reframe events in ways that justify their actions, or prioritize their well-being, to the maximum extent possible. Our liking or disliking of people prompts aggressive or defensive reactions in our moral judgements.

At other times we are simply inconsistent because our imperfect memory cannot recall how we perceived and categorized previous incidents. For us, it is perfectly legitimate to apply our moral codes inconsistently across two separate incidents, because each is governed by an altogether different equation rather than by the same one with a few variables adjusted.

Machines share none of these “shortcomings”. Coding a consistent morality would be difficult enough as it is, but in this case it would also require significantly changing what “morality” means to us as humans. As a result, for machines to share our morality, they must intentionally be made more “flawed”, not necessarily more intelligent. As much as we like to think of ourselves as the pinnacle of consciousness and intelligence, it is, ironically, our imperfections that define our humanity.
