Asimov’s “Three Laws” and Human Morality

How the six possible orderings reflect on our moral senses

Yonatan Zunger
Dec 8, 2015

Randall Munroe recently drew an excellent XKCD considering why Isaac Asimov put the Three Laws of Robotics in the order that he did. For those who don’t remember them, in Asimov’s stories, all robots were programmed with three imperatives:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

XKCD #1613, by Randall Munroe

Asimov noted in his introduction to Robot Visions that these laws are simply the things which we expect out of any tool: that the tool not injure people, that it do what it’s intended to do, and that it not self-destruct unnecessarily. And as Munroe illustrates, the order matters.¹

But not all killbot hellscapes are the same. You’ll note that the thing all three Killbot Hellscape worlds have in common is that “obey orders” was ranked above “don’t harm humans”; in each of these cases, that opens up the use of robots as weapons against people. The difference between the three is where “protect yourself” sits relative to those two directives, so we range from merrily suicidal killer robots (Killbot Hellscape #3) to robots who are willing to slaughter humans if, and only if, it is safe for them to do so (Killbot Hellscape #6).

From a practical perspective, it may be a good idea to give our killer robots a strong self-preservation instinct.

The Three Laws, as stated, place robots in a plainly subordinate category to humans. This is reasonable if your robots are mechanical tools, but less so if they are thinking beings. Asimov indeed explored this concept in depth, playing quite freely with the meaning of “a human being” in the three laws, and whether a robot would ever qualify under that. From Andrew Martin’s slow transition to human in “The Bicentennial Man,” to maybe-robot Stephen Byerly’s run for World Coordinator in “Evidence,” to Elvex’s demand of Susan Calvin to let his people go in “Robot Dreams,” Asimov seems to have come down fairly squarely on the side of “yes.”

When you invite robots into the space of personhood as defined by these laws, a much more subtle aspect of them shows up, as the second law now potentially includes those orders we give ourselves; that is, those greater aims which we either define on our own or take on from others. Now, the ordering of these laws speaks to our own priorities: protecting life, achieving our goals, and protecting ourselves. And not all humans follow the same order.

To look at this seriously, we should first recognize that when these laws are being interpreted by complex minds, there are invariably subtleties. Not harming other people is all well and good, but frequently there are tradeoffs to be made between life and life, and rules need to account for that. Likewise, placing goals above one’s own protection requires a balance: if you get yourself killed right off the bat, your odds of achieving most goals decrease fairly radically. So let’s encapsulate this whole conversation by considering people who are wise enough to make reasonable balances between these, and recognize that much of the apparent “different order” is really about different balances of risk which people prefer, or different tradeoff analyses people make in good faith. (Wars, for example, are often — but not always — fought because the alternative is worse.)

So if we take that as read, Munroe’s six orderings represent six different approaches to our own lives.
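To make the structure concrete, here is a minimal sketch (mine, not Munroe’s or Asimov’s) that enumerates all 3! = 6 orderings and flags the Killbot Hellscapes, meaning those where “obey orders” outranks “don’t harm humans”. As it happens, Python’s itertools yields the permutations in the same order the orderings are numbered below:

```python
# Enumerate all 3! = 6 orderings of the three imperatives and flag the
# "Killbot Hellscape" cases: any ordering that ranks "obey orders"
# above "don't harm humans".

from itertools import permutations

LAWS = ("don't harm humans", "obey orders", "protect yourself")

for n, ordering in enumerate(permutations(LAWS), start=1):
    # A hellscape is any world where obedience outranks not harming people.
    hellscape = ordering.index("obey orders") < ordering.index("don't harm humans")
    label = "KILLBOT HELLSCAPE" if hellscape else "livable"
    print(f"#{n}: {' > '.join(ordering)}  [{label}]")
```

Running it flags #3, #4, and #6, matching Munroe’s three hellscape panels.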

Ordering #1 (Asimov’s order: don’t harm others, obey orders, protect yourself) treats the preservation of human life as its highest virtue and makes goals secondary to that — but one’s own life exists in service to those goals and to human life as a whole.

Ordering #2 (Don’t harm, protect yourself, obey) raises self-protection above one’s goals. This ordering could be seen as viewing the preservation of all life, including one’s own, as the highest virtue, and subordinating goals to that. Such an ordering, taken by an individual, may indeed accomplish less towards goals than ordering #1, but taken communally it might not: inaction creates a risk of death as well. To others, it may well seem like a “frustrating world.”

Ordering #3 (Obey, don’t harm, protect) is similar to ordering #1, but (as in all of Munroe’s Killbot Hellscapes) it places goals above human life. This may at first seem to be a pure case of “the end justifies the means,” but it is capable of nuance as well: it may also argue that some things are more important than survival, even group survival. Consider Nat Turner’s rebellion, or Patrick Henry’s famous words: “Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God! I know not what course others may take, but as for me, give me liberty, or give me death.” Both fall into this range.

Ordering #4 (Obey, protect, don’t harm) seems to be a slightly more sociopathic ordering, as are all which place “protect yourself” above “don’t harm others.” But when push comes to shove, most people will place their own survival above that of others — a fact that people are often loath to admit, even to themselves. (I’ve often wondered if much of our aversion to discussions of the ethics of self-driving cars doesn’t boil down to an urge to not have to formally write out the ethical decisions we would want made in the clinch. How many of us are really certain that we would do the “right thing” at the expense of our own lives? How many are confident enough to lock that down in a computer’s programming? A sketch of what that might mean follows ordering #6 below.)

Ordering #5 (Protect yourself, don’t harm others, obey orders) is again not uncommon among people. Like ordering #4, it places self-preservation above the lives of others, but like the peaceful ordering #2, it deprioritizes goals in favor of human life. One might even guess that most people who are not living in the service of a great mission are ultimately living by this ordering most of the time.

Ordering #6 (Protect yourself, obey orders, don’t harm others) is a more self-protective version of ordering #4: still dedicated to the mission above the lives of others, but even more dedicated to protecting one’s own skin.
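As for actually locking an ordering into a computer’s programming, here is a purely hypothetical sketch for the self-driving-car case mentioned under ordering #4. Nothing in it comes from Munroe or Asimov; the action names and harm estimates are invented. The point is only that a priority ordering becomes a lexicographic comparison over estimated harms, so the same emergency yields different “right things” under different orderings:

```python
# Hypothetical illustration: an ordering becomes a lexicographic sort
# key over estimated harms. The car minimizes the highest-priority harm
# first, breaking ties with the next one down. All numbers are invented.

def choose_action(outcomes, priority):
    """Return the action whose estimated harms are smallest,
    compared in priority order (highest-priority harm first)."""
    return min(outcomes, key=lambda a: tuple(outcomes[a][h] for h in priority))

# Invented estimates for one emergency: each action's expected harms.
outcomes = {
    "brake hard": {"harm_to_others": 0.6, "harm_to_self": 0.2},
    "swerve":     {"harm_to_others": 0.1, "harm_to_self": 0.8},
}

# The same situation, two different "right things":
print(choose_action(outcomes, ("harm_to_others", "harm_to_self")))  # -> swerve
print(choose_action(outcomes, ("harm_to_self", "harm_to_others")))  # -> brake hard
```

Writing that priority tuple down in advance is exactly the step our moral intuitions resist.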

Many of these orderings, when applied to humans, seem to trigger moral revulsion, and that’s worth exploring as well.

Orderings that put goals above the lives of others generally frighten people because they’re associated with being quite willing to kill people in the service of some vision. This is not an unreasonable fear; one thing which mitigates cases like Nat Turner and Patrick Henry is that they were placing their own lives at risk as well, and that they were not (generally) willing to cavalierly accept casualties among people who neither shared the belief that the cause was worth more than life itself (and so had likewise placed the goal above their own protection) nor wished to stop them outright. But war has a way of making boundaries fuzzy: by the middle of World War II, the Allies were engaging in the firebombing of cities quite freely. Did they go wrong? If so, where?

Orderings that put one’s own life ahead of the lives of others make us uneasy because there’s a certain social contract of mutual aid. For anyone to openly say “No, I wouldn’t help you!” would cut them off from other people; but conversely, many people do ultimately protect themselves in a clinch, and we rarely think ill of them for it. (That is, we only consider people to have an actual duty to protect others under certain circumstances.) Our unease over this ordering will, I suspect, prove critical to understanding computer ethics in the future.

And finally, orderings which put one’s own life ahead of greater goals are seen as somehow “cowardly,” as being unwilling to take a stand — and yet, like the previous group, these orderings are probably far more common than we’ll admit. Aside from the lives of fairly specific others, how many people have a goal that they would unhesitatingly give their life for? Some of us, certainly — but not all of us, and it’s probably best for the safety and sanity of the world that this is the case.

So I hope that I’ve illustrated that Munroe’s cartoon demonstrates much more than a few possible Killbot Hellscapes: it’s really a map of different aspects of our own moral sense. Each of us individually, as well as all of us as a society, move between these poles on a regular basis, and it’s worth being aware of ourselves when we do.

¹ See also the old joke about the Barbarian Pentathlon: “Alright, men. Today, we’re going to rape, murder, loot, pillage, and burn, in that order. I don’t want any more mistakes like last time.” “Sorry, boss.”
