Killing the Runaway Trolley Problem

Jim Burrows
Personified Systems
12 min read · Sep 20, 2016
Image based on works by Sirrob01 [CC0] & Universal Pictures [Public domain], via Wikimedia Commons

The nearly half-century-old “Trolley Problem” is a runaway success, one that itself needs to be brought to a halt.

The advent of “smart” and autonomous systems is becoming so important a feature of contemporary life that issues of “Machine Ethics” have a rapidly growing hold on our attention. Along with Isaac Asimov’s fictional Three Laws of Robotics, the “Trolley Problem” as applied to “self-driving” cars is getting the lion’s share of that attention. Sadly, while the questions that the problem and its many variants raise are quite compelling, they are, I would argue, also the wrong questions, perhaps even dangerously so.

Origins

To back up a bit, the original Trolley Problem was one of several ethical dilemmas that Philippa Foot put forth in a 1967 essay in the Oxford Review and republished, slightly edited, in the 1977 collection, “Virtues and Vices and Other Essays in Moral Philosophy”. Her purpose was to illustrate that there is more to real-world ethical judgements than simple quantitative utilitarian trade-offs. Here, slightly abridged, is how she framed it:

Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another … the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both the exchange is supposed to be one man’s life for the lives of five. The question is why we should say, without hesitation, that the driver should steer for the less occupied track, while most of us would be appalled at the idea that the innocent man could be framed.

If there is no more to normative ethics than a quantitative utilitarian examination of the consequences of our decisions, then trading one life for five would seem to be an equally easy choice in both cases. Professor Foot, a proponent of Virtue Ethics, was making the point that an unjust trade-off is not acceptable: a just magistrate would not frame an innocent man. Many who have come after her have recast the facts in various ways in order to shed light on other aspects of normative ethics. There are variations designed to illustrate issues such as the distinction between acting and not acting, killing by intent versus as a side effect, the moral or social standing of the victims, and so on. Some of them are diagrammed here:

Variants on “The Trolley Problem” from Wikipedia.

The Moral Machine project

Most recently, the problem has served as a template for an MIT project, the “Moral Machine”. In it, members of the general Web public are asked to make a baker’s dozen choices on behalf of a self-driving car with failed brakes.

The page describes itself this way:

Welcome to the Moral Machine! A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.

The individual scenarios look like this:

Typical ethical dilemma from the “Moral Machine”.

There are, it seems to me, a number of problems with this project, at least as it is presented on its web page. It asks the wrong ethical questions and the wrong technological questions, and it asks them in the wrong way. Finally, and probably worst of all, it gives the public a very skewed and quite probably harmful set of impressions.

Ethically, it focuses on the question of whom autonomous cars should kill, which is very far from the first ethical question we need to address. Moreover, it does so entirely from a Consequentialist perspective, and a rather narrow version of that perspective: a quantified human-life utilitarianism. The roles of virtue, of deontological principles, and of hybrid normative systems such as prima facie duties are given extremely short shrift.

Technically, it ignores all of the alternatives to hitting people or barriers, even something as simple as sounding the horn, along with the differences between head-on and oblique collisions, the difference in risk to passengers versus pedestrians, and so on. On the other hand, it presupposes that an autonomous system could ever identify a homeless man, a criminal, or an executive. While it is true that it presents a highly simplified scenario, the choice of which details to omit and which to assume matters.

And as an experiment or a survey, the design precludes significant results. The actors in the scenarios differ along at least nine dimensions, yet each person surveyed judges only 13 pairs. Since no demographic data is collected, the subjects cannot be readily grouped, and different populations cannot be compared in order to determine which factors were salient in a given decision.
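To put the mismatch in rough numbers, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that each of the nine dimensions is a simple binary attribute and ignores group sizes and positions; the exact figures don’t matter, only the orders of magnitude.

```python
# Rough, illustrative arithmetic only: treat each of the nine dimensions as a
# binary attribute and ignore group sizes and positions.
from math import comb

dimensions = 9                    # attributes along which the actors differ
profiles = 2 ** dimensions        # distinct character profiles under that assumption
pairings = comb(profiles, 2)      # distinct two-sided dilemmas that could be posed
responses_per_subject = 13        # judgments each visitor actually makes

print(f"{profiles} profiles, {pairings} possible pairings")
print(f"each subject covers {responses_per_subject / pairings:.4%} of the design space")
```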

All these faults aside, what is worst about this project is that, by casting the ethics of self-driving cars and machine ethics in general in this way, under the prestigious MIT banner, it tells the public that this is what machine ethics is about.

The fault in this, however, does not lie with MIT. Rather, it is to be found in our fascination with conundrums, puzzles, and edge cases. The Moral Machine project is merely one in a long list of variations on the Trolley Problem, each trying to tease out some added nuance, and each generally more contrived than the last. Like the Turing Test, which was designed as a way to make an inference about the interior state of a purported machine intelligence but has become a test of the mere ability to deceive, or the Three Laws of Robotics, originally chosen to provide flaws and contradictions from which plots could be derived, the Trolley Problem has taken on a life of its own and become a thing in itself.

If not this, what?

So, if the Trolley Problem asks the wrong questions, what should we be asking? What are the important ethical issues in the area of driverless cars specifically, and machine ethics in general? I have a few answers to those questions.

The most general answer is that as machines come to act more and more as people do, it will be important that they act as ethical people do. From the perspective of virtue ethics, we should expect them to be loyal, candid, discreet, and so forth. Systems operating in the medical arena should behave according to the four principles of biomedical ethics (originally defined by Beauchamp and Childress in their textbook, Principles of Biomedical Ethics): respect for autonomy, nonmaleficence, beneficence, and justice. How do we build machines that behave in these ways, and which virtues or principles should they adopt?
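To make the question concrete, here is a minimal sketch, entirely my own invention rather than any deployed system, of what “behaving according to principles” might even mean computationally: candidate actions scored against weighted prima facie duties drawn from the four biomedical principles. The actions, weights, and effect estimates are all assumptions for illustration.

```python
# Toy sketch (my own illustration, not any researcher's or vendor's system):
# score candidate actions against weighted prima facie duties.
DUTIES = ("autonomy", "nonmaleficence", "beneficence", "justice")
WEIGHTS = {"autonomy": 1.0, "nonmaleficence": 2.0, "beneficence": 1.0, "justice": 1.0}

def score(action):
    """Weighted sum of each duty's degree of satisfaction (+1) or violation (-1)."""
    return sum(WEIGHTS[d] * action["effects"].get(d, 0.0) for d in DUTIES)

candidates = [
    {"name": "remind the patient to take the medication now",
     "effects": {"beneficence": 0.8, "autonomy": -0.3}},
    {"name": "defer to the patient's wish to skip this dose",
     "effects": {"autonomy": 0.9, "nonmaleficence": -0.4}},
]
best = max(candidates, key=score)
print(best["name"], round(score(best), 2))
```

The interesting part is not the code; it is choosing the duties and the weights. That is where the ethics lives.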

Coming back to self-driving cars, the first issues should be similar to those for human drivers: Who (or what) should be allowed to drive? Under what circumstances? A human who wants to drive has to pass multiple tests, both of technical competency and of knowledge. We set a minimum age for human drivers because we assume that below a certain age they lack the judgement required to drive responsibly. Drivers who have not yet proven themselves are allowed to drive only under the supervision of a skilled driver, and in most states there are more stringent requirements for being that supervisor than for merely driving: the supervisor typically must be at least 21 years old and may need a certain number of years of driving experience. What are the corresponding criteria for autonomous vehicles?

This year, Tesla has given us several examples of the sort of ethical questions we should be asking. Let’s look at a few of them.

How law-abiding should they be?

The Tesla involved in a fatal accident in May was traveling 9 MPH above the speed limit (74 in a 65 MPH zone). While speeding was almost certainly not responsible for the accident, there is still the broader question: should an autonomous car speed at all? If it knows the law, should it follow it automatically, or should it drive at the prevailing speed, even when that speed is above the limit? Driving more than 5 MPH below the prevailing speed can raise the risk markedly. This isn’t just an issue for driverless cars but for human drivers as well, and one that is attracting growing attention.
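The choice can be stated as a tiny policy function. This is a hypothetical sketch of the dilemma, not Tesla’s or anyone else’s actual logic; the 5 MPH shortfall figure comes from the paragraph above, and treating 74 MPH as the prevailing speed is my own assumption for the example.

```python
def choose_target_speed(posted_limit_mph, prevailing_mph,
                        strict_legal=False, max_shortfall_mph=5.0):
    """Pick a cruising speed under one of two simple, hypothetical policies."""
    if strict_legal:
        # Policy A: obey the posted limit, even if traffic is flowing faster.
        return float(posted_limit_mph)
    # Policy B: keep up with prevailing traffic, on the theory that lagging it
    # by more than max_shortfall_mph raises risk, even if that means speeding.
    return max(float(prevailing_mph), posted_limit_mph - max_shortfall_mph)

# Using the article's numbers: a 65 MPH zone, with 74 MPH taken (hypothetically)
# as the prevailing speed of the surrounding traffic.
print(choose_target_speed(65, 74, strict_legal=True))   # 65.0
print(choose_target_speed(65, 74, strict_legal=False))  # 74.0
```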

Where should they be allowed?

The road where the accident occurred last May is a four-lane divided highway with a 65 MPH speed limit, but it is not limited access. When the Tesla hit it, the tractor-trailer rig was completely blocking all of the eastbound lanes, something that could not normally happen on a limited-access highway. Here’s a quick sketch of the site of the accident.

Site of the May 2016 fatal Tesla accident with mocked-up vehicles

An obvious question is, “Is a Tesla controlled by the version 7 Autopilot competent to be allowed to drive at 65 (or 74) MPH on this road?” Cast ethically: should a car that is only capable of braking, staying within its lane, and executing lane changes be trusted with the responsibility of navigating this situation?

Please note that cars, trucks, and even pedestrians can cross its path from the side streets to the north and the south, from the westbound lanes (as the truck did), or from the gas station on the corner. This is a far more dangerous environment than the limited-access highways we usually associate with Tesla’s Autopilot.

On the other hand, Google’s self-driving cars are being tested and trained on much smaller neighborhood streets, traveling at 25 MPH or less. Thanks to the lower speeds, that is probably a much safer environment than the accident site. If we did want to establish ethical or legal rules governing where self-driving cars are permitted, how should we determine what those rules are? Should we identify several classes of roads and then give the various models driver’s exams to qualify for each, along the lines of the sketch below? How do we deal with the constant improvement in each model’s capabilities?
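One way to picture such a scheme is as a capability checklist per road class: a model “passes the exam” for a class only if it demonstrates every required capability. A minimal sketch follows; the class names, capability names, and example models are all hypothetical.

```python
# Hypothetical licensing sketch: road classes gated by required capabilities.
ROAD_CLASS_REQUIREMENTS = {
    "limited_access_highway": {"lane_keeping", "lane_change", "emergency_braking"},
    "divided_highway_with_cross_traffic": {"lane_keeping", "emergency_braking",
                                           "cross_traffic_handling"},
    "residential_street": {"emergency_braking", "pedestrian_detection",
                           "low_speed_maneuvering"},
}

# Assumed capability sets for two made-up models, roughly mirroring the contrast
# between a highway driver-assist system and a low-speed neighborhood car.
CERTIFIED_CAPABILITIES = {
    "hypothetical_highway_assist_v7": {"lane_keeping", "lane_change", "emergency_braking"},
    "hypothetical_neighborhood_car": {"emergency_braking", "pedestrian_detection",
                                      "low_speed_maneuvering"},
}

def may_drive(model, road_class):
    """A model qualifies for a road class only if it holds every required capability."""
    return ROAD_CLASS_REQUIREMENTS[road_class] <= CERTIFIED_CAPABILITIES.get(model, set())

print(may_drive("hypothetical_highway_assist_v7", "divided_highway_with_cross_traffic"))  # False
print(may_drive("hypothetical_neighborhood_car", "residential_street"))                   # True
```

Under a scheme like this, constant improvement becomes a re-certification question: when a model gains a capability, it retakes the relevant exams.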

Qualified supervising drivers

Current autonomous driving systems are very far from fully competent to drive in all circumstances. Human drivers who are not mature, experienced, or skilled enough to be fully trusted must be supervised, and the Tesla is no different. Accordingly, Tesla monitors how much attention the supervising driver is paying at two levels. Initially, visible warnings appear on the dashboard, reminding the driver to keep their hands on the wheel. If the driver ignores a few of those, the car issues an audible warning.

“Car will not allow reengagement of Autosteer until parked if user ignores repeated warnings.” — Tesla 8.0 release notes

In the most recently announced software upgrade, Tesla has added a new policy: if the driver has had to be audibly warned three times, Autopilot will disable itself and will not re-engage until the car has been stopped and put in park. This is an excellent example of an ethics-based decision: the car tests whether it is being supervised and refuses to drive unsupervised. There are, of course, any number of related questions regarding the qualifications and attention of the supervising driver. Tesla seems to be well aware of these. Elon Musk, for instance, has been quoted as saying, “It’s not the neophytes, it’s the experts [who get into trouble]. They get very comfortable with it and they repeatedly ignore the warnings.”
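The policy described above amounts to a small escalation state machine. Here is a minimal sketch of that idea: the “three audible warnings” figure comes from the article, while the number of visual reminders before an audible warning is my own placeholder, and none of this is Tesla’s actual implementation.

```python
# Minimal sketch of an escalating supervision policy: dashboard reminders,
# then audible warnings, then a lockout until the car is parked.
class SupervisionMonitor:
    VISUAL_BEFORE_AUDIBLE = 3   # assumed; the article only says "a few"
    AUDIBLE_BEFORE_LOCKOUT = 3  # from the 8.0 policy described above

    def __init__(self):
        self.visual = 0
        self.audible = 0
        self.locked_out = False

    def inattention_detected(self):
        """Called whenever the car decides the supervising driver isn't paying attention."""
        if self.locked_out:
            return "Autosteer unavailable until the car is parked"
        if self.visual < self.VISUAL_BEFORE_AUDIBLE:
            self.visual += 1
            return "show dashboard reminder"
        self.audible += 1
        if self.audible >= self.AUDIBLE_BEFORE_LOCKOUT:
            self.locked_out = True
            return "disable Autosteer until parked"
        return "sound audible warning"

    def parked(self):
        """Parking resets the lockout, per the quoted release note."""
        self.visual = self.audible = 0
        self.locked_out = False
```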

More broadly, there is the question of whether cars that need supervision should be allowed to drive at all. Both Ford and Google have taken the stand that only cars that are fully capable of self-driving without supervision should be sold to the public. Elon Musk has taken a different approach. In a recent press conference he is reported as saying, “I think it’s quite unequivocal that Autopilot improves safety”, and that therefore: “I think it would be morally wrong to withhold functionality that improves safety if it’s just to avoid criticism or to not be involved in lawsuits.” This is precisely the sort of ethical question that we should be discussing.

Given that human drivers and artificial systems alike are imperfect, and that each will cause accidents, the question of trade-offs does come up. But it is not the “who should the car hit?” trade-off of the Trolley Problem; it is a matter of probability, human autonomy, and responsibility. By taking over the tasks that it can do more safely than the human driver and by monitoring the driver’s attention (not only while they are supervising but perhaps even in manual mode), the autonomous car can become a responsible collaborator in a human/machine partnership.

Where do we drive?

There is another ethical issue raised not only by autonomous cars such as Tesla’s but by all cars and drivers equipped with automated navigation systems. It has come up especially with regard to the Waze application. Waze is particularly good at helping its users avoid traffic jams and find shortcuts. As its use spreads, though, some quiet residential neighborhoods have seen major increases in traffic. This not only adds noise but also increases the risk to children playing in the neighborhood, especially if the new through traffic is both heavier and faster.

Such neighborhoods can be especially difficult for autonomous systems to understand: the number of places from which people and cars enter the street is much higher, there are far more children, pets, and other small obstacles, and so on. The ethics of diverting commuter and other long-distance traffic through more secluded and less-trafficked neighborhoods is going to be quite complex, and it is far more immediate than the questions raised by the Trolley Problem.
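One can imagine addressing this at the routing layer. The toy cost function below, which is my own illustration and not Waze’s or anyone’s actual algorithm, simply charges a penalty, in minutes, for every residential segment a route uses, so a slightly faster cut-through can lose to an arterial route.

```python
# Toy route-cost sketch: travel time plus a per-segment penalty for
# residential streets. The numbers and the penalty are illustrative assumptions.
def route_cost(segments, residential_penalty_min=2.0):
    """Total cost in minutes; each segment is (minutes, is_residential)."""
    return sum(minutes + (residential_penalty_min if is_residential else 0.0)
               for minutes, is_residential in segments)

arterial_route = [(4.0, False), (6.0, False), (3.0, False)]   # 13 minutes, no cut-through
shortcut_route = [(2.0, False), (5.0, True), (3.0, True)]     # 10 minutes, two residential legs

print(route_cost(shortcut_route, 0.0), route_cost(arterial_route, 0.0))  # 10.0 13.0 -> shortcut wins
print(route_cost(shortcut_route), route_cost(arterial_route))            # 14.0 13.0 -> arterial wins
```

Choosing that penalty, and deciding who gets to choose it, is exactly the kind of ethical question that is more immediate than the Trolley Problem.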

As we can see, the advent of autonomous vehicles, AI, and systems that interact with us in increasingly human ways brings with it a rich variety of ethical concerns. As these systems control more and more aspects of our lives and are trusted with more potentially harmful tasks, it is becoming crucial that we discuss these issues and answer the hard ethical questions about what constitutes acceptable behavior on their part. The Trolley Problem threatens to distract us from these important endeavors.

Summary

In summary, there are pressing questions about the ethics of autonomous systems and how they integrate into society. Cast in the language of virtue ethics, which concerns the character of the actor, these questions amount to, “What sort of participants in our society do we want machines to be?” How can they best collaborate with us? Best serve us? Should we control them strictly through laws, or is there an equivalent of professional ethics that applies to them, either generally or in specific contexts? We’ve seen a number of these questions illustrated here. None of them is “Which person should the robot kill?”

Recommendations

There are a number of sources that I can recommend for people who are interested in this topic. The first does not specifically concern itself with Machine Ethics but rather with the mechanisms of trust within human society as a whole: Bruce Schneier’s Liars and Outliers. While Bruce is a technologist and security expert, his approach in this book focuses far more on the psychological, sociological, and political mechanisms of trust. Reading it while keeping in mind how automated systems are acting more and more like persons, and being integrated into society, helps us understand both society and Machine Ethics better.

Another excellent source is Robot Ethics, by Patrick Lin, Keith Abney, and George A. Bekey, published by the MIT Press. It takes a broad view of the entire topic of Machine Ethics, beginning by pointing out that there are three possible meanings of the term “robot ethics”: the professional ethics of roboticists, the moral code programmed into the robots themselves, and, eventually, robots’ own self-conscious ability to reason ethically. AI has advanced somewhat since the book was put together, but Machine Ethics has not advanced as much.

Individual researchers working in the practical field of Machine Ethics include Michael and Susan Anderson (cf. Ensuring Ethical Behavior from Autonomous Systems), Alan Winfield, and Selmer Bringsjord. I cover each of them, along with other work in the field, in my informal sabbatical report, Personified Systems, from which this publication takes its name.
