Stop teaching self-driving cars to kill

MIT is currently running a site that lets visitors judge traffic situations in a variation on the trolley problem, making a succession of choices between two scenarios. MIT describes it as “a platform for public participation in and discussion of the human perspective on machine-made moral decisions”. You can find it at http://moralmachine.mit.edu.

Whatever this site is doing, I do not believe it helps teach morals to self-driving cars, nor do I believe it measures the human perspective, although, to be fair, it does lead to discussion, which may be the real purpose.

The test contains 13 sets of choices, and the first one already suffices to illustrate what I believe is problematic with this setup.

There are five (main) reasons to reject this method:

  1. Participating in traffic is not remotely like making a succession of A/B choices. Deconstructing it into such choices, in the name of clarity, obscures the truth in this case.
  2. People are notoriously bad at predicting their own behaviour, so they can do no better than ask themselves what the driver (human or artificial) should do. That tends to produce socially acceptable answers, which in turn have little bearing on what actually happens.
  3. The description suggests that the driver can judge outcomes on two levels: first, that a given choice will inevitably and always produce a given outcome; second, that the driver can and may establish that the person on the zebra crossing is a “female doctor”, a “criminal”, or “fit”. It is safe to say that neither a human driver nor an artificial one will ever have eyesight that acute.
  4. The choice between an act of omission (doing nothing) and an act of commission (attempting something) is not symmetric, morally or legally.
  5. The highest moral standard is the human one. Looking for an algorithm that improves on it is dangerous and arrogant.

I did the test; my results are at http://moralmachine.mit.edu/results/-393411426. Conclusions have been drawn about whose lives I consider more important, be it the elderly, children, men or women; there is even a choice between fit and large people. I consider these conclusions, and this approach, hopelessly misguided.

I did the test using a simple heuristic: at all times avoid killing the people in front of you, and have faith that you will not wreck the car. No conclusions about victim preferences can be drawn from my answers, because I deliberately did not look at who the victims were. In traffic situations we do not judge on “killing preferences”. It is fair to admit, though, that the test confronted me with my own morals, which tell me that in traffic it is not up to me to judge the relative value of a human life (maybe that is the real purpose of the site, social psychology being the difficult subject it is).
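To make that heuristic concrete, here is a minimal sketch of it as a decision rule. It is purely illustrative and assumes nothing about the Moral Machine itself; the scenario fields and names are my own invention. The point it makes is structural: the rule never has access to victim attributes, so it cannot express a “killing preference”.

```python
# Illustrative sketch of the heuristic described above: never weigh who the
# potential victims are; simply avoid the people directly ahead and trust
# the vehicle to protect its occupants. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Option:
    """One of the two actions offered in a scenario."""
    name: str             # e.g. "stay in lane" or "swerve"
    pedestrians_hit: int  # people the car would drive into
    # Deliberately no fields for age, gender, fitness or profession:
    # the heuristic has no access to victim attributes at all.


def choose(option_a: Option, option_b: Option) -> Option:
    """Pick the option that hits no pedestrians, or the fewer of the two.

    Passenger risk is intentionally not modelled: the working assumption is
    that seat belts and the vehicle's construction give occupants a far
    better chance than a pedestrian struck by the car.
    """
    return min((option_a, option_b), key=lambda o: o.pedestrians_hit)


if __name__ == "__main__":
    stay = Option("stay in lane", pedestrians_hit=3)
    swerve = Option("swerve into barrier", pedestrians_hit=0)
    print(choose(stay, swerve).name)  # -> "swerve into barrier"
```

Because the rule only counts people in the car's path, any “conclusions” the site draws about whom such a driver prefers to spare are artefacts of the scenarios, not of the decision rule.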

Intent is fundamental because outcomes are unknown

As people, we are taught moral intent precisely because outcomes are unknown. Scenarios built on the premise that you can know for certain that a given choice will inevitably and always produce a given outcome are fundamentally flawed, especially in traffic. Not swerving, when swerving is your only remaining means of influencing the (most likely) outcome (brake failure, remember?), comes closer to mens rea than swerving and crashing, the effect of which is far less certain than mowing down a pedestrian right in front of you.

A human driver, like an artificial one, cannot judge morality by outcome, because outcome data (“the future”) are not available for consideration, and the social standing and/or fitness of the victim is unknown (nor should it be a consideration). In traffic we will have to content ourselves with not hitting people with the car and leave it at that. Would we really want to teach mens rea to a machine, by teaching it that it may actively choose which people to kill?

Passengers can sign a waiver, if need be: “I am aware that the self-driving car will at all times try to avoid hitting people on the road. In the unlikely event that this causes the vehicle to crash, my seat belt and the vehicle’s construction will in most, but not all, cases protect me from harm.” A bit like one of Asimov’s laws of robotics.

I often worry that “Silicon Valley” thinkers, living in the Land of Opportunity, suffer from a kind of god delusion: the belief that perfect input is actually possible and will somehow produce perfect output. The MIT site also seems to hint at an attempt to supplant the human standard for moral behaviour with a technical one, with a little light crowd-sourcing for a hint of credibility. That is a dangerous path, best avoided.