Courtesy of Marc van der Chijs on Flickr

The technical origins of “death algorithms” in self-driving cars

Badg
7 min read · Jan 20, 2016

Much like traffic, public attitude towards self-driving cars can be very stop-and-go. Every few months there’s a new discussion of their ethical implications, a conversation that is profoundly important and yet one the techno-optimist crowd inevitably dismisses. These discussions (and frankly, the articles that spark them) tend to be heavy on speculation and hearsay but light on technical merit. So, while they aren’t in the news, I thought I might take a quick opportunity to clear up some misconceptions about why programming a self-driving car is a real-world instance of the trolley problem.

Now, don’t get me wrong. I have profound hope for the promise of technology. Self-driving (“autonomous”) vehicles will absolutely change the way we move through the world. And with the value of US goods shipped by truck surprisingly comparable to the domestic GDP ($10 trillion versus $16 trillion in 2012, respectively), they will also undoubtedly change the way we work in it. But blind faith in technology without examination of its implications is a recipe for disaster, and the time for that inquiry is now, before it becomes a big problem. However bullish you are on the future of autonomous transport, there is no avoiding a long period in which self-driving and human-piloted cars share the road. So, from the perspective of someone with some (but not a lot of) experience in autonomous systems, here is a very simplified explanation of the “Self-Driving Trolley Problem”.

A basic concept in programming is “looping”. Loops do exactly what they sound like they do: they execute some commands, and repeat. Being so fundamental and so versatile, loops are everywhere in code. And in real-time autonomous systems like vacuums, lawnmowers, and cars, they are by far the most popular way of “controlling” the robotic decision-making.[1] To be clear, clever systems also have the capacity to dynamically react to spontaneous “events”, but deep down this process is still facilitated by looping.
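
To make that concrete, here’s a minimal sketch of what one such fixed-rate loop looks like. Everything below is invented for illustration (real vehicle code is compiled, hard-real-time, and talks to actual hardware), but the shape is the same everywhere: sense, decide, act, wait out the remainder of the period, repeat.

```python
import time

LOOP_RATE_HZ = 100                # iterations per second (made up for illustration)
LOOP_PERIOD_S = 1.0 / LOOP_RATE_HZ

# Stand-in stubs so the sketch actually runs; a real vehicle would read real
# sensors and write to real actuators (e.g. over a CAN bus) here.
def read_sensors():
    return {"obstacle_distance_m": 50.0}

def decide(sensors):
    return {"brake": sensors["obstacle_distance_m"] < 10.0}

def send_to_actuators(command):
    pass

def control_loop(iterations=1000):
    """Bare-bones real-time loop: sense, decide, act, at a (roughly) fixed rate."""
    for _ in range(iterations):
        start = time.monotonic()
        send_to_actuators(decide(read_sensors()))
        # Sleep off whatever time remains in this iteration to hold the rate.
        time.sleep(max(0.0, LOOP_PERIOD_S - (time.monotonic() - start)))

control_loop()
```

Event handling gets bolted onto the same skeleton: each pass through the loop checks whether anything “happened” since the last pass and reacts accordingly.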

Loops can have various speeds, and do various things. For simplicity let’s say our self-driving car has three loops going on:

1. Planning loop (makes high-level decisions like "where to drive" and "stay in lane")
2. Detection loop (senses things, sees the road, etc.)
3. Actuator loop (does the actual physical driving of the car)

Now, for various reasons that fall under the umbrella of optimization, these loops run at different speeds. For clarity,[2] let’s say the respective speeds are as follows (a rough sketch of what these loops might look like in code comes right after the list):

1. 500 Hz (once every 0.002 seconds)
2. 1000 Hz (once every 0.001 seconds)
3. 250-750 Hz (once every 0.0013-0.004 seconds), depending on speed and road conditions
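
As a very rough sketch of what “three loops at three different speeds” means (the names, rates, and threading are all simplifications for this article; a production autonomy stack is not three Python threads), you could picture something like:

```python
import threading
import time

def run_at(rate_hz, step):
    """Run step() repeatedly at roughly rate_hz iterations per second."""
    period = 1.0 / rate_hz
    while True:
        start = time.monotonic()
        step()
        # Sleep off whatever time is left in this iteration to hold the rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

# Placeholder loop bodies. Real loops would share state through queues or
# lock-protected buffers rather than doing nothing.
def plan_step():    pass   # high-level decisions: route, lane keeping
def detect_step():  pass   # fuse LIDAR/camera/radar into a model of the world
def actuate_step(): pass   # issue steering, throttle, and brake commands

# Actuation pinned at 600 Hz here, i.e. somewhere in the 250-750 Hz band above.
for rate_hz, step in [(500, plan_step), (1000, detect_step), (600, actuate_step)]:
    threading.Thread(target=run_at, args=(rate_hz, step), daemon=True).start()

time.sleep(1.0)   # let the loops spin briefly before the script exits
```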

These speeds might be a little fast for an autonomous lawnmower (that’s predominantly what I have experience in), but for something like a car they seem like a reasonable fastest-case scenario, since even at 160 km/h (100 mph), the distance the car covers between successive iterations of each loop is only (quick arithmetic sketched after the list):

1. 9 cm
2. 4.5 cm
3. 18 cm to 6 cm
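
If you want to check those numbers, it’s just speed divided by loop rate:

```python
SPEED_KMH = 160
speed_ms = SPEED_KMH / 3.6        # 160 km/h is roughly 44.4 m/s

for name, rate_hz in [("planning", 500), ("detection", 1000),
                      ("actuation, slow end", 250), ("actuation, fast end", 750)]:
    cm_per_iteration = 100 * speed_ms / rate_hz
    print(f"{name}: {cm_per_iteration:.1f} cm per loop iteration")
# planning: 8.9 cm, detection: 4.4 cm, actuation: 17.8 cm down to 5.9 cm
# (rounded to 9, 4.5, and 18-6 cm in the list above)
```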

I’m not going to go into much detail, but let’s say the scenario is this:

1. On the highway to the destination, in the leftmost (inner, fast) lane of a country that drives on the right
2. The car suddenly observes two children running in front of it, 12.4 m ahead
3. Driving as usual at 120 km/h (75 mph)

Given videos like this, it’s clearly within the realm of possibility.[3] Now, for a sense of scale: during the 0.37 seconds the car has to react (very short for a human!), the computer driving it has

1. 185 decisions to make about how and where to drive
2. 370 observations to make about how its actions affect its environment
3. 222 brake, steering, horn, etc. commands to issue (simplifying by assuming a 600 Hz actuator loop rate at this car speed; the quick arithmetic is sketched after this list)
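
Again, nothing fancy: the reaction window multiplied by each loop rate, using the speeds assumed above.

```python
SPEED_KMH = 120
DISTANCE_M = 12.4

speed_ms = SPEED_KMH / 3.6                  # roughly 33.3 m/s
reaction_window_s = DISTANCE_M / speed_ms   # roughly 0.372 s, rounded to 0.37 s above

for name, rate_hz in [("planning decisions", 500),
                      ("detection observations", 1000),
                      ("actuator commands", 600)]:
    print(f"{name}: ~{round(reaction_window_s * rate_hz)}")
# planning decisions: ~186, detection observations: ~372, actuator commands: ~223
# (the 185/370/222 figures above come from using the rounded 0.37 s window)
```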

Clearly, unlike a human, the car has some time to make a high-level decision here. And this is roughly how that decision “looks” to the algorithm involved (this is called a decision tree; a toy sketch of how a programmer might encode it follows the list):[4][5][6]

1. Do nothing (continue course). 98% chance of impact with child 1, 94% chance of impact with child 2. 90% survivability of driver. 6% survivability of child 1 (provided impact occurs), 12% survivability of child 2 (provided impact occurs).
2. Swerve left into median. 12% impact 1, 7% impact 2, 31% driver survivability, 84% and 89% children survivability.
3. Emergency full braking application. 80% impact 1, 76% impact 2, 83% chance of being rear-ended by following vehicle. 82% driver survivability, 40% C1, 45% C2.
4. Swerve right into traffic. 7% impact 1, 1% impact 2, 92% chance of impact with cars in adjacent lane. 24% driver survivability, 92% C1, 98% C2.
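
To be painfully explicit about what “ranking those decisions” means in practice, here is a deliberately crude sketch. Every weight and function name below is invented; no manufacturer has published anything like this, and I’m not claiming any of them do it this way. The point is simply that the entire ethical debate ends up living inside one scoring function that somebody has to write.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_impact: tuple           # probability of hitting (child 1, child 2)
    p_child_survival: tuple   # survivability of (child 1, child 2), given impact
    p_driver_survival: float

# The four options from the scenario above, numbers copied from the list.
OPTIONS = [
    Option("do nothing",      (0.98, 0.94), (0.06, 0.12), 0.90),
    Option("swerve left",     (0.12, 0.07), (0.84, 0.89), 0.31),
    Option("emergency brake", (0.80, 0.76), (0.40, 0.45), 0.82),
    Option("swerve right",    (0.07, 0.01), (0.92, 0.98), 0.24),
]

def expected_survivors(opt):
    """One possible score: expected number of survivors, with the children and
    the driver weighted equally. Choosing THIS function is the judgement call.
    (This crude version ignores the rear-end and adjacent-lane collision risks
    listed for options 3 and 4.)"""
    children = sum((1 - p_hit) + p_hit * p_survive
                   for p_hit, p_survive in zip(opt.p_impact, opt.p_child_survival))
    return children + opt.p_driver_survival

best = max(OPTIONS, key=expected_survivors)
print(best.name)   # with these numbers and this particular weighting: "swerve left"
```

Swap “expected survivors, weighted equally” for “protect the occupant at all costs,” or for something that also counts the following and adjacent-lane vehicles, and you get a different “best” answer, with nothing in the code to flag that anything philosophical just happened.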

These are, by the definition of the program as created by the programmer(s), literally the only actions available to the car (ignoring things like errors). The problem, and the point of the moral debate, is that, sitting at her desk somewhere (or, more likely, in a conference room), someone, most likely a programmer with no training in ethics, must tell the car how to rank those four options in that scenario. That’s where we are forced to make a philosophical judgement call.

And it’s worth pointing out that you, as a human, are doing this every time you step behind the wheel. People swerve out of the way of deer, accidentally killing their passenger, and that’s the call they make. Even with clear skies and open roads, you’re still making judgments like oh-shit-someone-is-going-to-tbone-me-do-I-slam-on-the-brakes-or-the-gas. The difference is, you don’t have time to think about the decisions before the consequences are already upon you. Meanwhile, that autonomous car has literally hundreds of opportunities to do something about it — and the person who programmed it has all the time in the world to decide which one of those four options to choose.

[Footnotes:]

  1. It’s important to bear in mind these are real-time loops. Machine learning, artificial neural networks, etc., are frequently used in the development of autonomous systems, and see heavy application in, for example, object recognition routines, but are used much less frequently for direct control. Not only are these processes often prohibitively computationally expensive, they have a tendency to be non-deterministic, which makes people very nervous when the physical thing they’re controlling can quite literally kill people. In other words, they run too slowly, given the limits of computer hardware, and their behavior can be hard to predict.
  2. To be clear, these loop speeds are 100% made-up. They are, however, an estimation of where I personally might start if I were building the Google car in my garage. Faster isn’t always better; in particular if you’re talking about something like a PID loop, running too quickly can make tuning very difficult, or can even potentially make your system unstable. Also worth noting: even with modern processors, you can absolutely be hardware-limited. The max scan speed of the LIDAR unit Google is using is 15 Hz. You can do a lot of neat things to compensate between frames, but it’s a bit of a gamble. For reference, the lawnmower I worked on, which had a hardcoded max speed of 2 m/s (which is pretty zippy for a mower!), ran its wheel control loop at ~100 Hz and steering at ~10 Hz. Our LIDAR was 5–10 Hz, and we had a separate vision loop running on a second onboard laptop computer, which was struggling to keep up.
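
For the curious, here’s a minimal discrete PID step, with every gain invented, just to show where the loop period sneaks in: the derivative term divides by dt, so the faster the loop runs, the more it amplifies sample-to-sample sensor noise, which is one concrete way “just run it faster” can backfire.

```python
class PID:
    """Textbook discrete PID controller; the gains and dt below are made up."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                    # accumulates per period
        derivative = (error - self.prev_error) / self.dt    # divides by the period:
        self.prev_error = error                             # noisy at high loop rates
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Roughly the mower's ~100 Hz wheel loop; gains are placeholders, not tuned values.
wheel_speed_pid = PID(kp=1.2, ki=0.3, kd=0.05, dt=1.0 / 100)
command = wheel_speed_pid.step(setpoint=2.0, measured=1.6)   # m/s target vs measured
```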
  3. All signs indicate that self-driving cars will have far, far, far lower accident rates than human-driven ones. I am by no means arguing this will be a frequent occurrence. But an absolute rule in scalable systems deployment (so, anything with lots of people using the system) is that anything that can go wrong eventually will. It’s a pure numbers game: Americans drive on the order of 3 trillion vehicle-miles per year, or roughly 8.5 billion miles per day, so even if a scenario like this happens literally once in a million miles, it’ll still happen around 8,500 times per day in the US.
  4. Decision trees are pretty basic constructs and definitely not universally applied to autonomous systems. Smarter programming involves looking at all possible actuator actions (the “possibility space”) and maximizing some “fitness” equation for the “best” possible outcome. This gets very complicated very quickly, and isn’t appropriate for a short-ish article on the ethics of autonomous programming.
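
A toy version of that idea, with everything invented: discretize the actuator space, predict an outcome for each candidate command with some model, and keep whichever command scores best. The ethics haven’t gone anywhere; they’ve just moved into the weights inside fitness().

```python
import itertools

# Hypothetical discretization of the "possibility space": steering angle in
# degrees and brake effort as a fraction of maximum.
STEERING_DEG = [-30, -15, 0, 15, 30]
BRAKE_EFFORT = [0.0, 0.5, 1.0]

def predict_outcome(steer, brake):
    """Stand-in for a real motion/collision model, which would roll the vehicle
    dynamics forward and check for intersections with tracked objects."""
    return {
        "p_pedestrian_impact": max(0.0, 0.9 - 0.02 * abs(steer) - 0.3 * brake),
        "p_occupant_injury": 0.01 * abs(steer) + 0.05 * brake,
    }

def fitness(outcome):
    """Higher is better. The relative weights here ARE the ethical judgement."""
    return -(10.0 * outcome["p_pedestrian_impact"] + 5.0 * outcome["p_occupant_injury"])

best_command = max(itertools.product(STEERING_DEG, BRAKE_EFFORT),
                   key=lambda cmd: fitness(predict_outcome(*cmd)))
print(best_command)   # with these made-up numbers: (-30, 1.0), hard swerve plus full brake
```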
  5. If you’re skeptical about the maturity of domain-specific object-recognition, don’t be. Google is even detecting cyclist arm gestures at this point. Technology in this arena, especially in limited scopes like roadways, is progressing extremely, mind-blowingly rapidly. There are certainly challenges here, but the tech is already fairly solid.
  6. I have absolutely no basis to speculate on whether or not self-driving cars are calculating survivability of hypothetical impacts. However, the more time they spend on the road, the more they will compile statistics on accident survivability given sensor data, and these kinds of lookup operations against pre-calculated tables are extremely fast. So while, as of January 2016, I actually suspect they aren’t doing anything this involved, I do think it’s likely to happen sometime in the future. I do not, however, expect real-time systems to be capable of dynamic predictive crash modeling. Those simulations can take days of supercomputer time; they aren’t about to show up in your Tesla.


Badg

Versatilist pursuing personal agency in a digital world. Building “programmable Dropbox” for #IoT at www.hypergolix.com w/ www.muterra.io.