Death Algorithms of Self-Driving Cars Revealed

What Silicon Valley Isn’t Ready to Tell You

Dan Ralls
Nov 12, 2015 · 4 min read

Every few months, something happens in the auto world that sets everyone atwitter about self-driving cars. The most recent example is Tesla's controversial beta Autopilot, which brings us one step closer to the future.

Along with the promise that autonomous cars usher in, they track a much grimmer issue through the door with them: what happens when your car has to decide between your life and the lives of others in a crash? I'm talking about the so-called death algorithms that self-driving cars will have programmed into them.

Despite the glossy futuristic veneer, this isn't a new issue. In fact, it's come up numerous times recently via the trolley problem, a modern thought experiment in ethics (relevant essays here and here, the latter of which was nicely written by Tanay Jaipuria here on Medium). Here's the problem: a runaway trolley is about to run over five people on the tracks, which would most assuredly kill them. You are standing at a switch that, if pulled, would divert the trolley to a track where there's only one person, most assuredly killing that person. Do you pull the switch and kill that one person to save the other five? Why are all of these people all over all of the tracks? Where is OSHA and/or security during all of this?

The trolley problem is a case study in ethics. Is actively having a hand in one person's death to save five people a greater moral good? Do we need to weigh the value of each of these lives before a decision is made? These are the kinds of questions that software engineers need to consider with self-driving cars. If two children suddenly run into the street, is it the ethically right thing for your self-driving car to crash into a wall, killing you but saving the children? In the future, maybe the flash before your eyes when you die is simply your car scanning your brain to determine how good of a person you are.
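To make the dilemma concrete, here's a deliberately toy sketch of what a crude, purely utilitarian "death algorithm" might look like. Everything in it is hypothetical, including the function name and the body-count-minimizing rule; no real autonomous-vehicle software (publicly, anyway) works like this:

```python
# Toy sketch of a utilitarian crash-decision rule.
# Hypothetical throughout: real self-driving stacks do not
# expose anything resembling this interface.

def choose_action(actions):
    """Pick the action with the fewest expected deaths.

    `actions` maps an action name to the number of people
    expected to die if the car takes that action.
    """
    return min(actions, key=actions.get)

# The trolley problem, restated for a car:
crash_options = {
    "swerve_into_wall": 1,   # kills the passenger
    "stay_on_course": 5,     # kills the pedestrians
}

print(choose_action(crash_options))  # → swerve_into_wall
```

Of course, the hard part isn't the `min()` call; it's deciding whether minimizing deaths is even the right objective, and who gets to encode that answer.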

It turns out that, in a display of Silicon Valley brashness and disruption, this is not a chilling elephant in the room for the tech field. Rather, it's an opportunity: an opportunity to pivot and not only breathe life into the driverless car's death algorithm at a granular level, but to do it in a way that is fully aligned with each brand's meticulously sculpted public image, reputation, and ideals.

And thanks to Wikileaks, here are the death algorithms of self-driving cars revealed, based on the companies known or rumored to be developing self-driving vehicles:


Uber

  • Prime Directive: Save the customer at all costs to provide an exceptional death-avoidance experience resulting in no less than a five-star rating, even if this means killing as many humans as possible in violation of all existing regulations, business practices, and known human ethics.
  • Final Program Communication: "In order to not die in the next 2 seconds, press 'Accept' on your Uber app to consent to Death Avoidance Surge Pricing (522,234.75x)."


Lyft

  • Prime Directive: Hey, bro, we're just gonna do something here that shares the pain equally among everyone involved.
  • Final Program Communication: "In hindsight, this pink mustache thing was an absolutely terrible idea." *Sad Fist Bump*


Sidecar

  • For Sidecar, this is less about ethics and more about metaphysics: Do we exist? Who are we?
  • Final Program Communication: Shrugs and swiftly dissolves into a haze of industry irrelevance and side view mirror bras.


Tesla

  • Prime Directive: Immediately prior to impact, Elon Musk swoops in wearing an Iron Man suit, saves everyone involved, drops them off at the Tesla Accident Recovery center, and then disappears over the horizon, riding a SolarCity-provided sunbeam into space.
  • Final Program Communication: "J.A.R.V.I.S., get these people vouchers for a nice relaxing spa day, a tour of the factory floor, and Qdoba Mexican Grill."


Google

  • Prime Directive: Don’t be evil! Wait, is that still a thing? No? Now it’s Do the right thing? Like the movie? What? Does this apply to Alphabet? Do I work for Alphabet or Goo-.
  • Final Program Communication: “I’m Feeling Lucky™”


Apple

  • Prime Directive: Preserve the lives of passengers if within the warranty period. Regardless, immediately deploy a well-produced short documentary demonstrating the intuitive, revolutionary, and seamless death experience that passengers can expect. Ensure the passenger dies to the soothing sound of Jony Ive crooning about milled blocks of aluminium.
  • Final Program Communication: “This crash isn’t squarely covered under your AppleCare Plus policy, BUT as a one-time exception, Apple is happy to preserve your life for an incident fee of $790,000.”


Taxis

  • Prime Directive: [This is the exception. No death algorithm — no matter how maliciously written — could possibly be more unpleasant than the experience of riding in a cab, particularly in San Francisco. Like cigarette smoke? Broken credit card machines? Having your destination rejected? Having nobody show up after a call? Cabs are for you.]
  • Last Line of Thought: No, I was NOT nodding off- *squish*

I occasionally write things here on Medium. You can listen to my sometimes dark but generally funny and useful Ask a Lawyer podcast, Unwonk, here. You are also welcome to read my Ask a Lawyer column in Deadspin here. Follow me on Twitter.
