Self-Driving Cars Will Never Be 100% Safe, Because We Don’t Understand Math
You may be familiar with the claim that a piece of paper could reach the moon if it were folded 42 times. If you’re not so familiar, here’s a poorly-animated video to explain it — “it” being an analogy for exponential growth.
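The claim survives the arithmetic. Here's a minimal sketch, assuming a sheet about 0.1 mm thick and an average Earth-moon distance of roughly 384,400 km (round figures chosen for the illustration, not taken from the video):

```python
# Sketch of the folding-paper claim: each fold doubles the stack's
# thickness, so it grows as 0.1 mm * 2^folds.
THICKNESS_M = 0.0001            # 0.1 mm, in meters (assumed paper thickness)
MOON_DISTANCE_M = 384_400_000   # average Earth-moon distance, in meters

folds = 0
thickness = THICKNESS_M
while thickness < MOON_DISTANCE_M:
    thickness *= 2  # one fold doubles the stack
    folds += 1

print(folds)  # 42
```

Forty-one folds leave you short of the moon; the forty-second overshoots it. That's exponential growth: the last doubling covers more distance than all the previous ones combined.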
Having analyzed the steaming pile of comments on this video, I can tell you there are four types of people watching it:
- People who do not understand the concept, and instead lean on their life experience to doubt the concept’s validity. This is the largest group.
- People who do understand the concept, but don’t understand enough math to convincingly answer the first group’s barrage of ignorant doubts. This is the second-largest group, and I’m in it.
- People who understand the concept and enough of the relevant math to convincingly answer anyone’s question. These people are satisfied with the validity of the concept as a hypothetical. It’s the third-largest group.
- People who understand both the concept and the math, and lean on that experience to doubt the usefulness of the hypothetical as it overlooks complex variables that could be avoided with a less-flawed analogy. This is the smallest group.
Did you spot the two significant problems there?
First is the problem of mass ignorance causing friction. The first two groups make up a majority, yet they're arguing with each other over ideas neither of them understands. Group 2 can't answer Group 1's uninformed questions, so Group 1 maintains its (incorrect) point of view, which keeps both groups from making any progress with the information.
Second is the problem of ignorance masquerading as intelligence. Both Group 1 and Group 4 cast doubt on the claim, and both draw from their personal experience to do so. Therefore, it would be reasonable for someone in Group 1 to hear statements from Group 4 and imagine themselves to be in the latter camp, because the stance appears to be the same. This is how society ends up with idiots regurgitating arguments they’ve heard from smart people, making it difficult for the rest of us to know who’s smart and who’s an idiot stealing a smart person’s words.
The point is, most of us are the idiots. Let’s admit that up front, because while it is tempting to pretend we understand things like cars and traffic and crashes and economics, we really have no business crafting or clutching such opinions in light of a shift as revolutionary as autonomous cars.
We’ve heard a few politicians and automotive execs say that driverless cars need to be “perfect” before they’re safe enough to put people in them — that “99% isn’t good enough.” It’s likely we will hear more of such rhetoric in the near future, as special interest groups lobby against the technology.
Trouble is, “perfect” is impossible.
To understand why a “perfect” self-driving car is a preposterous demand, you have to understand what it means to be confronted with a problem whose degrees of difficulty increase exponentially… like folding paper.
The dude on the left here? He’s working against a linear increase in difficulty. You do this every day when you walk up a flight of stairs. We agree that it’s easier to walk one stair than it is ten, yes? But it’s merely more of the same problem you’ve already proven capable of solving. If you’re 99% of the way up the staircase, you’ll be able to finish by sheer will or a burst of adrenaline alone. Being 99% of the way through a linear problem means you’re almost done.
The dude on the right doesn’t have that luxury. His climb to the top becomes exponentially harder.
At 10%, he’d employ the same tactics you or I would. He’d run at that obstacle as hard as he could, no special training or mathematical equations or assisting contraptions necessary.
At 50%, Right Dude would realize his plan isn't going to work. Running up the hill solves only the first half of the problem, and is clearly a fruitless method for the remaining 50%. A few things become obvious now:
- He didn’t properly think the problem through
- He can’t rely on traditional solutions and may need more resources
- He already wasted time and resources on the original method
Now pay attention y’all: let’s say we’ve reached 99%. Right Dude and a team of 30 engineers received $2 million in funding to work on the problem of getting a human body into a near-upside-down state on this hill. Don’t ask how they did it… the thing took months to solve and it involves a flux capacitor implant which makes the dude’s blood pressure unstable when he sleeps. Also, they replaced his fingernails with adamantium. But hey, we did it. 99%! We’re almost done!
No, we are not almost done — put away the champagne. That’s not how exponential difficulty works.
99% here means, “I have solved 99% of the known scenarios in this problem.” That remaining 1%? It could be fifty times harder to solve than the problem of getting to 99%. It could be ten thousand times harder. It might take humanity hundreds of years before anyone can even begin to imagine a practical solution. 99% only tells you what you’ve achieved so far. It doesn’t in any way promise that you’re almost finished.
In the case of autonomous vehicles, we’re talking about solving accidents, navigational missteps, malfunctions… collectively, an infinite number of scenarios. What’s 99% of infinity? Engineers in these positions improve their achievement rates from 99% to 99.9%, to 99.99%, to 99.999%… there is no 100%. There is always something unsolved, and often, you won’t know it’s a problem until you’ve solved other problems. That’s how you end up talking about 99% of something that has no denominator.
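One way to make that concrete: each added "nine" of reliability cuts the unsolved share tenfold, yet the share never reaches zero. A toy sketch, where the mileage figure is a rough US annual total and everything else is illustrative, not real safety data:

```python
# Toy illustration of "nines" of reliability: every extra nine shrinks
# the unsolved share of scenarios by 10x, but never to zero.
# Illustrative numbers only, not real safety statistics.
MILES_PER_YEAR = 3_000_000_000_000  # roughly 3 trillion US vehicle-miles

for nines in range(2, 7):
    unsolved = 10 ** -nines  # 1% at two nines, 0.1% at three, and so on
    print(f"{1 - unsolved:.4%} solved, "
          f"about {unsolved * MILES_PER_YEAR:,.0f} miles/year still exposed")
```

Even at five nines, millions of driven miles a year fall into scenarios nobody has solved yet, and every additional nine must be bought with exponentially more effort.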
And guess whose job it is to give up and say, “ok, 99.999% is good enough”?
You guys! The voters. The taxpayers. The consumers. The members of society who are supposed to be working towards a better tomorrow.
Every decimal place we move through exponential effort means exponential investment in turn: if an autonomous vehicle has trouble with the problem of a two-foot-tall kid darting out from behind a parked car at night with tenths of a second to react, should we try to solve it? The answer is maybe. Maybe, if that’s a scenario we deal with often, and if the losses we incur from failing in that scenario are worth mitigating.
But — if it barely ever happens, and if solving it means dumping billions of dollars and years of research into the problem… there just might be a better use for those resources, and more value in launching the technology despite its failings. Because of course, every day you keep autonomous cars stuck in the lab is a day we’re losing 85 lives to conventional car accidents, and spending $2 billion to crash and repair our transportation infrastructure. That’s in the U.S. alone. At some point, the diminishing returns of trying to solve edge cases becomes wasteful and, ironically, inhumane.
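The back-of-envelope cost of that delay, using the article's own figures (85 lives and $2 billion per day), looks like this:

```python
# Back-of-envelope cost of delaying deployment, using the figures
# cited above: ~85 US road deaths and ~$2 billion in crash-related
# costs per day.
DEATHS_PER_DAY = 85
DOLLARS_PER_DAY = 2_000_000_000

def cost_of_delay(days):
    """Lives and dollars lost while the technology waits in the lab."""
    return days * DEATHS_PER_DAY, days * DOLLARS_PER_DAY

lives, dollars = cost_of_delay(365)
print(lives, dollars)  # 31025 deaths and $730 billion in a single year
```

A single year of delay, at those rates, costs more lives than most natural disasters. That's the other side of the ledger when we insist on solving one more edge case first.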
What does it take to ensure a little kid doesn’t get hit by a car that has less than 0.2 seconds to react? How about 0.1 seconds, around the limit of human reaction time? At some point, we’re spending significant real-world resources, and producing real-world waste, to solve a problem that exists more in our heads than in reality. And even if it can be solved, the weight (both literal and figurative) of that solution may handicap the overall efficiency of the vehicle, as adding features and precautions typically does. Multiply that inefficiency by trillions of miles driven annually, and you’ll see your solution has created its own problem.
Virtually every accident on the road is caused by human negligence. It takes astounding hubris to let that carnage continue while we argue about solving problems that won’t exist in the future. But of course we do, because we don’t understand math, especially when it requires that we get out of our own heads.
Self-driving cars are getting very close. Close to perfect? No. Close to good enough. Don’t be that idiot pointing to the problems yet to be solved unless you understand the repercussions of your demands in a world that will never be perfect. It’s a lot less perfect right now.