Why We Should Adopt Driverless Cars That Kill People

Let’s save lives by adopting autonomous vehicles when they are far from perfect. Here’s how and when.

Lux Capital · Sep 22, 2016

By Shahin Farshchi, PhD

How good is good enough? Pharmaceuticals undergo rigorous safety and efficacy trials that take years and hundreds of millions of dollars to complete. New aircraft designs take even longer and cost more money to certify, which is why we fly on planes whose designs date back to the seventies. Gadgets undergo a battery of safety tests. Software products are handed over to armies of test engineers to validate. Despite these lengthy, expensive, and onerous processes, approved drugs are recalled due to side effects, cars are recalled, airplanes malfunction, Galaxy Note 7s catch on fire, and we have grown accustomed to the spinning wheel of death. Still, we set a very high bar for the products we consume. We expect, with high confidence, that they will function as expected and put us at zero risk of harm.

Regulators test new products until they feel certain that people will not be put at risk. Inventions from elevators to aspirin were subject to fierce scrutiny before they were broadly adopted. We can’t apply the same level of scrutiny to driverless cars: too many lives would be needlessly lost in the time it would take to statistically prove that driverless cars are truly better than human drivers. Ironically, driverless cars will save lives while still causing many accidents, some of them fatal, but fewer than those caused by human error. The robots don’t need to outrun the bear; they just need to be better than some people.

Driverless cars are special in that they learn as they encounter new circumstances, and those learnings are immediately disseminated across the entire fleet, both the cars on the road today and the ones not yet built. This is in stark contrast to humans: one person’s lessons don’t automatically spread to others, and every human is born with programming that’s millions of years old. Artificial intelligence pioneer Sebastian Thrun postulates that, when subjected to real-world conditions, AI can double its performance every 18 months, and he expects driverless cars, within a short period of time, to become capable enough to avoid accidents altogether. If pharmaceuticals, airplanes, and gadgets aren’t expected to improve over time, why should autonomous cars, which do improve over their lifetimes, be held to the same stringent standards before being released to the public?

Driverless cars promise to offer a safe, efficient alternative to human drivers. Uber was the first to launch a service, in Pittsburgh, to give the general public a flavor of the driverless experience. Marketplace’s Erika Beras recently took a ride in one of these vehicles, which by law had a human in the driver’s seat. She noticed that the driver had to manually override the vehicle five times in the first ten minutes of the trip, suggesting that the AI (artificial intelligence) piloting the vehicle still has a way to go before it is better than a human driver.

Human drivers, however, are pretty terrible. The National Highway Traffic Safety Administration (NHTSA) counted over 2.3 million injuries and more than 32,000 automotive-related deaths in the U.S. in 2014. By being only marginally better than human drivers, driverless cars would save thousands of lives. By the same token, they will still be the cause of many injuries and deaths. This poses an important question to regulators and the public they serve: should society accept the risk posed by driverless cars, knowing that it is smaller than the risk posed by human drivers?

Although roadway deaths and injuries have been on a steady decline, buckling up alongside a human driver remains remarkably dangerous; the number of deaths on public roads is in the same ballpark as the military death rate. Even if driverless cars were only slightly better than humans, they would still kill thousands of people every year. At what point should regulators allow, and the general population accept, driverless cars in the interest of saving life and limb, knowing that driverless cars may continue to cause many deaths and injuries?
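To make the trade-off concrete, here is a rough back-of-the-envelope sketch in Python (my own illustration, not from any study): how many of 2014’s roughly 32,000 U.S. road deaths would remain, and how many would be avoided, if driverless cars were safer than human drivers by various margins, assuming fatalities scale linearly with relative risk.

```python
# Illustrative only: deaths remaining vs. deaths avoided if driverless cars
# are some fraction safer than human drivers (linear scaling assumed).
human_deaths_2014 = 32_000

for improvement in (0.05, 0.10, 0.50, 0.90):   # 5%, 10%, 50%, 90% safer
    remaining = human_deaths_2014 * (1 - improvement)
    avoided = human_deaths_2014 * improvement
    print(f"{improvement:>4.0%} safer: ~{remaining:,.0f} deaths remain, ~{avoided:,.0f} avoided")
```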

If the goal is to save lives and prevent injuries, then it is obvious that driverless cars ought to be adopted as soon as they outperform their human counterparts. Safety is traditionally measured as the number of incidents per million vehicle miles traveled. With few driverless cars operating on real roads, researchers are racking up virtual miles in driving simulators and video games to train AIs and measure their performance, in hopes of covering most real-world conditions. Unfortunately, it is debatable whether these simulations truly capture all possible scenarios. Unless we find a way to quickly train algorithms and reach some consensus that they have experienced the vast majority of possible edge cases, many lives will be needlessly lost in the time it would take to reliably verify incidents per million miles traveled in controlled test settings.

A recent RAND study estimates the number of miles autonomous vehicles would have to be driven to compare them against human performance benchmarks. It concludes that even basic statistical evidence that driverless cars improve on human drivers would require hundreds of vehicles to be driven for many years.
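To get a feel for where estimates like that come from, here is a hedged back-of-the-envelope sketch in Python in the spirit of the RAND analysis (not its exact method): how many failure-free miles a fleet would need to log to show, with 95% confidence, that its fatality rate beats the human benchmark, assuming fatalities arrive as a Poisson process. The rates and fleet parameters below are my own approximations.

```python
import math

# Approximate 2014 U.S. figures: ~32,000 deaths over roughly 3 trillion
# vehicle miles traveled (order-of-magnitude assumption).
human_fatality_rate = 32_000 / 3.0e12    # deaths per vehicle mile
confidence = 0.95

# With zero observed fatalities, a one-sided Poisson bound requires
# rate * miles >= -ln(1 - confidence), i.e. roughly three events' worth of exposure.
miles_needed = -math.log(1 - confidence) / human_fatality_rate
print(f"~{miles_needed / 1e6:.0f} million failure-free miles")

# A hypothetical 100-car fleet averaging 25 mph around the clock:
fleet_miles_per_year = 100 * 25 * 24 * 365
print(f"~{miles_needed / fleet_miles_per_year:.0f} years of continuous driving")
```

Under these assumptions the answer comes out in the hundreds of millions of miles, or more than a decade of round-the-clock driving for a modest fleet, which is why the brute-force approach is so slow.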

I propose a simple approach to determining when driverless cars are as safe as their manned counterparts. As a starting point, let’s estimate the probability of a human driver making a mistake. NHTSA counted roughly 6.1 million police-reported crashes in 2014; to a first approximation, let’s assume each crash involved a single driver/vehicle combination. But when should blame be attributed to the driver? Modern vehicle engineering and tires have made it extremely unlikely for a crash to result from a vehicle defect, so, again to a first approximation, let’s assume every crash was caused by a driver making a mistake. Furthermore, a U.S. Department of Transportation survey states that Americans take 1.1 billion trips per day, which amounts to around 400 billion trips per year. If a crash is regarded as a trip that ended in a collision rather than a safe arrival at the desired destination, then the likelihood of a human driver crashing on any given trip is 6.1 million divided by 400 billion, or approximately 1 in 65,000.
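The arithmetic is simple enough to check directly; a minimal Python sketch using the approximations above:

```python
# Per-trip crash probability for human drivers, using the approximations above.
crashes_2014 = 6.1e6                     # police-reported crashes (NHTSA, 2014)
trips_per_day = 1.1e9                    # U.S. DOT travel survey
trips_per_year = trips_per_day * 365     # ~4.0e11 trips

p_crash_per_trip = crashes_2014 / trips_per_year
print(f"~1 in {1 / p_crash_per_trip:,.0f} trips ends in a crash")
```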

The question of whether driverless cars are as safe as humans is a statistical problem. Does the training data collected cover 99.9985% (all but about 1 in 65,000) of the possible circumstances the vehicle can encounter? If yes, then I’d prefer to hop into a driverless car over a human-piloted one. No longer would we need the brute-force method of driving vehicles for millions or billions of miles over many years to test their reliability. Instead, autonomous vehicle developers could verify that their AI has encountered all possible events within 4.33 standard deviations of the most likely driving conditions, which corresponds to that 1-in-65,000 likelihood of a human driver getting into an accident.

To determine whether driverless cars are indeed safer than their human-piloted counterparts, we first need to figure out the likelihood of an AI crashing. To a first approximation, let’s assume that a driverless car is perfect at navigating scenarios it has previously handled safely; it is the circumstances it has NOT encountered that could result in a crash. Although an AI is expected to “learn” from previous encounters and react safely to slightly different circumstances, let’s assume that any new circumstance the fleet of AIs hasn’t encountered will end in a crash. Assuming the distribution of all possible circumstances follows a normal (Gaussian) distribution, then to be as safe as a human driver the AI needs to have encountered 99.9985% of all possible circumstances. Mathematically speaking, the AI has to cover all scenarios out to 4.33 standard deviations from the most likely driving conditions. If semiconductor CAD tools can run statistical analyses over the distributions of many hundreds of independent variables, I expect engineers can build tools that generate this breadth of possible scenarios and subject AIs to them.
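A minimal sketch of that conversion, assuming (as above) that the uncovered scenarios sit in the two tails of a Gaussian over driving conditions; the exact cutoff shifts slightly depending on whether one counts one tail or both.

```python
from scipy.stats import norm

p_crash_per_trip = 1 / 65_000    # per-trip crash probability derived earlier

# Two-sided cutoff: how many standard deviations from the most likely
# conditions must training cover so that only ~1 in 65,000 scenarios is missed?
z = norm.isf(p_crash_per_trip / 2)
coverage = 1 - p_crash_per_trip

print(f"cutoff   ≈ {z:.2f} standard deviations")   # close to the 4.33 figure above
print(f"coverage ≈ {coverage:.4%}")                # ≈ 99.9985%
```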

I could be off by a few orders of magnitude, or perhaps even outright wrong in my assumptions. The point I am trying to make is that the biggest obstacle to driverless cars is our readiness to adopt them. I encourage our greatest minds to build the tools that will computationally generate almost all of the possible scenarios a driverless vehicle can encounter. I challenge our state governments to follow the White House’s policy guidelines and pave the way for driverless cars. The brute-force method of subjecting driverless cars to billions of miles of tests will only delay their benefits and result in the needless loss of many thousands of lives.

Shahin is a partner at Lux Capital. Based in Silicon Valley, he invests in space, robotics, AI, transportation, VR, and brain-tech companies; follow him on Twitter.
