Why We Should Adopt Driverless Cars That Kill People
Let’s save lives by adopting autonomous vehicles even while they are far from perfect. Here’s how and when.
How good is good enough? Pharmaceuticals undergo rigorous safety and efficacy trials that take years and hundreds of millions of dollars to complete. New aircraft designs take even longer and cost even more to certify, which is why we fly on planes whose designs date back to the seventies. Gadgets undergo a battery of safety tests. Software products are handed over to armies of test engineers to validate. Despite these lengthy, expensive, and onerous processes, approved drugs are recalled over side effects, cars are recalled for defects, airplanes malfunction, Galaxy Note 7s catch fire, and we have grown accustomed to the spinning wheel of death. Still, we set a very high bar for the products we consume: we expect, with high confidence, that they will function as designed and put us at zero risk of harm.
Driverless cars are special in that they learn as they encounter new circumstances, and each lesson is immediately disseminated across the entire fleet, both existing cars and those yet to be built. This is in stark contrast to humans: one person’s lessons don’t automatically spread to others, and every human is born with programming that’s millions of years old. Artificial intelligence pioneer Sebastian Thrun postulates that, when subjected to real-world conditions, AI can double its performance every 18 months, and he expects driverless cars to become capable enough, in short order, to avoid accidents altogether. If pharmaceuticals, airplanes, and gadgets aren’t expected to improve over time, why should autonomous cars, which do improve over their lifetimes, be held to the same stringent standards before being released to the public?
Driverless cars promise a safe, efficient alternative to human drivers. Uber was the first to launch a service, in Pittsburgh, that gives the general public a flavor of the driverless experience. Marketplace’s Erika Beras recently took a ride in one of these vehicles, which by law had a safety driver behind the wheel. She noticed that the driver had to manually override the vehicle five times within the first 10 minutes of the trip, a sign that the AI (artificial intelligence) piloting the vehicle still has a ways to go before it outperforms human drivers.
Human drivers, however, are pretty terrible. The National Highway Traffic Safety Administration (NHTSA) counted over 2.3 million injuries and 32,000 automotive-related deaths in the U.S. in 2014. By being only marginally better than human drivers, driverless cars will save scores of lives. By the same token, they will also cause many injuries and deaths. This poses an important question to regulators and the public they serve: should society accept the risk posed by driverless cars, knowing that it is smaller than the risk posed by human drivers?
If the goal is to save lives and prevent injuries, then driverless cars ought to be adopted as soon as they outperform their human counterparts. Safety is traditionally measured in incidents per million vehicle miles traveled. With few driverless cars operating on real roads, researchers are racking up virtual miles in video-game-like simulators to train AIs and measure their performance, in hopes of covering most real-world conditions. Unfortunately, it is debatable whether such simulations truly capture all possible scenarios. Unless we find a way to quickly train the algorithms and reach some consensus that they have experienced the vast majority of possible edge cases, many lives will be lost needlessly in the time it would take to reliably verify incidents per million miles traveled in controlled test settings.
I propose a simple approach to determining when driverless cars are as safe as their manned counterparts. As a starting point, let’s estimate the probability of a human driver making a mistake. NHTSA counted 6.1 million car crashes in 2014; to a first approximation, let’s assume each crash involved a single driver/vehicle combination. But when should blame be attributed to the driver? Modern vehicle engineering and tires have made it extremely unlikely for a crash to result from a vehicle defect, so, again to a first approximation, let’s assume every crash was caused by a driver error. Meanwhile, a U.S. Department of Transportation survey finds that Americans take 1.1 billion trips per day, or roughly 400 billion trips per year. If we regard a crash as a trip that ended in a crash rather than arriving safely at its destination, then the likelihood of a human driver crashing on a given trip is 6.1 million divided by 400 billion, or approximately 1 in 65,000.
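This back-of-the-envelope arithmetic is easy to check. A few lines of Python, using only the NHTSA and DOT figures quoted above:

```python
# Odds of a human-driven trip ending in a crash, from the figures cited above.
crashes_per_year = 6.1e6        # NHTSA-reported U.S. crashes, 2014
trips_per_year = 400e9          # ~1.1 billion daily trips (U.S. DOT), rounded

p_crash = crashes_per_year / trips_per_year
print(f"P(crash per trip) ~ {p_crash:.2e}")        # ~1.5e-05
print(f"i.e. roughly 1 in {round(1 / p_crash):,}")  # ~1 in 65,000
```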
To determine whether driverless cars are indeed safer than their human-piloted counterparts, we next need the likelihood of an AI crashing. To a first approximation, let’s assume a driverless car handles perfectly any scenario it has previously navigated safely; it is the circumstances it has NOT encountered that could result in a crash. Although an AI should be able to “learn” from previous encounters and react safely to slightly different circumstances, let’s conservatively assume that any new circumstance the fleet of AIs hasn’t encountered ends in a crash. If the distribution of all possible circumstances follows a normal (Gaussian) distribution, then to match the human crash rate of 1 in 65,000 the AI needs to have encountered roughly 99.9985% of all possible circumstances. Mathematically speaking, the AI has to cover all possible scenarios out to about 4.33 standard deviations from the most likely driving conditions. If semiconductor CAD tools can run statistical analyses over the distributions of many hundreds of independent variables, I expect engineers can build tools that generate, and subject AIs to, this breadth of possible scenarios.
I could be off by a few orders of magnitude, or perhaps even outright wrong in my assumptions. The point I am trying to make is that the biggest obstacle to driverless cars is our readiness to adopt them. I encourage our greatest minds to build the tools that will computationally create almost every scenario a driverless vehicle can encounter. I challenge our state governments to follow the White House’s policy guidelines and pave the way for driverless cars. The brute-force method of subjecting driverless cars to billions of miles of road tests will only delay their benefits and result in the needless loss of many thousands of lives.
Shahin is a partner at Lux Capital. Based in Silicon Valley, he invests in space, robotics, AI, transportation, VR, and brain-tech companies; follow him on Twitter.