The View from the Front Seat of the Google Self-Driving Car, Chapter 4

Chris Urmson
Jan 12, 2016 · 5 min read

Chapter 1, Chapter 2, and Chapter 3

Deadly car crashes surged in the first half of 2015 — by 14% nationwide and 20% in California — with experts projecting that deaths may have topped 40,000 for the year. It’s the equivalent of a 737 full of passengers falling out of the sky five days a week, all year long, yet we seem to accept this as the price of our mobility.

Self-driving cars have the potential to reduce those numbers, because they eliminate the driver inattention and error that lead to thousands of collisions, injuries, and deaths — in fact, 94% of crashes are caused by human error. This is why many people are excited about autonomous vehicles: the question my team and I get asked most these days is, “When will they be ready?” But before that, there’s an important question we have to answer together as a society: “How safe do they have to be before we decide they’re ready?”

The obvious comparison is with human drivers, but that’s harder to make than it sounds. It’s difficult to compare the performance of autonomous vehicles meaningfully with that of human drivers: the performance of self-driving cars is highly measurable, while there isn’t robust data about how safe (or not) human drivers actually are.

The challenge of measuring humans’ (un)safety

Human drivers are frequently unsafe in ways that are really difficult to measure. Although there are good statistics on the roles that driver distraction, alcohol impairment, and speeding play in actual collisions, it’s hard to measure the general stress and worry that this behavior causes for nearby drivers. Even an experienced and safety-conscious driver can vary in their state of alertness, affected by a tough day at work or kids in the backseat. This is why car insurance companies are experimenting with devices or apps that show how often a driver speeds or how hard they tend to brake, so insurers can get a better understanding of their customers’ safety-related habits.

Crash rates, which society generally looks to as an indication of driver safety, aren’t as reliable a measuring stick as they might seem at first glance. The problem is that most crashes are never reported to police or to insurance companies. This is especially true of the most common types of collisions: the minor fender benders that happen all the time on city streets. And each state has different thresholds for determining when an accident is serious enough to count as a crash in its local statistics. In fact, according to National Highway Traffic Safety Administration (NHTSA) data, unreported incidents account for 55% of all crashes. That’s why we commissioned research from the Virginia Tech Transportation Institute to establish a methodology that can be used to better compare the crash rates of self-driving cars and human-driven cars.
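To make the underreporting problem concrete, here is a rough sketch (in Python) of how reported human crash rates might be adjusted before being set alongside self-driving crash rates. This is only an illustration with made-up figures; it is not the Virginia Tech methodology.

```python
# Illustrative sketch (not the VTTI methodology): adjusting reported human
# crash rates for underreporting before comparing against self-driving rates.
# All figures below are hypothetical placeholders.

def crashes_per_million_miles(crashes, miles):
    """Normalize a crash count to a per-million-mile rate."""
    return crashes / miles * 1_000_000

# Hypothetical human-driver figures: reported crashes only.
reported_human_rate = crashes_per_million_miles(crashes=2, miles=1_000_000)

# If ~55% of crashes go unreported, reported figures capture only ~45% of them.
UNREPORTED_SHARE = 0.55
adjusted_human_rate = reported_human_rate / (1 - UNREPORTED_SHARE)

# A self-driving fleet can log every contact, so no adjustment is applied here.
self_driving_rate = crashes_per_million_miles(crashes=3, miles=1_300_000)

print(f"Human (reported only): {reported_human_rate:.2f} per million miles")
print(f"Human (adjusted):      {adjusted_human_rate:.2f} per million miles")
print(f"Self-driving (logged): {self_driving_rate:.2f} per million miles")
```

The point of the adjustment is simply that an apples-to-apples comparison has to account for the crashes humans never report, which a self-driving fleet would log automatically.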

Measuring the performance of our self-driving cars

As we develop our self-driving car, we’re constantly testing, analyzing and evaluating how our software performs in multiple ways. We do this on our test track, in the real world (more than 1.3 million miles to date), and in our simulator (more than 3 million miles a day). Ultimately, a self-driving car’s readiness for the public can’t be boiled down to a single number, but we can accumulate a portfolio of metrics for our system that are useful to watch over time. Let’s take a look at some examples from a report we recently submitted to the California DMV. (Full report here.)

One metric we’re watching closely as an important indicator of our progress is the rate of what we call “simulated contacts.” These are situations in which, when we replayed a real-world situation in our simulator, we determined that our vehicle would likely have made contact with another object if our test driver hadn’t taken over driving. There were 13 of these incidents in the DMV reporting period (though 2 involved traffic cones and 3 were caused by another driver’s reckless behavior). What we find encouraging is that 8 of these incidents took place in ~53,000 miles in ~3 months of 2014, but only 5 of them took place in ~370,000 miles in 11 months of 2015. This trend looks good, and we expect the rate of these incidents to keep declining. (That said, the number of incidents like this won’t fall constantly; we may see it increase as we introduce the car to environments with greater complexity caused by factors like time of day, density of road environment, or weather.)

The report also includes a metric that’s a proxy for the overall stability of the autonomous driving system. (While we’re still adding new capabilities to the software, this isn’t one of our top priority metrics, but it’ll be important once we want to “finalize” versions of our software and load it into vehicles that the public could ride in.) There were 272 instances in which the software detected an anomaly somewhere in the system that could have had possible safety implications; in these cases it immediately handed control of the vehicle to our test driver. We’ve recently been driving ~5300 autonomous miles between these events, which is a nearly 7-fold improvement since the start of the reporting period, when we logged only ~785 autonomous miles between them. We’re pleased with this direction and we’ll focus more on this in the future.
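As a sketch of how a metric like this could be computed, here is a minimal Python example that derives miles-between-handoffs from a toy disengagement log. The log format, field names, and numbers are hypothetical and are not our actual tooling.

```python
# Minimal sketch (hypothetical log format): computing autonomous miles driven
# per anomaly-triggered handoff from (odometer_miles, was_anomaly_handoff) records.

from typing import List, Tuple

def miles_between_handoffs(log: List[Tuple[float, bool]]) -> float:
    """Total autonomous miles divided by the number of anomaly handoffs."""
    total_miles = log[-1][0] - log[0][0] if log else 0.0
    handoffs = sum(1 for _, is_handoff in log if is_handoff)
    return total_miles / handoffs if handoffs else float("inf")

# Toy example: 3 handoffs spread over ~15,900 miles -> ~5,300 miles apiece.
example_log = [(0.0, False), (5_200.0, True), (10_650.0, True), (15_900.0, True)]
print(f"{miles_between_handoffs(example_log):.0f} autonomous miles per handoff")
```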

We have many other metrics and methodologies that will be useful for establishing our safety record over time. On our test track, we run tests that are designed to give us extra practice with rare or wacky situations. And our powerful simulator generates thousands of virtual testing scenarios for us; it executes dozens of variations on situations we’ve encountered in the real world by adjusting parameters such as the position and speed of our vehicle and of other road users around us. This helps us test how our car would have performed under slightly different circumstances — valuable preparation for a public road environment in which fractions of seconds can be of critical importance.
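To give a flavor of what “adjusting parameters” can mean, here is a minimal sketch in Python of generating variations around one logged scenario. The class and field names are hypothetical and far simpler than what the simulator actually does.

```python
# Minimal sketch (hypothetical API): perturbing the speed and position of the
# vehicles in one logged scenario to generate many nearby test variations.

import itertools
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    ego_speed_mps: float     # our vehicle's speed
    other_speed_mps: float   # another road user's speed
    other_offset_m: float    # that road user's position offset along the road

def variations(base: Scenario):
    """Yield scenarios with small perturbations around a logged baseline."""
    speed_deltas = (-2.0, 0.0, 2.0)   # meters per second
    offset_deltas = (-5.0, 0.0, 5.0)  # meters
    for ds_ego, ds_other, d_off in itertools.product(
        speed_deltas, speed_deltas, offset_deltas
    ):
        yield replace(
            base,
            ego_speed_mps=base.ego_speed_mps + ds_ego,
            other_speed_mps=base.other_speed_mps + ds_other,
            other_offset_m=base.other_offset_m + d_off,
        )

base = Scenario(ego_speed_mps=13.0, other_speed_mps=11.0, other_offset_m=0.0)
print(sum(1 for _ in variations(base)), "variations of one logged scenario")  # 27
```

Even this toy version shows how a single real-world encounter fans out into dozens of test cases, which is why the simulator can cover so many more situations than road testing alone.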

Thanks to all this testing, we can develop measurable confidence in our abilities in various environments. This stands in contrast to the hazy variability we accept in experienced human drivers — never mind the 16-year-olds we send onto the streets to learn amidst the rest of us. Although we’re not quite ready to declare that we’re safer than average human drivers on public roads, we’re happy to be making steady progress toward the day we can start inviting members of the public to use our cars.
