Learning from Arizona

Sam Anthony
Perceptive Automata
Nov 12, 2019

In March of 2018, the AV industry was shaken when an Uber self-driving vehicle fatally struck pedestrian Elaine Herzberg in Arizona. The crash was a tragedy and a wake-up call to the entire autonomous vehicle industry. Since that incident, Uber has transformed its internal processes and restructured its entire safety posture, as evidenced by the recent formation of the independent self-driving safety and responsibility board (SARA) and the comprehensive Safety Report published one year ago. By all the evidence, they are now an industry leader in thinking clearly and deeply about safety.

There is still plenty to learn from the recently released NTSB (National Transportation Safety Board) report on the Arizona crash. The issues in the report point to a challenge our industry should pay close attention to: the difficulties that arise at the intersection of perceptual understanding, safety systems, false positives, and on-road testing.

The crux of the NTSB report is that multiple sensor systems in the autonomous vehicle detected the pedestrian (albeit not necessarily as a pedestrian), but the vehicle was configured not to engage emergency stops for pedestrians outside of crosswalks. At first pass, this decision seems incomprehensible, and in retrospect it was the wrong one. But it was a choice made in a setting that is more complicated than it appears without some understanding of how autonomous vehicles work.

The fundamental issue is the problem of false positives leading to emergency stops. A false positive means that the vehicle is responding to something it thinks will be in the path of the car, but which actually will not be. An emergency stop is, well, slamming on the brakes (Uber defines it as a deceleration greater than 6.5 meters per second squared (m/s²)). If a pedestrian intends to cross the road in front of the AV, even if they aren't yet in the path of the car, it has to stop. If the car only detects that pedestrian a couple of seconds before it would reach them, it has to stop as quickly as it can. The good news is that autonomous cars can stop extremely quickly, much more quickly than a human can stop a car. The bad news is what happens when the AV was wrong about the pedestrian ending up in its path: when there was a false positive.

Emergency stops are dangerous. They are certainly better than the alternative, but they are called "emergencies" for a reason. Emergency stops are uncomfortable for passengers inside the car and can lead to serious injuries, including broken collarbones and whiplash. Emergency stops are equally scary for people outside the car. The sound of screeching tires from a driver slamming on the brakes startles other road users and can cause them to behave unpredictably. At 43 mph, the speed the Uber vehicle was traveling in the Arizona crash, hard braking can lead to the vehicle getting rear-ended, or cause a pile-up behind it. However, should an autonomous vehicle not stop when it needs to, the consequences can be deadly. For AV companies testing on public roads, the only acceptable behavior is to never fail to stop when a stop was possible.
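To make the stakes concrete, here is a back-of-the-envelope sketch using the two numbers in this article: Uber's 6.5 m/s² emergency-stop threshold and the vehicle's 43 mph speed in the crash. It assumes idealized constant deceleration, which real braking only approximates.

```python
# Rough braking physics under constant deceleration (an idealization).
MPH_TO_MPS = 0.44704

def stopping_distance_m(speed_mph, decel_mps2=6.5):
    """Distance covered while braking from speed to rest: d = v^2 / (2a)."""
    v = speed_mph * MPH_TO_MPS
    return v * v / (2 * decel_mps2)

def stopping_time_s(speed_mph, decel_mps2=6.5):
    """Time to brake from speed to rest: t = v / a."""
    return speed_mph * MPH_TO_MPS / decel_mps2

print(f"{stopping_distance_m(43):.1f} m")  # roughly 28 m
print(f"{stopping_time_s(43):.1f} s")      # roughly 3 s
```

So even a maximal emergency stop from 43 mph takes on the order of 28 meters and 3 seconds, which is why a detection made only "a couple of seconds" out leaves so little margin.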

In some ways, the answer is simple: design autonomous vehicles with enough redundant safety systems and sufficiently conservative driving policies that the vehicle never misses an emergency stop. But if the vehicle's driving policies are set to never miss an emergency stop, the number of false positives increases significantly. This means more braking for "no good reason", more uncomfortable passengers, and higher safety risk for those around the vehicle. Too many false positives, and testing on public roads doesn't work.

The more conservative your safety thresholds, the more false positives you have. Since you have to be way out in the direction of maximal safety, the only answer is to shift the safety/false positive tradeoff curve. You do that by understanding the world.
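This tradeoff can be pictured with a toy simulation. Everything here is hypothetical, not any company's actual system: imagine a predictor that scores each pedestrian on how likely they are to enter the car's path, and a stop threshold that slides along that score. Pushing the threshold toward "maximal safety" (a lower threshold) drives missed stops toward zero, but the false-positive rate climbs.

```python
import random

random.seed(0)

# Hypothetical predictor scores: higher = more likely the pedestrian
# will enter the vehicle's path. Crossers score high on average,
# non-crossers low, but the distributions overlap.
crossers     = [min(1.0, max(0.0, random.gauss(0.8, 0.15))) for _ in range(1000)]
non_crossers = [min(1.0, max(0.0, random.gauss(0.3, 0.15))) for _ in range(1000)]

def rates(threshold):
    """Miss rate on real crossers, false-positive rate on non-crossers."""
    misses = sum(s < threshold for s in crossers) / len(crossers)
    false_pos = sum(s >= threshold for s in non_crossers) / len(non_crossers)
    return misses, false_pos

for t in (0.7, 0.5, 0.3, 0.1):
    m, fp = rates(t)
    print(f"threshold={t:.1f}  missed stops={m:.1%}  false positives={fp:.1%}")
```

With overlapping distributions, the only way to get a better curve (fewer false positives at zero misses) is to make the scores themselves more separable, i.e. to understand the world better.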

But the alternative (not testing on public roads) isn't an option either. On-road testing is essential to improve the performance of these vehicles, earn consumer trust, and reach sufficient performance to bring them to market safely. Without on-road testing, the industry would grind to a halt. So, starting with the necessary safety threshold of zero missed emergency stops, we have to ask ourselves a fundamentally different question: how can we have both safety-first testing and acceptable false-positive levels? One answer is to put yourself in a situation where false positives are less damaging. If you only drive at low speeds, even a quick hard stop is a relatively low-stakes maneuver. Another approach, the one used by Uber in Arizona, is to have a safety driver who monitors the vehicle and ensures that it stops in an emergency. But that's hard to get right. Humans are fallible, particularly when the job is to pay close attention to a system that almost never requires intervention. Even highly trained professionals like airline pilots working in redundant teams still fall victim to this. That's what led to the crash of Air France flight 447 in 2009.
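The low-speed option works because the stakes scale quadratically: both stopping distance and kinetic energy grow with the square of speed. Comparing an illustrative 15 mph low-speed test to the 43 mph of the Arizona crash:

```python
# Stopping distance and kinetic energy both scale with v^2, so the
# ratio of speeds squared tells you how much lower the stakes are.
low, high = 15.0, 43.0  # mph; 15 is an illustrative value, 43 is from the crash
ratio = (high / low) ** 2
print(f"A {high:.0f} mph stop covers ~{ratio:.1f}x the distance "
      f"of a {low:.0f} mph stop")  # ~8.2x
```

A false positive at 15 mph is an annoyance; at 43 mph it is a genuinely dangerous maneuver.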

Fundamentally, the only way to reduce false positives is to understand the world better. First of all, you need to detect every pedestrian. That’s not enough, though. You need to have the best possible guess about what they’re going to do next. Slamming on the brakes every time you pass a pedestrian in the road is not a solution. But if you see somebody in the road and can make a judgment — do they really want to jaywalk? Or are they just standing in the road waiting for an Uber? — then you can modulate the vehicle’s behavior so that any required emergency maneuvers are less dangerous.
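One way to picture "modulating the vehicle's behavior" is a toy policy that maps a predicted crossing probability to an approach speed from which a comfortable, non-emergency stop is still possible. All names and numbers here are hypothetical illustrations, not any company's actual logic; the 3 m/s² comfort figure is an assumed value.

```python
import math

COMFORT_DECEL = 3.0  # m/s^2, assumed comfortable (non-emergency) braking rate

def approach_speed(distance_m, p_cross, cruise_mps):
    """Pick an approach speed given the distance to a pedestrian and an
    estimated probability that they will enter the vehicle's path.
    Blends between cruise speed (confident they won't cross) and the
    fastest speed from which a comfortable stop fits in the distance."""
    # From v^2 = 2ad: the top speed allowing a comfortable stop in distance_m.
    cautious = min(math.sqrt(2 * COMFORT_DECEL * distance_m), cruise_mps)
    # Deliberately simple linear blend by crossing probability.
    return (1 - p_cross) * cruise_mps + p_cross * cautious

# A pedestrian 30 m ahead who looks likely to cross vs. one who doesn't,
# at a cruise speed of 19.2 m/s (~43 mph):
print(f"{approach_speed(30, 0.9, 19.2):.1f} m/s")  # slows well below cruise
print(f"{approach_speed(30, 0.1, 19.2):.1f} m/s")  # stays near cruise
```

The point is not this particular formula but the shape of the behavior: a good guess about intent lets the vehicle pre-emptively shed speed, so that if a stop is needed it never has to be an emergency one.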

For human drivers, this is second nature. If we see someone we think might want to cross, we slow down and watch how the situation evolves. We don't stop mid-traffic and remain stopped until that person hops in their Uber. Conversely, if they seem like they really don't want to cross, maybe we don't slow down as much, while staying naturally prepared for an emergency stop if needed. If someone is standing casually at a bus stop, we assume they're waiting for a bus. If they suddenly leap into the road? I'd be surprised, too. That's a real emergency, and slamming on the brakes is the correct response.

The only way to get autonomous vehicles on the road safely is to have all redundant safety systems on. This is something the whole industry now understands. And the only way to effectively test with all systems active is to ensure that they have the fewest possible false positives. Autonomous vehicles need a sufficiently rich picture of the world — including what pedestrians and other road users are likely to do. In other words, these systems must truly understand what is likely to happen and activate only during times of real emergency. If we’re going to make progress with developing and deploying autonomous vehicles — and we are! — there’s no other way.


CTO and co-founder of Perceptive Automata, providing human intuition for machines