What is an automated driving corner case?

RightHook, Inc.
Jul 24, 2019


When I was 20 years old, I taught my then-girlfriend how to drive. Like me, she had two arms, two legs, decent vision (after correction, the DMV notes), a good ear, and a human brain. I had a decent understanding of her spatial reasoning skills, her tendency towards risk-taking, and how she would respond to coaching (not well). All of this meant that as I watched from the passenger seat, I could understand what she was doing, guess why she was doing it, pontificate on whether it was good driving practice, and give her actionable feedback.

Now consider the differences between teaching someone to drive and programming a computer.

At least once a day while coding, I ask, “Now why did THAT happen?” Of course, the computer won’t answer. The computer can’t explain what it saw because it doesn’t see. It can’t explain what it was thinking because it doesn’t think. Explaining to it what the output should’ve been or how it should’ve arrived there is futile. It isn’t until I go in and change the instructions it’s executing that its behavior will change — hopefully for the better.

The uncomfortable truth is that automated vehicle software is the same kind of software we’ve always written, with a few machine learning models sprinkled in where they perform better at narrowly-scoped tasks. The rest of the code is the operating system; drivers communicating with specialized hardware; distributed processes turning data into other data, then into other data, then into decisions; monitors ready to sound an alarm if one of these processes appears to hang. And that’s just the sense-plan-act, single-robot control that we’re used to. At production scale, add map servers, teleoperation hooks, V2X communication, and (oh yeah!) user experience. All of it software: some written by the open source community, some by suppliers, some by integrators. Mostly by hand.
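
To make “monitors ready to sound an alarm” concrete, here is a toy sketch of a heartbeat watchdog. It is illustrative only (hypothetical names, Python for brevity); real systems typically rely on dedicated middleware and hardware watchdogs for this job.

```python
import time

# Toy heartbeat watchdog: a process is flagged as hung if it hasn't
# checked in within the timeout. Illustrative only.

class HeartbeatWatchdog:
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_heartbeat = {}

    def heartbeat(self, process_name):
        # Called by each monitored process on every cycle.
        self.last_heartbeat[process_name] = time.monotonic()

    def hung_processes(self):
        now = time.monotonic()
        return [name for name, t in self.last_heartbeat.items()
                if now - t > self.timeout_s]

watchdog = HeartbeatWatchdog(timeout_s=0.5)
watchdog.heartbeat("planner")
watchdog.heartbeat("perception")
time.sleep(0.6)
watchdog.heartbeat("planner")        # planner checks in again; perception doesn't
print(watchdog.hung_processes())     # ['perception'] -> sound the alarm
```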

While virtually everyone in the automated driving industry acknowledges that automated drivers will not be perfect, many shift the focus from the problems of developing genuinely safe software to the limitations of today’s machine learning technology in comparison to the human brain.

However, the differences between human and computer brains lead us to the crux of this post: bugs in an automated driver do not necessarily correspond to events that human drivers consider exceptional. Therefore, a corner case for an autonomous driving system is any set of circumstances that falls outside what its hardware and software were designed and implemented to handle, whether those circumstances arise from the external world or from the infamous fat finger.

Consider an example from my own experience. At RightHook, we are developing a multi-agent, reactive traffic simulation, which is like writing an automated vehicle path-planner that runs on thousands of agents at once. When we first prototyped stop sign logic, our first test — coming up to and stopping for a stop sign — worked well. Then we set up a scenario where a pedestrian crossed mid-block in front of the car, about 12 meters before the stop sign. Our simulated vehicle stopped nicely for the pedestrian, then opened the throttle and went through the stop sign like it wasn’t even there. Now, the great thing about having a deterministic autonomous vehicle simulation is the ability to run the same scenario over and over again and get the same result. By stepping through our code, we found that in stopping for the pedestrian, the vehicle considered itself to have already stopped for the stop sign. We rewrote the buggy code and added a test to ensure the bug could never become un-fixed.
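
To make the failure mode concrete, here is a minimal sketch (hypothetical names; the real planner is considerably more involved). The bug boils down to a single “I have stopped” flag shared between every reason to stop.

```python
# Simplified, hypothetical illustration of the stop-sign bug described above.

class BuggyStopSignLogic:
    """One flag for every reason to stop: the actual bug in miniature."""

    def __init__(self):
        self.has_stopped = False

    def on_full_stop(self, vehicle_position_m):
        self.has_stopped = True  # stopping for the pedestrian sets this too

    def may_proceed_through_stop_sign(self):
        return self.has_stopped


class FixedStopSignLogic:
    """Track where the stop happened, not just that one happened."""

    def __init__(self, stop_line_position_m, tolerance_m=2.0):
        self.stop_line_position_m = stop_line_position_m
        self.tolerance_m = tolerance_m
        self.stopped_at_sign = False

    def on_full_stop(self, vehicle_position_m):
        # Only a stop at (or very near) the stop line satisfies the sign.
        if abs(vehicle_position_m - self.stop_line_position_m) <= self.tolerance_m:
            self.stopped_at_sign = True

    def may_proceed_through_stop_sign(self):
        return self.stopped_at_sign


# The pedestrian scenario: a full stop 12 meters before the stop line.
buggy = BuggyStopSignLogic()
fixed = FixedStopSignLogic(stop_line_position_m=50.0)
buggy.on_full_stop(vehicle_position_m=38.0)
fixed.on_full_stop(vehicle_position_m=38.0)
print(buggy.may_proceed_through_stop_sign())  # True  -> blows through the sign
print(fixed.may_proceed_through_stop_sign())  # False -> still owes a stop at the line
```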

The above example is instructive because it’s an error that a human driver would never make. However, because we’re implementing driving logic as computer code, we left ourselves open to making it. It’s also instructive because it was only by testing with a seemingly unrelated scenario — a pedestrian crossing mid-block — that we found the bug.

A few facts about bugs. Bugs are not necessarily easy to find, especially in a system that is meant to run in real time. The (non-simulated) world does not stop when you hit a breakpoint. Next, bugs are not necessarily exotic. There are plenty of mistakes that developers make over and over again, with sometimes-innocuous and sometimes-deadly results. Off-by-one errors, angle wrap, unit conversions, and misunderstood libraries and APIs are all well-known pitfalls, yet even professionals don’t handle them perfectly every time. Finally, bugs aren’t necessarily the same in one self-driving stack as in another. Even when using some of the same algorithms, it’s extremely unlikely that two stacks will have the same implementation.
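
As one illustration of how banal these pitfalls are, consider angle wrap (hypothetical helper names below): a heading controller that subtracts angles naively will treat +179° and -179° as nearly opposite headings, when they are actually two degrees apart.

```python
# Hypothetical illustration of an angle-wrap bug in a heading controller.

def heading_error_naive(target_deg, current_deg):
    # Ignores wrap-around at +/-180 degrees.
    return target_deg - current_deg

def heading_error_wrapped(target_deg, current_deg):
    # Wraps the difference into [-180, 180) so the shortest turn is chosen.
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

print(heading_error_naive(179.0, -179.0))    # 358.0 -> commands a near-full spin
print(heading_error_wrapped(179.0, -179.0))  # -2.0  -> a small correction
```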

What then is the “long tail” of autonomous driving? Is it really kids in Halloween costumes? Kangaroos? Advertisements on the sides of buses? Or is it more likely the long tail that challenges every software project: an endless queue of banal but difficult bugs that must be diagnosed, fixed, and prevented from ever happening again? The self-driving industry has had one fatality. It was not an exotic scenario, but it sure does sound like something wasn’t working right.

Finally, what is the solution? How do we eliminate all the bugs so we can focus on the last few percentage points of reliability?

First, we must harden the boring stuff — especially maps. At RightHook we have had the opportunity to work on some very large maps, and they will always expose poor assumptions you’ve made about navigating the road network with other road users. Other companies and the public should have this same opportunity, which would simultaneously raise the level of map quality assurance, give path planning developers more interesting test cases, and speed convergence to a mapping standard.

Next, we need simulation that can help developers diagnose, fix, and prevent bugs. Simulation allows different parts of the system to be deterministically put in the loop, from a single path-planning or sensor fusion subsystem to the entire system running on production hardware. Developers are then able to isolate a repeatable error and fix it for good.
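
Here is a toy sketch of the property that matters (stand-in scenario code, not our simulator’s API): pin the seed and every run of a scenario produces the identical trace, so a failure found once can be replayed and stepped through until it’s fixed, then locked in with a regression test.

```python
import random

def run_scenario(seed, duration_s=10.0, dt_s=0.1):
    """Stand-in for a stochastic traffic scenario; returns its event trace."""
    rng = random.Random(seed)
    trace = []
    t = 0.0
    while t < duration_s:
        # Placeholder for agent behavior that involves randomness
        # (e.g., when a simulated pedestrian decides to start crossing).
        trace.append((round(t, 1), rng.random()))
        t += dt_s
    return trace

# Same seed, same trace: the failing run can be reproduced exactly, every time.
assert run_scenario(seed=42) == run_scenario(seed=42)
# A different seed gives a different (but equally repeatable) run.
assert run_scenario(seed=7) != run_scenario(seed=42)
```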

Some thought leaders question the value of simulation, saying that it’s unable to model truly rare and exceptional events. But again, exceptional for whom? We’ve all seen computer programs crash in what we consider normal circumstances. When automated drivers can cover entire operational domains in what humans consider banal scenarios, then we can talk about simulation being too easy. But for almost all automated driving companies, that time is several years away. In the meantime, simulation is virtually selling itself.
