What self-driving cars can learn from humans

Sam Anthony
Perceptive Automata
4 min read · Dec 18, 2018
The DARPA Urban Challenge showed that self-driving cars could follow the rules of the road. But before they can be widely deployed, they have much more to learn from humans.

I’ve written before about how human state of mind is likely the biggest remaining problem for self-driving cars. It’s also something that’s incredibly central to human driving in dense urban areas. If it’s so important for humans, though, why has it been left until now for self-driving cars? There are a few reasons, and they all boil down to one big assumption: that the things that are easy for humans to understand are easy for machines to understand.

In other words, the more effortlessly and automatically humans can do something, the less likely we are to notice it. When we come up with rules for how to do something, we tend to think about the aspects that make it hard for us, as humans. That implicit assumption — that the critical features of driving are the ones that humans have the hardest time learning — got baked into the earliest tests of automated driving.

The modern history of the self-driving car starts with the DARPA Grand Challenge. The first Grand Challenge in 2004 offered $1 million to any vehicle that could complete a 150-mile off-road desert course. No car got more than seven miles past the start line. The next year, five vehicles finished a similar desert course, a stunning advance that raised the bar for the next challenge.

In 2007, DARPA held the Urban Challenge. Instead of an isolated desert course, this test took place on an urban track, an abandoned California subdivision, and required the cars to share the road and interact with each other. The goal, according to DARPA, was for each vehicle to drive well enough to pass a California driver’s test. Six vehicles completed the course, and the race toward self-driving cars was on. The members of those six teams make up the core technical leadership of the self-driving industry today.

While DARPA’s goal of a car that could pass a California driver’s test might seem like a high bar, it’s an extremely problematic way to evaluate a machine. A driver’s test is designed to check whether humans have learned the skills needed to operate a car on public roads. Drivers taking the exam are tested on the rules of the road: signals, signs, and speed limits. Those elements are relatively easy for machines, which are excellent at obeying rules. The problem is that machines don’t understand the nuances within and around the rules.

For example, when student drivers learn how to navigate around pedestrians, they focus on the driver’s handbook rules about right of way, yielding, and signaling, not on the more esoteric question of what makes a human a human. Unlike a machine, they already know what a human looks like. More specifically, they can tell, from a pedestrian’s stance and eye contact, whether that person wants to cross the street. You wouldn’t ask a human driver “Can you tell if somebody wants to cross the street?” because knowing the answer to that question is part of what makes us human. You would, however, have to ask a self-driving car that question, which is why requiring automated vehicles to pass a California driver’s test is inherently insufficient.

So what would be a better benchmark for self-driving cars, if not a standardized driving test? Humans are tested on the more machine-like capability of following the binary rules of the road; machines, by contrast, need to be tested on human-like abilities, such as recognizing that eye contact from a pedestrian signals an intent to cross the road and that the driver should therefore stop.

The behavioral science element of self-driving cars should not be ignored. Currently, the majority of autonomous-vehicle crashes result from a car’s inability to detect and interpret human behavior. Body language, eye movement, hesitation, awareness, and intent are just a few of the dozens of behavioral cues that go into the unremarkable daily exchange between pedestrian and driver.
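To make that distinction concrete, here is a purely illustrative sketch of what estimating pedestrian intent from behavioral cues might look like. The cue names, weights, and threshold are invented for this example; they are not Perceptive Automata’s model or any production system. The point is only that the judgment is graded rather than a binary rule.

```python
# Illustrative sketch only: hypothetical cue scores are blended into a
# crossing-intent estimate, then compared against a yield threshold.
from dataclasses import dataclass


@dataclass
class PedestrianCues:
    """Hypothetical per-frame scores in [0, 1] from upstream perception."""
    eye_contact: float       # is the pedestrian looking toward the vehicle?
    body_orientation: float  # how squarely are they facing the roadway?
    hesitation: float        # are they pausing at the curb?


def crossing_intent(cues: PedestrianCues) -> float:
    """Blend cues into one intent score; the weights are made up."""
    score = (
        0.45 * cues.eye_contact
        + 0.35 * cues.body_orientation
        + 0.20 * (1.0 - cues.hesitation)  # hesitation treated as lowering intent
    )
    return max(0.0, min(1.0, score))


def should_yield(cues: PedestrianCues, threshold: float = 0.6) -> bool:
    """A purely rule-following car checks only right of way; this adds intent."""
    return crossing_intent(cues) >= threshold


if __name__ == "__main__":
    pedestrian = PedestrianCues(eye_contact=0.9, body_orientation=0.8, hesitation=0.2)
    print(f"intent={crossing_intent(pedestrian):.2f}, yield={should_yield(pedestrian)}")
```

A real system would learn these relationships from data rather than hand-tuning weights, but even this toy version shows why a driver’s-test-style checklist of rules never exercises the ability being tested here.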

As humans, we may quantify things in a binary way, but we understand them in a much richer fashion, and capturing that richness is the hardest part of programming machines to perform human actions. We need to think critically about the behaviors and mechanisms we take for granted so that we can develop safe, reliable, and technically sound self-driving cars.

The Society of Automotive Engineers (SAE International) recently announced that it is beginning the process of developing industry-wide standards for these vehicles. In developing these standards, regulatory and industry bodies must not only consider the competencies involved in human driving but also those involved in normal human functioning.

To understand how to test things like social intelligence and theory-of-mind reasoning, policymakers must look beyond the automotive testing world and into the behavioral science literature. Testing mechanisms must include real-world interactions, not just simplified simulations of actual environments. Without real-world testing, we won’t have proof that the software systems behind self-driving cars can effectively mimic the human ability to read social situations.

We are still trying to define what it means for a self-driving car to work properly. As we work toward a precise definition, we need to be aware that the current interpretation, based on the required skills and responsibilities of a human driver, tells only part of the story, and it might be the easy part.

The current legal, structural, and philosophical infrastructure that determines what makes a good driver is insufficient, because all of it starts from the same assumption of human competence. To understand the whole story, we have to think more deeply about what makes us human and what makes humans better drivers than machines. When self-driving cars can clear that bar, they will fulfill their promise.

Sam Anthony is CTO and co-founder of Perceptive Automata, providing human intuition for machines.