Getting Past the Last 10%

Perceptive Automata
Dec 12, 2019

In the 1980s, a vision-guided Mercedes-Benz robotic van gave us one of the first looks at what we now call a self-driving car. It was far from perfect, but it reached speeds of 39 miles per hour on streets without traffic or pedestrians. Two decades later, the DARPA Grand Challenge set in motion the idea that autonomous vehicles could one day become reality.

From there, the auto industry’s fascination with autonomous vehicle development snowballed into what we now see as a race to successfully deploy the first fully autonomous vehicle on public roads.

Automakers and software developers alike have been hard at work for the past few years — building, testing, and iterating on technology that will make our commutes more efficient, our roads safer, and our rides smoother.

We’ve come a long way to that end, but we’re certainly not there yet — and here’s why:

Until recently, the autonomous vehicle industry was hyper-focused on one thing: making it possible for a car to move without a human driver behind the wheel. That sounds simple, but of course it isn't. The fact that a car with no human driver can obey speed limits, identify stationary objects in the road, and even stop at a red light is an incredible feat. Autonomous vehicles work, and some might even say we're 90% of the way there. But we've only scratched the surface of the most complex piece: how these vehicles work with people and within our current infrastructure.

Now we've reached the last 10% of development, where we refine the technology and bring it to the public at scale. If the industry can get this piece right, it will have achieved its vision of changing the mobility paradigm forever. But this last piece is also the hardest to solve. It's what stands between us and the self-driving success story, and it's precisely why developers and automakers have been pushing back their arrival timelines over the past several months.

The challenge we now face goes well beyond a straightforward engineering problem and into the human element. Moravec's paradox observes that tasks humans find hard, such as logic and calculation, are comparatively easy to teach an AI, while the low-level perceptual and sensorimotor skills that come easily to humans require enormous computational resources for an AI. Social reasoning in complex driving scenarios is a perfect example of this.

Human drivers are well equipped to predict the behavior of other humans. We have an innate ability to read social cues, which guides us when we're driving and helps us anticipate what another driver or pedestrian will do at any given moment. Teaching autonomous vehicles to interpret and predict human behavior, on the other hand, is an extremely difficult and highly complex task.

It requires more than a robust sensor system and redundant software. It can only be solved with an AI that pairs computer vision with behavioral-science techniques to interpret human intent. This is the hard problem we work on every day, and we look forward to a world where computers and humans interact seamlessly.
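To make that idea concrete, here is a minimal sketch, not our actual system, of what pairing computer vision with behavioral science can look like in code: a standard CNN backbone whose head is trained to match the distribution of graded human judgments about a pedestrian's intent, rather than a single hard label. The class name, label scheme, and bin count are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PedestrianIntentModel(nn.Module):
    """Hypothetical sketch: a CNN backbone with a head that predicts a
    distribution over graded human judgments of pedestrian intent."""
    def __init__(self, num_intent_bins: int = 5):
        super().__init__()
        backbone = models.resnet18(weights=None)  # generic image feature extractor
        backbone.fc = nn.Identity()               # expose raw 512-dim features
        self.backbone = backbone
        self.intent_head = nn.Linear(512, num_intent_bins)

    def forward(self, pedestrian_crops: torch.Tensor) -> torch.Tensor:
        # pedestrian_crops: (batch, 3, 224, 224) image crops of pedestrians
        features = self.backbone(pedestrian_crops)
        return self.intent_head(features)  # logits over intent bins

# Train against soft labels (distributions of human annotator judgments,
# e.g. "how likely is this person to cross?") instead of hard classes.
model = PedestrianIntentModel()
crops = torch.randn(8, 3, 224, 224)                   # dummy batch of crops
judgments = torch.softmax(torch.randn(8, 5), dim=-1)  # dummy soft labels
loss = nn.functional.cross_entropy(model(crops), judgments)  # soft-target CE
loss.backward()
```

The key design choice in this sketch is the soft-target loss: by modeling the spread of human judgments rather than a binary crossing / not-crossing label, the network can express the same graded uncertainty a human driver relies on when reading a pedestrian.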
