How to drive 10 billion miles in an autonomous vehicle

Google, Tesla, Zoox, and many more plan to drive billions of miles autonomously with simulation

Michael Dempsey
6 min read · May 31, 2017

There are two major bottlenecks for building an autonomous vehicle (AV) today: Generating supervised data and training/testing your model.

AV engineers train their models by feeding them enough data to react appropriately to a wide range of scenarios. These range from sunlight hitting sensors at different angles, to obstacles flying in front of the car, to strange external driver behavior, and much more.

The problem is it isn’t easy or safe to replicate many scenarios in real-world environments.

A report by RAND found that AVs would have to be driven hundreds of millions of miles and sometimes billions of miles to demonstrate acceptable reliability.

“Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.”

Source: Driving to Safety by RAND
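
As a rough back-of-envelope illustration of why that is (the fleet size, speed, hours, and mileage targets below are my own illustrative assumptions, not RAND's inputs):

```python
# Back-of-envelope: how long would a real-world test fleet need to log the
# miles RAND describes? All numbers here are illustrative assumptions.

def years_to_log(target_miles, fleet_size, avg_speed_mph, hours_per_day):
    miles_per_year = fleet_size * avg_speed_mph * hours_per_day * 365
    return target_miles / miles_per_year

# Assumed fleet: 100 test vehicles averaging 25 mph, 8 hours on the road per day.
for target_miles in (300e6, 5e9):  # "hundreds of millions" to "billions" of miles
    years = years_to_log(target_miles, fleet_size=100, avg_speed_mph=25, hours_per_day=8)
    print(f"{target_miles:,.0f} miles -> roughly {years:,.0f} years")
```

Even fairly generous assumptions land in the decades-to-centuries range, which is the core of RAND's argument.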

Then how do we get to reliable autonomy?

  • We hedge, rolling out AVs in constrained environments. Companies like Uber, Lyft, or Zoox roll out on a city-by-city basis and operate within constrained borders, lowering the technical bar for autonomy. This could work for the Ubers of the world, but traditional OEMs will focus on bridging the gap to a shared-mobility future by continually releasing advanced autonomy features in their vehicles (i.e. Tesla Autopilot), which consumers will expect to be less constrained than a single-ride vehicle.
  • We bypass the traditional technological approaches of today, which require large amounts of data, and instead build models able to reason and learn with little data. Gary Marcus, whose company was acquired by Uber last year, has spent years researching this; however, this type of learning hasn't materialized in AVs yet.

And then there’s simulation

Simulations (from software to hardware to environment), when constructed properly, allow companies to train and test their models on:

  • A variety of world scenarios including traffic, driver behavior, weather, road environment, and more.
  • Multiple sensor suites/arrays. How many LiDARs do I need? Cameras? Radars? Where on the car should they be? Which model hardware do I use?
  • Multiple, scalable random permutations, without having to put a fleet of cars on the road with safety drivers in tow (a rough sketch of one such permutation follows this list).
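
As a minimal sketch of what one scalable permutation of scenario and sensor suite might look like (every field and value here is invented for illustration; real simulators expose far richer parameters and their own APIs):

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    weather: str
    traffic_density: float      # vehicles per km of road
    pedestrian_count: int
    time_of_day: float          # hour of day, 0-24
    sensor_suite: dict          # sensor type -> number of units mounted

def random_scenario(rng: random.Random) -> Scenario:
    """Sample one random permutation of environment and sensor configuration."""
    return Scenario(
        weather=rng.choice(["clear", "rain", "fog", "snow", "low_sun"]),
        traffic_density=rng.uniform(0, 80),
        pedestrian_count=rng.randint(0, 50),
        time_of_day=rng.uniform(0, 24),
        sensor_suite={
            "camera": rng.randint(1, 8),
            "lidar": rng.randint(0, 3),
            "radar": rng.randint(1, 5),
        },
    )

# Generate thousands of unique permutations to farm out to simulation workers,
# instead of waiting for a safety-driver fleet to encounter them on real roads.
rng = random.Random(42)
scenarios = [random_scenario(rng) for _ in range(10_000)]
```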

But we’re not totally there yet

Today, products such as Vires, TASS PreScan, CarSim, Oktal SCANeR, ROS Gazebo, and others allow engineers to simulate sensors, conditions, mechanical builds, and more. While each has its benefits, they fall short in areas crucial to AV simulation, including oversimplified sensor outputs and a limited understanding of how environments impact autonomous models.

Microsoft and Udacity have also created their own simulators purpose-built for AVs/robotics. However, as the tech stack of AVs continues to progress, we have to think about how to implement, interpret, and test these fusions of hardware and software in a high-fidelity way.

High-Fidelity Simulated Environments

While simulating most sensor perception is hard, the sensor that is increasingly prevalent on every vehicle happens to be one that is simple to simulate but difficult to train with.

Optical cameras are being relied upon to deliver vision as the undelivered promise of low-cost LiDARs and a shortage of higher-end units make scalability difficult for OEMs and Tier 1s.

Training data from simulated cameras is only as good as the input, so to properly test perception, engineers need photorealistic simulated environments. Building a complex photorealistic environment is incredibly costly and difficult, and thus nobody has yet built one for the purpose of training self-driving cars.

That’s where people like Craig Quiter come in.

An early email to Craig Quiter, one of the first to build an AV in Grand Theft Auto V

I met Craig almost a year ago as he was posting about something he had built called DeepDrive. I later learned that he was one of the early engineers utilizing a game with a reported $137M in development costs to model real-world scenarios, with high-fidelity graphics…to autonomously drive cars.

DeepDrive: Craig Quiter’s self-driving car simulator for GTA V

A few months later, Craig joined a small startup called Uber, focusing on simulation.

A team at Princeton detailed the advantages of using GTA V in their paper, quantifying the world as 100 square miles, 4M people, 262 vehicles, 1,167 different living beings, 14 weather conditions, and over 70,000 dynamic road segments across urban, rural, desert, and woodland environments.

Are simulated miles really useful?

There are two conflicting trains of thought about the utility of simulation for AVs.

Yes.

Simulation can be used for testing rare cases and for building baseline data. Rare cases (a non-scientific term) are scenarios that are difficult to recreate in the real world, or random enough that we need many permutations to generate them, but not so random that they won't show up across billions of simulated miles. If we are on a journey to 99% reliability, simulation could theoretically get us a large portion of the way there, with some future iteration of AI/ML allowing our models to react to extreme edge situations without specific prior data.
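
To make the rare-case intuition concrete (the one-in-ten-million-miles event rate below is an invented figure, purely for illustration):

```python
# Expected encounters of a rare event at different simulated-mileage scales.
# The event rate is an illustrative assumption, not a measured statistic.
rate_per_mile = 1 / 10_000_000   # assume the event occurs once per 10M miles

for simulated_miles in (1e6, 100e6, 10e9):
    expected = simulated_miles * rate_per_mile
    print(f"{simulated_miles:,.0f} simulated miles -> ~{expected:g} expected encounters")
```

A real-world fleet might be lucky to see such an event once; a simulation farm can surface it hundreds of times, with controlled variations each time.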

Leaving rare cases out of the mix, simulation is great for building a base dataset and continually testing against it in a scalable way. Want millions of miles of highway auto-generated across multiple unique permutations? Let's run those on the cloud for you. This is important as a wave of new OEMs, Tier 1s, and startups seek to build out autonomous systems.

Related reading: Play and Learn: Using Video Games to Train Computer Vision Models

No.

The counterpoint is that simulated environments aren't good enough to properly train a model. Garbage in, garbage out. Often this is an argument that the environment-to-vehicle interaction can never be replicated as it occurs in the real world, or that the graphical fidelity is oversimplified.

A transfer of a virtual image input to a realistic one, from the paper Virtual to Real Reinforcement Learning for Autonomous Driving

To help counter some of the issues around quality of data, researchers are testing the ability to transfer virtual image inputs into realistic ones for improved model training.
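
As a loose sketch of the idea (this is a generic adversarial image-translation setup, not the specific method from the paper above; the network sizes and training details are arbitrary):

```python
import torch
import torch.nn as nn

# Tiny fully-convolutional generator: simulated RGB frame -> "realistic" RGB frame.
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)

# Discriminator: scores whether a frame looks like it came from a real camera.
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(sim_frames, real_frames):
    """One adversarial step; both inputs are (N, 3, H, W) tensors scaled to [-1, 1]."""
    fake = generator(sim_frames)

    # Discriminator update: real camera frames -> 1, translated sim frames -> 0.
    d_loss = bce(discriminator(real_frames), torch.ones(real_frames.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(sim_frames.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: translated frames should fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(sim_frames.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In practice, research systems use much deeper networks and unpaired, cycle-consistent objectives, but the core idea is the same: make simulated frames statistically closer to real camera data before training perception models on them.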

While many government agencies aren't yet willing to count simulated miles toward the required number of miles driven autonomously, this could change as regulations around testing become more defined. Google has previously lobbied for this.

Simulation Is Necessary

I’m of the belief that simulation is valuable if the fidelity is high enough. Admittedly, simulation probably won’t solve the holy grail of the final 1% of autonomy, but if AVs are going to be realistically implemented, the time to 90% competency could be shortened, and maybe the next 5–8% as well, allowing models to more appropriately recognize and react to a wider range of scenarios.

And many companies would agree with me.

Some of the top companies in the world have managed to build significant IP in their simulation tech, with Google driving 3 million+ simulated miles per day, while others, including Tesla, Zoox, Comma.ai, Drive.ai, and Aurora Innovation, are actively hiring simulation engineers.

Outside of Autonomous Vehicles

The use cases for properly simulated environments extend far beyond AVs. Beyond understanding how AVs perceive the world around them, we can theoretically better understand traffic, routing, driver behavior, and even pedestrian behavior.

Then we can take another step back. With enough specific modules and dynamic beings in a simulated environment, we can better understand how all types of robots (cars, delivery robots, social robots, etc.) will interact autonomously with our real (and digital) world.

Companies like Improbable have taken aim at this large-scale world-simulation problem, and investors have recognized the wide-reaching value of being the architect of our future simulated worlds, most recently with a $502M financing of the UK-based company.

We’re just starting to scratch the surface of simulation for AVs. Multiple companies are building internally, a few startups have begun building standalone software, and with the amount of research being done, I expect a variety of new entrants into the market. Those that figure out simulation could be the early leaders or enablers in understanding the first-, second-, and third-order effects of autonomy.

If you’re working on something in the simulation space or have any thoughts, please don’t hesitate to reach out via Twitter or email.

