The virtual road ahead

Jeff Garnier
Published in Kodiak Robotics
8 min read · Aug 11, 2020


Using simulation to train the Kodiak Driver.

One of the advantages of being a relatively young AV startup is that we have built Kodiak to take advantage of the growing self-driving ecosystem. A decade ago, when the industry first began, developers largely had to build everything for their self-driving systems themselves, from sensors to actuators to software. But today, companies like Kodiak have access to a large and growing ecosystem of high-quality off-the-shelf products.

Our development philosophy also reflects the progress the industry has made. Self-driving developers have noticed a paradox: the easiest driving also happens to be the most common, while the most complex and even dangerous situations, called edge cases, are (mercifully) the hardest to find. This paradox presents a challenge: demonstrating the safety of a self-driving vehicle requires more than just driving a ton of miles; it requires showing that the system can handle a comprehensive set of edge cases safely. Yet safely testing those edge cases on public roads is virtually impossible.

That’s where simulation comes in. At Kodiak, we use a detailed simulation platform, not on-road testing, to evaluate the Kodiak Driver’s ability to maneuver through complex edge cases. This simulation-first approach lets us make progress faster than we could through on-road testing alone: the Kodiak Driver can practice more complex situations in a few minutes of simulation than it could in many hours on the road. The fact that simulation is also safer and less expensive than on-road testing is just a bonus.

Just as the past few years have seen the growth of a robust sensor ecosystem, a third-party simulation ecosystem has grown up to support AV developers. Rather than spend millions building our own simulation platform, we chose to partner with Applied Intuition, one of the leaders in this new industry. Unlike some simulation tools, which were built on top of platforms designed to generate images for movies and video games, Applied’s system was purpose-built for AVs. We think this makes it one of the most flexible and powerful simulation platforms available.

To understand how we use simulation in practice, take this real-world situation we encountered on I-45 between Dallas and Houston. In this video, you can see a vehicle suddenly pull over from the middle lane and stop on the shoulder. The safest thing to do in such a situation is to “move over” to the center lane, giving the vehicle on the shoulder extra space. Our team spent many hours training the Kodiak Driver to move over for vehicles on the shoulder, and tested the behavior extensively in simulation and on test tracks before finally deploying it on the truck for real-world testing.

As you can see in the video, that’s what the Kodiak Driver did, but it did so somewhat belatedly. We learned that this delay occurred because our perception system was not incorporating long-distance measurements as well as it could. As a result, the Kodiak Driver didn’t see the sudden pull-over until the car was already on the shoulder, giving it less time to change lanes.

Simulation helped us train the Kodiak Driver to handle sudden pull-overs. Using Applied’s simulation platform, we were able to recreate the sudden pull-over in what’s called drive log re-simulation. Re-simulations use actual real-world data to virtually recreate realistic driving scenarios. When the Kodiak Driver navigates a re-simulated environment, it doesn’t know it’s not actually driving — the software receives all of the same inputs as it would on the road, and behaves exactly as it would on the truck.
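
For the curious, here is a rough Python sketch of what log replay looks like conceptually: recorded sensor messages are fed to the driving software in timestamp order, exactly as they arrived on the road. The `SensorMessage`, `read_log`, and `DriverStack` names are illustrative stand-ins, not Applied’s API or our production code.

```python
import json
from dataclasses import dataclass
from typing import Iterator, List, Tuple

@dataclass
class SensorMessage:
    timestamp: float   # seconds since the start of the log
    channel: str       # e.g. "lidar_front", "camera_left"
    payload: dict      # decoded sensor data as recorded on the road

def read_log(path: str) -> Iterator[SensorMessage]:
    """Yield recorded messages in timestamp order from a JSON-lines log."""
    with open(path) as f:
        records = [json.loads(line) for line in f]
    for r in sorted(records, key=lambda r: r["timestamp"]):
        yield SensorMessage(r["timestamp"], r["channel"], r["payload"])

class DriverStack:
    """Stand-in for the self-driving software under test."""
    def __init__(self) -> None:
        self.trajectory: List[Tuple[float, float]] = []

    def on_sensor_message(self, msg: SensorMessage) -> None:
        # A real stack runs perception -> prediction -> planning -> control.
        # This stub just records that a message arrived.
        self.trajectory.append((msg.timestamp, 0.0))

def resimulate(log_path: str) -> List[Tuple[float, float]]:
    # The stack cannot tell it isn't on the road: it receives the same
    # channels, payloads, and ordering as during the original drive.
    stack = DriverStack()
    for msg in read_log(log_path):
        stack.on_sensor_message(msg)
    return stack.trajectory
```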

Once we created the re-simulation, we were able to improve our long-range perception system, and then test those improvements over and over again using real-world data. (We validate the accuracy of the simulation platform by ensuring that the Kodiak Driver behaves the same in simulation as it does on the road.) As you can see in the video below, the improved algorithms worked: the Kodiak Driver was able to detect the sudden pull-over from farther away and execute the lane change earlier, just as a skilled human driver would. (The yellow ghost truck shows the original movement.)
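
Conceptually, that validation step can be as simple as checking that the trajectory the system produces in simulation stays within some tolerance of the trajectory we recorded on the road for the same log. A minimal sketch, with an illustrative tolerance rather than an actual Kodiak spec:

```python
from math import hypot
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in meters, sampled at a fixed rate

def max_deviation(road: List[Point], sim: List[Point]) -> float:
    """Largest pointwise distance between the two trajectories."""
    return max(hypot(rx - sx, ry - sy)
               for (rx, ry), (sx, sy) in zip(road, sim))

def simulation_is_faithful(road: List[Point], sim: List[Point],
                           tolerance_m: float = 0.2) -> bool:
    # If the simulated truck drifts from the recorded truck by more than
    # the tolerance, the re-simulation is suspect and gets investigated.
    return max_deviation(road, sim) <= tolerance_m
```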

We learned from this simulation that our new perception algorithm responds more safely to situations like the sudden pull-over above. But how do we know the change won’t cause problems in other situations? Again, we use simulation. By running our new code against our broader corpus of driving scenarios, we confirmed that the new algorithm did not cause any significant performance regressions. That is a level of assurance that would be very difficult to achieve through real-world testing alone.
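
In outline, a regression check like this runs the candidate build across the scenario corpus and compares its safety metrics to the last known-good build. A simplified sketch, with hypothetical metric names and an illustrative threshold:

```python
from typing import Callable, Dict, List

Metrics = Dict[str, float]  # e.g. {"min_gap_m": 12.3, "max_decel_mps2": 1.8}

def find_regressions(
    scenarios: List[str],
    run_build: Callable[[str], Metrics],   # candidate build under test
    baseline: Dict[str, Metrics],          # metrics from the last good build
    min_gap_slack_m: float = 0.5,
) -> List[str]:
    """Return the scenarios where the candidate build got meaningfully worse."""
    regressions = []
    for scenario in scenarios:
        new = run_build(scenario)
        old = baseline[scenario]
        # Flag a regression if the truck got meaningfully closer to
        # other actors than it did under the baseline build.
        if new["min_gap_m"] < old["min_gap_m"] - min_gap_slack_m:
            regressions.append(scenario)
    return regressions
```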

We use simulation in this way to drive our day-to-day development. We make changes to the Kodiak Driver every day, and while we hope that each change makes the system a little safer, there’s really no guarantee that that’s the case — just because Thursday’s build handles the sudden pull-over correctly doesn’t mean that Monday’s necessarily will. That’s why we test each and every code change against hundreds of simulation scenarios, and run every daily build against an even larger set.
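
A natural way to wire this into day-to-day development is a parametrized test suite that every change must pass before it merges. The sketch below uses pytest and a hypothetical `run_scenario` harness; it illustrates the pattern, not our actual CI configuration:

```python
import pytest

# Hypothetical smoke-test scenarios drawn from the corpus.
SMOKE_SCENARIOS = [
    "sudden_pull_over_i45",
    "slow_vehicle_on_shoulder",
    "cut_in_from_center_lane",
]

def run_scenario(name: str) -> dict:
    """Hypothetical harness: re-simulate `name` and return outcome flags."""
    return {"collision": False, "completed": True}

@pytest.mark.parametrize("scenario", SMOKE_SCENARIOS)
def test_scenario_is_safe(scenario):
    result = run_scenario(scenario)
    assert result["completed"], f"{scenario}: run did not finish"
    assert not result["collision"], f"{scenario}: collision detected"
```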

Once we’ve seen the sudden pull-over, we can also envision a nearly infinite number of variations on the scenario. For example, how would the Kodiak Driver have responded if there were a vehicle in the center lane, just to the left of the truck? Would it still move over, and risk an accident, or would it know to stay in the right lane? Would the Kodiak Driver behave the same if the vehicle in the center lane were three feet ahead of the truck, or five feet behind? Driving the truck for millions of miles to answer these questions would be a foolish waste of time and fuel, and potentially dangerous as well. Instead, we again use simulation.

The video below shows a modified version of the simulation you saw before. This time, we’ve added a synthetic vehicle in the center lane, blocking the truck from changing lanes. As you can see, the Kodiak Driver instead chooses to stay in the right lane, and slows down for safety.

These edited scenarios, known as re-simulations with augmented data, allow us to evaluate a much wider range of scenarios than we’d see on the roads, again at lower risk and lower cost. In many cases we can still use real-world perception data: Applied’s simulation software can adjust the perception data it feeds the Kodiak Driver to account for the truck’s new decisions, such as staying in the right lane rather than changing lanes.
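
Conceptually, an augmented re-simulation starts from the real log and layers synthetic actors on top of it. Here is a hypothetical sketch of how a family of center-lane-blocker variants might be generated by sweeping the blocker’s position, in the spirit of the “three feet ahead or five feet behind” questions above; the types and API are invented for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SyntheticVehicle:
    lane: str          # e.g. "center"
    offset_m: float    # longitudinal offset from the truck (+ ahead, - behind)
    speed_mps: float

@dataclass
class ScenarioVariant:
    base_log: str                      # the real-world log to replay
    injected: List[SyntheticVehicle]   # synthetic actors layered on top

def sweep_center_lane_blocker(base_log: str) -> List[ScenarioVariant]:
    """Generate variants with a blocker at a range of positions."""
    variants = []
    for offset_m in range(-10, 11, 2):   # -10 m .. +10 m in 2 m steps
        blocker = SyntheticVehicle(lane="center",
                                   offset_m=float(offset_m),
                                   speed_mps=28.0)
        variants.append(ScenarioVariant(base_log, [blocker]))
    return variants
```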

In some cases, however, we want to evaluate scenarios that deviate too far from the logs we’ve collected for re-simulation to work. These include situations where the truck’s actions would cause a real-world vehicle to significantly change its behavior, or where we change environmental conditions like the weather or time of day. And of course, we can’t re-simulate scenarios we’ve never seen in the real world.

Fortunately, Applied’s platform also allows us to simulate fully three-dimensional synthetic environments and configure the behaviors of other actors. This lets us experiment with a range of environmental conditions, or even create wholly synthetic scenarios that we haven’t seen in the real world. The video below shows a synthetic simulation of the sudden pull-over, based on our real-world log data. While it looks like a video game, it’s actually a highly sophisticated simulation: Applied’s platform generates realistic synthetic perception data, down to the pixel level. In other words, the Kodiak Driver is navigating this synthetic recreation of the sudden pull-over exactly as it would in the real world.
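
To give a flavor of what a fully synthetic scenario involves, here is a hypothetical declarative spec for the sudden pull-over: a map, an environment, and scripted actor behaviors. The schema below is invented for illustration and is not Applied’s actual scenario format:

```python
# Hypothetical scenario spec for the sudden pull-over, fully synthetic.
synthetic_pull_over = {
    "map": "i45_dallas_houston_section_12",     # illustrative map name
    "environment": {"time_of_day": "14:00", "weather": "clear"},
    "ego": {"lane": "right", "speed_mps": 28.0},
    "actors": [
        {
            "id": "pull_over_car",
            "type": "sedan",
            "behavior": [
                # Drive in the center lane, then cut across to the
                # shoulder and stop, mirroring the real-world event.
                {"action": "drive", "lane": "center", "speed_mps": 30.0},
                {"action": "lane_change", "to": "right", "trigger_s": 8.0},
                {"action": "stop_on_shoulder", "decel_mps2": 3.0},
            ],
        },
    ],
}
```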

Once we’ve recreated the scene, we can test how the Kodiak Driver would respond under different environmental conditions. For example, how would the truck respond at night? Applied’s platform lets us run this experiment. We can even move the camera around, to get a better view of what’s happening. (It looks cool that way, too.)

Synthetic simulation allows us to test the same scene under different environmental conditions.
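
Sweeping those conditions over a fixed scene is then just a matter of varying the environment parameters while the scenario geometry stays put. A minimal sketch, assuming the scenario is described as a dictionary like the hypothetical one above:

```python
import copy
import itertools

def environment_variants(scenario: dict) -> list:
    """Produce copies of one scene under varied time of day and weather."""
    times = ["06:00", "14:00", "22:00"]   # dawn, midday, night
    weathers = ["clear", "rain", "fog"]
    variants = []
    for time_of_day, weather in itertools.product(times, weathers):
        v = copy.deepcopy(scenario)
        v["environment"] = {"time_of_day": time_of_day, "weather": weather}
        variants.append(v)
    return variants
```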

While these sensor simulations do not perfectly represent the real world (generating synthetic sensor data that matches real-world data is extremely hard, let alone in real time!), Applied has done a tremendous amount of work to test and validate that their synthetic data closely matches what real-world sensors would produce. This allows us to generate strong evidence for how our perception system would respond to a nearly infinite range of scenarios. Here again, working with Applied gives us access to tools that didn’t exist just a few years ago.
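
One simple way to probe that fidelity, conceptually: run the same perception stack on a real log and on its synthetic twin, and compare when each first detects the pulled-over vehicle. The helper names and tolerance below are hypothetical:

```python
from typing import Callable

def first_detection_range_m(run_perception: Callable[[str], list],
                            source: str, target_id: str) -> float:
    """Range at which `target_id` is first detected in `source` data."""
    for detection in run_perception(source):
        if detection["id"] == target_id:
            return detection["range_m"]
    raise LookupError(f"{target_id} never detected in {source}")

def fidelity_gap_acceptable(run_perception: Callable[[str], list],
                            real_log: str, synthetic_log: str,
                            target_id: str,
                            tolerance_m: float = 10.0) -> bool:
    # If perception sees the target at roughly the same range in both
    # worlds, the synthetic sensor data is behaving like the real thing.
    real = first_detection_range_m(run_perception, real_log, target_id)
    synth = first_detection_range_m(run_perception, synthetic_log, target_id)
    return abs(real - synth) <= tolerance_m
```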

As you can see, simulation is an incredibly powerful tool for testing the Kodiak Driver, and its importance to our development process will only continue to grow. Over the next several years, we’ll build our safety case: our argument that the Kodiak Driver is comprehensively safe. As part of that safety case, we’ll grow our corpus of edge-case scenarios from the hundreds we track today to tens of thousands, drawn from what we see on the roads as well as from partner data. As always, it’s not the number of miles we simulate that matters; it’s whether we achieve sufficient coverage of the full range of driving scenarios in our Operational Design Domain. As our scenario corpus grows, Applied’s cloud-based system will let us scale without the tremendous time and expense such infrastructure would have required just a few years ago.
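
To make the coverage idea concrete, here is a toy sketch of scenario-coverage accounting: tag each scenario along a few ODD dimensions and measure what fraction of the combinations the corpus exercises. The dimensions and tags are illustrative examples, not our actual taxonomy:

```python
import itertools
from typing import Dict, List, Set, Tuple

# Illustrative ODD dimensions; a real taxonomy would be far richer.
ODD_DIMENSIONS: Dict[str, List[str]] = {
    "maneuver": ["pull_over", "cut_in", "merge", "debris"],
    "time_of_day": ["day", "night"],
    "weather": ["clear", "rain", "fog"],
}

def odd_coverage(corpus: List[Dict[str, str]]) -> float:
    """Fraction of (maneuver, time_of_day, weather) cells with a scenario."""
    required: Set[Tuple[str, ...]] = set(
        itertools.product(*ODD_DIMENSIONS.values()))
    covered = {tuple(s[d] for d in ODD_DIMENSIONS) for s in corpus}
    return len(covered & required) / len(required)

# Example: a two-scenario corpus covers 2 of 24 cells (~8%).
corpus = [
    {"maneuver": "pull_over", "time_of_day": "day", "weather": "clear"},
    {"maneuver": "pull_over", "time_of_day": "night", "weather": "clear"},
]
print(f"ODD coverage: {odd_coverage(corpus):.0%}")
```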

We’re excited about the road ahead — that’s why we simulate it over and over again.

Safe and sound journeys!
