The Nuro Autonomy Stack

Albert Meixner
Published in Nuro
8 min read · Mar 18, 2022

A deep dive into Nuro’s autonomy software

As the head of software at Nuro, it’s my responsibility to ensure the software powering our vehicles is designed and validated for safety first. My team and I have developed software that is able to tackle the monumental challenges of deploying our autonomous vehicles at scale.

Much of that scaled deployment requires gathering data from the real world so our autonomy stack knows how to appropriately respond to any situation. To that end, we’ve been operating day and night for many years in various neighborhoods to ensure our technology generalizes. But that’s not all that’s required to deploy safe, reliable technology into the world.

In the video below, I outline our autonomy stack and how we designed our software around Nuro’s zero-occupant vehicle, how we validate an autonomous vehicle to safely deploy on public roads, and how we will go from a small fleet of vehicles to commercial scale. Continue reading below for a summary of the content in the video.

Design of Nuro’s autonomy stack

Nuro’s autonomy stack is designed around our goods-first approach, and we leverage this unique concept throughout the design of our software and our hardware.

With a goods-first approach, we’re able to prioritize safety over comfort by tuning for higher recall: we would rather react to something harmless than miss a real obstacle. That ensures we detect unknown obstacles and can support more aggressive stopping maneuvers not typically available to passenger vehicles. The safety of other road users always takes precedence.

We’re also able to take advantage of a machine learning (ML)-first stack because we can make use of a robust fallback architecture that doesn’t have to worry about the comfort of onboard passengers. Prioritizing ML means our stack can continuously improve as we scale and gather more data. At the same time, we know the robust fallback system always has a contingency plan for any scenario that the ML stack might have trouble with.

Being zero-occupant also enables more use of remote operations to handle edge cases without impacting the customer experience. For instance, the autonomy system can bring the bot to a stop in a safe location and wait for a remote operator to take over.

Lastly, our custom-designed autonomous delivery vehicle has unique advantages over a full-sized passenger vehicle. We’re able to leverage the smaller size of our robot to navigate narrow neighborhood roads and give more room to vulnerable road users, such as pedestrians and cyclists.

Mapping & Localization

Beginning by selecting minimal routes, then gathering data using our sensor stack

We build our own high-definition scalable maps for the areas in which we plan to deploy. They are generated via an automated pipeline running in the cloud that leverages a combination of robotics, ML, and humans to build and verify the maps. They contain all the necessary road features, such as lanes, traffic lights, and crosswalks.

This offline-built map is highly detailed down to the centimeter level, yet the pipeline is scalable and operationalized to support future delivery areas and cities.
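To make the idea concrete, here is one hypothetical way such a map could be modeled in code; the `FeatureType`, `MapFeature`, and `HDMapTile` names below are illustrative stand-ins, not Nuro’s actual map format.

```python
from dataclasses import dataclass, field
from enum import Enum


class FeatureType(Enum):
    LANE = "lane"
    TRAFFIC_LIGHT = "traffic_light"
    CROSSWALK = "crosswalk"


@dataclass
class MapFeature:
    feature_id: str
    feature_type: FeatureType
    # Polyline or polygon vertices in a local metric frame, in meters.
    geometry: list[tuple[float, float]]
    # Links to related features, e.g. the lanes a traffic light controls.
    associations: list[str] = field(default_factory=list)


@dataclass
class HDMapTile:
    tile_id: str
    features: list[MapFeature]

    def features_of_type(self, kind: FeatureType) -> list[MapFeature]:
        """Look up every feature of one type, e.g. all crosswalks along a route."""
        return [f for f in self.features if f.feature_type == kind]
```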

Perception

Using machine learning and geometric approaches across different sensors to detect objects

Our perception system utilizes sensor data to detect and track objects using an ML-first architecture with robust fallbacks. The state-of-the-art perception ML models are able to detect and track traffic participants (vehicles, pedestrians, etc.) as well as label other semantic classes such as drivable surface, debris, foliage, and anything else that isn’t easily classified as a discrete object. This is all done using ML across all sensor modalities.

If all of the ML detectors fail to detect an object/obstacle, we can fall back to our purely geometric reasoning layer, in which we identify where we can drive and what is a potential obstacle.

This ML-first approach with robust fallbacks in the perception stack combined with our multiple sensing modalities makes our system capable of handling adverse conditions such as rain, fog, sun glare, and smoke.
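As a rough sketch of the fallback idea (not Nuro’s implementation), the function below fuses ML detections with a purely geometric obstacle layer: any geometric obstacle that no ML detection accounts for is kept as an “unknown” object the planner must still avoid.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Obstacle:
    x: float          # position in the vehicle frame, meters
    y: float
    label: str        # e.g. "pedestrian", "cyclist", or "unknown"
    confidence: float


def fuse_detections(
    ml_detections: Iterable[Obstacle],
    geometric_points: Iterable[Obstacle],
    match_radius_m: float = 0.5,
) -> list[Obstacle]:
    """Combine ML detections with a purely geometric obstacle layer.

    A geometric obstacle that no ML detection explains is kept as an
    "unknown" object, so the planner still has to give it room.
    """
    ml = list(ml_detections)

    def covered(p: Obstacle) -> bool:
        # True if the geometric point falls inside an existing ML detection.
        return any(
            (p.x - d.x) ** 2 + (p.y - d.y) ** 2 < match_radius_m ** 2
            for d in ml
        )

    fallback = [
        Obstacle(p.x, p.y, "unknown", 1.0)
        for p in geometric_points
        if not covered(p)
    ]
    return ml + fallback
```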

Our multi-sensor approach ensures reliable detections, even when driving at night or in the rain

Prediction & Planning

In order to understand and react to the other agents in the world, our system has to understand the intent of those agents and predict what they are going to do in the future, similar to a human driver. To accomplish this, our prediction system generates multiple hypotheses for each agent while also taking into account how our actions may influence their behavior. We rely heavily on ML for these predictions but build in fallbacks based on vehicle dynamics and map information.

To determine the safest way to proceed, we use multi-hypothesis predictions and roll each of them out 10 seconds into the future.
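For intuition, the sketch below rolls out several hand-picked maneuver hypotheses for a single agent over a 10-second horizon using a simple constant-turn-rate model; a production predictor would be learned and conditioned on the map and on our own planned motion, so treat this only as a picture of the output shape.

```python
import math
from dataclasses import dataclass


@dataclass
class AgentState:
    x: float        # meters, vehicle frame
    y: float
    heading: float  # radians
    speed: float    # m/s


def rollout(state: AgentState, yaw_rate: float, horizon_s: float = 10.0,
            dt: float = 0.1) -> list[tuple[float, float]]:
    """Propagate one hypothesis with a simple constant-turn-rate model."""
    x, y, heading = state.x, state.y, state.heading
    trajectory = []
    for _ in range(int(horizon_s / dt)):
        x += state.speed * math.cos(heading) * dt
        y += state.speed * math.sin(heading) * dt
        heading += yaw_rate * dt
        trajectory.append((x, y))
    return trajectory


def multi_hypothesis_rollouts(state: AgentState) -> dict[str, list[tuple[float, float]]]:
    """Generate several candidate futures for one agent.

    The hand-picked maneuvers below just illustrate the kind of output
    the planner consumes when weighing how to proceed.
    """
    return {
        "keep_straight": rollout(state, yaw_rate=0.0),
        "turn_left": rollout(state, yaw_rate=0.2),
        "turn_right": rollout(state, yaw_rate=-0.2),
    }
```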

To be robust, our prediction stack must also be able to detect and handle situations in which other agents are not behaving sensibly or predictably. For example, the image below shows a few cyclists circling our vehicle. In this scenario, we are able to detect when the agents are not matching our predictions and ensure the system is more conservative. Given we don’t have passengers in our vehicle, we can afford to prioritize safety and stay conservative for longer periods of time.
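One simplified way to flag such divergence, reusing the rollout format from the previous sketch, is to track how far an agent’s observed position drifts from its closest predicted hypothesis and switch to a more conservative mode when the error stays large; the threshold and window below are illustrative assumptions, not Nuro’s actual logic.

```python
def prediction_error(observed_xy: tuple[float, float],
                     hypotheses: dict[str, list[tuple[float, float]]],
                     step: int) -> float:
    """Distance from the observed position to the best-matching hypothesis."""
    ox, oy = observed_xy
    return min(
        ((ox - traj[step][0]) ** 2 + (oy - traj[step][1]) ** 2) ** 0.5
        for traj in hypotheses.values()
    )


def should_drive_conservatively(errors_m: list[float],
                                threshold_m: float = 1.5) -> bool:
    """If an agent keeps diverging from every hypothesis, slow down and
    give it more room; with no passengers on board, staying cautious for
    a long time costs nothing in comfort."""
    recent = errors_m[-10:]
    return len(recent) == 10 and all(e > threshold_m for e in recent)
```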

Our bot needs to account for edge cases, such as when children begin circling the vehicle

In addition to reasoning the way a human would about what we can see, we have to reason about what we cannot see, such as blind corners due to occlusions. In the below scene, our autonomy stack is able to reason about potential cars coming out of occlusions and proceed cautiously by creeping until our vehicle has better visibility.
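A heavily simplified version of that speed modulation (illustrative only, with assumed braking and reaction-time numbers) caps our speed so we could still stop within the currently visible corridor:

```python
def max_safe_creep_speed(visible_range_m: float,
                         max_decel_mps2: float = 6.0,
                         reaction_time_s: float = 0.3) -> float:
    """Highest speed at which we can still stop within the visible corridor.

    Solves v * t_react + v^2 / (2 * a) <= visible_range for v, using assumed
    braking capability; the real system reasons about occluded agents far
    more richly than this single constraint.
    """
    a, t = max_decel_mps2, reaction_time_s
    # Quadratic in v: v^2 / (2a) + t*v - range = 0  ->  take the positive root.
    disc = (a * t) ** 2 + 2.0 * a * visible_range_m
    return max(0.0, -a * t + disc ** 0.5)


# With only 5 m of clear visibility past a blind corner, creep at roughly 6 m/s;
# as visibility opens up, the speed cap relaxes and the vehicle continues normally.
print(round(max_safe_creep_speed(5.0), 1))  # ~6.2
```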

The bot proceeds cautiously, modulating speed until it has clear visibility before continuing forward

Putting it all together

With the technology we’ve built into our autonomy stack, Nuro’s vehicle is able to safely navigate challenging scenarios that we encounter on a daily basis. The video below shows a few examples of these challenging scenarios.

Robust system validation to ensure safety

Safety is our top priority, and we’ve designed our validation process around ensuring our autonomy stack is as safe as possible before driving our vehicles on public roads.

Requirements-driven process

Our validation process starts with generating a set of requirements for each part of our system. With our goods-first approach, we are able to target hyperlocal deployments and limit exposure to other areas. Rather than defining generic requirements and trying to solve autonomy for all scenes, we define requirements tailored to our intended operating area. We can focus precisely on the things that are most relevant for a given deployment and optimize for safety and time to deployment. We have already exercised this process to deploy across multiple states.

Validation methods

We use multiple methods to verify we are meeting system requirements for our intended operating area. To start, we leverage re-simulation of all of our on-road logs to verify the autonomy stack can handle the expected environments for deployment. Using those logs, we’re able to artificially change the scenarios to show that the system is robust enough to handle different and more challenging situations. We then augment re-simulation of logs with fully synthetic simulations to verify more difficult, complex scenarios that are unlikely to appear on the road.
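The log-variation idea can be sketched with a toy scenario type; the fields and perturbation ranges below are hypothetical and stand in for whatever scenario format the re-simulation pipeline actually uses.

```python
import itertools
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Scenario:
    """Toy stand-in for a logged scene used in re-simulation."""
    log_id: str
    lead_agent_speed_mps: float   # speed of the key interacting agent
    cut_in_gap_m: float           # how close that agent cuts in front of us
    weather: str


def perturb(base: Scenario) -> list[Scenario]:
    """Generate harder variants of a logged scenario for re-simulation."""
    speed_scales = (0.8, 1.0, 1.3)
    gap_offsets_m = (0.0, -2.0, -4.0)   # negative = tighter cut-in
    weathers = ("clear", "rain")
    return [
        replace(
            base,
            lead_agent_speed_mps=base.lead_agent_speed_mps * s,
            cut_in_gap_m=max(1.0, base.cut_in_gap_m + g),
            weather=w,
        )
        for s, g, w in itertools.product(speed_scales, gap_offsets_m, weathers)
    ]


variants = perturb(Scenario("log_0421", lead_agent_speed_mps=6.0,
                            cut_in_gap_m=8.0, weather="clear"))
print(len(variants))  # 18 variants generated from a single logged scene
```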

In addition, prior to testing on public roads, we leverage closed-course testing to ensure our simulation matches real world scenarios. In closed-course testing, we can reenact our synthetic scenes using remote controlled mannequins and robotic cars, then compare them to a simulation of the exact same scene. This is especially critical for complex scenes such as right-of-way violations that stress the algorithm, compute, and vehicle platform.
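One simple, illustrative way to quantify how well a simulation matches its closed-course reenactment (not necessarily the metric Nuro uses) is to compare the two pose traces at matching timestamps:

```python
def trajectory_gap_m(real_xy: list[tuple[float, float]],
                     sim_xy: list[tuple[float, float]]) -> float:
    """Worst-case positional gap between a closed-course run and its
    re-created simulation, sampled at matching timestamps."""
    return max(
        ((rx - sx) ** 2 + (ry - sy) ** 2) ** 0.5
        for (rx, ry), (sx, sy) in zip(real_xy, sim_xy)
    )

# If the gap stays within an agreed tolerance, the synthetic scene can be
# treated as a faithful stand-in for further variants of that maneuver.
```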

Scalability for mass robot deployment

To efficiently scale for mass deployment across multiple cities, we designed our autonomy stack to improve by simply adding data and ensured that our fleet management and eCommerce platform can quickly scale to match.

Scaling autonomy

On the autonomy side, scaling means enabling continued rapid progress on capabilities of the stack to unlock new geographic areas and operating conditions. A key to scaling autonomy is scaling ML. ML models and AV systems in general are only as good as the data used for training, validation, and testing. While common objects and scenarios are easy to gather, unusual things that might be easy for a human are difficult for an ML-based system that has never seen them before. In addition to our robust fallback system, we actively mine and identify these unusual things in any data we’ve collected on the road. These issues are fed back into the training and evaluation process to ensure the unusual scenes or objects are handled correctly, all without re-collecting new data or on-road testing.
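A toy sketch of that mining loop, with hypothetical scoring rather than Nuro’s actual pipeline: rank logged frames by how uncertain or novel the detectors found them, and feed the top slice back into training and evaluation sets.

```python
from dataclasses import dataclass


@dataclass
class LoggedFrame:
    frame_id: str
    max_detector_confidence: float  # best ML score on the hardest object in the frame
    novelty_score: float            # e.g. distance to the nearest training example


def mine_rare_examples(frames: list[LoggedFrame],
                       top_fraction: float = 0.01) -> list[LoggedFrame]:
    """Select the most unusual frames from road data for labeling.

    Frames where detectors were unsure, or that look unlike anything in the
    training set, are the ones most worth feeding back into training and
    evaluation, with no new collection drive required.
    """
    def interest(f: LoggedFrame) -> float:
        return (1.0 - f.max_detector_confidence) + f.novelty_score

    ranked = sorted(frames, key=interest, reverse=True)
    return ranked[: max(1, int(len(ranked) * top_fraction))]
```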

Remotely managed fleet operations

We’ve designed our third-generation vehicle to be operated and maintained remotely with minimal human touches. We plan, track, and maintain the fleet remotely, from early morning deliveries to returning to the depot at night. We’re able to remotely download logs, diagnose issues, and install new software. This infrastructure allows us to scale the fleet for mass robot deployments.

An autonomy stack to better everyday life

Our approach to autonomy has followed the meticulous nature of the scientific method: we hypothesized that our methods would lead to better safety outcomes, then we tested to verify. We started with an ML-first autonomy stack that emphasizes safety over comfort, then tailored our validation process requirements to our precise operating area for maximum safety. Finally, we continued testing and perfecting with a remotely managed fleet that minimizes human touch and scales for mass deployment. The result is an autonomy stack purpose-built to be an asset to the communities in which it operates, greatly decreasing the hazards of neighborhood roads, and we’ll never stop iterating for greater safety and efficacy.

If you’re interested in joining our mission to better everyday life through robotics, apply to one of our open roles — we’d love to have you on the team.
