The Decision Behind Using Articulating Sensors on Cruise AVs

Jul 1, 2020 · 8 min read

Written by JM Fischer, Engineering Manager, Embedded Systems and Charlie Mooney, Senior Software Engineer, Embedded Systems

Cruise self-driving vehicles use sensors as their eyes and ears. When we introduced the Cruise Origin earlier this year, we showcased an articulating sensor that can pivot 360 degrees and “see” in daylight or pitch black at superhuman speed. Our sensors will be able to see beyond what is humanly possible and react faster, and more safely, than a person could to the things people can see.

Cruise co-founder, CTO and President Kyle Vogt introduces the articulating sensors on the Cruise Origin during a live event.

In this post, we’ll dive into some of the software we created for the articulating sensor on our existing fleet — the Articulating Radar Assembly (ARA). The ARA is one of many exciting pieces of our vehicles that our team interacts with every day. Let’s start with a high-level view of what the ARA is and why we use it.

Two Articulating Radars sit just in front of the A-pillar, one on each side of our Track 3 vehicles. They each hold a radar that our self-driving software uses to intelligently look ahead (and back, and to the side) during various maneuvers. We use the articulating radars to drive safely through certain maneuvers, such as unprotected left-hand turns.

An archival clip of a Cruise self-driving vehicle performing an unprotected left turn.

During an unprotected left turn, oncoming traffic may approach at speed (e.g. approaching a green light) and have the right-of-way. In order to ensure the path is clear for sufficient time to complete the turn, we use a long range radar to look far down the road and scan for any approaching vehicles. This is especially valuable in bad weather conditions where camera or LIDAR-based solutions fall short.
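To put rough numbers on it (ours, for illustration, not a Cruise spec): oncoming traffic at 45 mph covers roughly 20 meters every second, so if the turn takes on the order of 8 seconds to complete, the path needs to be confirmed clear about 160 meters down the road before the vehicle commits.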

Why Articulating Radar?

Before we get into the technical details of getting the ARA to work, let’s take a moment to think about what Long Range Radars (LRRs) are, how our AVs use them, and why we chose to use the articulating radar and add the extra complexity of moving parts.

LRRs are special among radars in that they are very directional — like zooming in all the way with your camera. They allow you to see farther away at the expense of a smaller field of view. You can detect far away objects with great detail, but may completely miss something just one degree to either side.
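To make that concrete (illustrative numbers, not a sensor spec): at 200 meters of range, a 1-degree offset in azimuth corresponds to roughly 200 × tan(1°) ≈ 3.5 meters of lateral displacement, wider than a typical passenger car, so a target sitting just outside a narrow beam really can go completely undetected.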

The AV needs to sense distant objects clearly, but those objects are not always in the same spot. High-resolution, low-latency, long-distance sensing over a large area is hard to achieve in general (although we’re bringing that capability to the Cruise Origin), and meeting all of those requirements at once makes for a demanding task.

Imagine that you’re a wildlife photographer trying to snap pictures of birds — you need the highest quality pictures possible of distant objects in extremely dynamic settings. This is quite a challenge, and one surprisingly comparable to the one we’re solving with the ARAs.

There are two basic approaches to solving a problem like this:

  • You might set up many sensors, each pointed at a different angle. Done carefully, this covers a very large area and can work very well. If the whole area where the AV might need to detect an obstacle is covered by at least one sensor at all times, you can’t miss. Exactly how many sensors you need depends on how much resolution is required and how far away the targets are expected to be.

This approach has the obvious benefit of never missing your shot, and it’s very simple to use and maintain. You’re effectively building one mega-sensor.

The downside of this kind of solution is just as obvious: you might need a lot of sensors to cover the whole area, and many problems scale with the number of sensors. Cost, building a network that can handle the sheer amount of data produced, and even just physically fitting the sensors on an AV can easily become showstoppers.

  • Alternatively, you could set up one sensor, but move it quickly to point in the direction you’re interested in at any given moment. In our case, that takes the form of an articulating radar.

The benefits and downsides are now flipped. You only have one sensor to deal with, but you take on higher technical complexity: an entirely new system that decides which direction to point, plus reliable actuators to do the pointing.

At Cruise, we have developed a sensing solution that we believe strikes a balance between these two approaches: each ARA on our current AVs consists of a radar mounted on a motor that rotates it left and right to make sure it’s always pointed in the best direction.

Springing Into Action

With more information about the hardware and purpose of this system, let’s look at the software and logic behind it. Before and while undertaking a maneuver requiring the ARA, the self-driving brain sends commands to keep the ARA pointing in the right direction — but how does this process get started?

The Embedded Systems team handles the first layer of software that runs on our vehicles. We connect the AV to reality, bridging the gap between the AV’s sensors and actuators and the highly sophisticated self-driving stack — all while maintaining stringent automotive-grade quality.

Using their suite of sensors and a highly detailed map, our vehicles know where they are in the world; this is called “localization.” Then our planning and routing systems look ahead and determine where the vehicle should go. Combining this information, the software looks for upcoming traffic patterns that call our radar system into action.
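As a rough illustration of how that hand-off might look in code, here is a minimal C++ sketch. The types and names (ManeuverType, AraAimRequest, PlanAraRequest) are hypothetical, invented for this post rather than taken from our actual stack; the point is simply that a planned maneuver translates into a look direction and a deadline for the radar.

```cpp
// Illustrative sketch only: the types and function names here are
// hypothetical, not part of Cruise's actual stack.
#include <cstdint>
#include <optional>

enum class ManeuverType { kStraight, kUnprotectedLeft, kLaneMerge };

struct AraAimRequest {
  double target_bearing_rad;  // desired look direction in the vehicle frame
  uint64_t deadline_ns;       // when the radar must be settled on target
};

// Decide whether the upcoming maneuver needs the long-range radar pointed
// somewhere other than straight ahead.
std::optional<AraAimRequest> PlanAraRequest(ManeuverType next_maneuver,
                                            double oncoming_lane_bearing_rad,
                                            uint64_t maneuver_start_ns) {
  if (next_maneuver != ManeuverType::kUnprotectedLeft) {
    return std::nullopt;  // the default forward-facing pose is fine
  }
  // Aim down the oncoming lane, and ask for the radar to be on target
  // comfortably before the turn begins (500 ms margin, purely illustrative).
  return AraAimRequest{oncoming_lane_bearing_rad,
                       maneuver_start_ns - 500'000'000ULL};
}
```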

As demonstrated by the red arrows, radar assemblies activate on a Cruise AV during an unprotected left turn. The ARAs monitor for potential hazards in various directions before returning home after the turn is completed.

With a specific notion of where to look, Embedded Systems sends commands to point the ARA in the designated direction. We maintain the software in the self-driving brain that acts as the bridge from the conceptual world of pure logic to the physical world of metal, wires, rubber, and plastic — the “ARA Bridge.”

The brain of the AV decides when and where it needs additional radar data from the ARAs, and transmits that request to the ARA Bridge. We take it from there, interpreting the request and using the current location and pose of the vehicle to determine the perfect angle to turn the motor. This is a critical application and an extremely challenging task: assembly tolerances, sensor and motor controller versions, and the speed and direction of the AV all factor into making these computations as precise as possible. All of this happens before a single command reaches the actual ARA hardware.
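To give a flavor of the kind of computation involved, here is a simplified C++ sketch of turning a desired look direction into a motor angle. The struct, field names, and calibration terms are assumptions made for illustration; the real math accounts for much more than this.

```cpp
// Hypothetical sketch of the kind of math the ARA Bridge has to get right.
// The struct, field names, and calibration terms are ours, not the real
// interface.
#include <algorithm>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

struct AraCalibration {
  double mount_azimuth_rad;   // where the assembly points at motor angle zero
  double encoder_offset_rad;  // per-unit assembly tolerance, from calibration
  double min_angle_rad;       // mechanical limits of the articulating joint
  double max_angle_rad;
};

// Wrap an angle into [-pi, pi] so numerical drift never turns into an
// extra revolution of the motor.
double WrapAngle(double a) { return std::remainder(a, 2.0 * kPi); }

// Convert a desired look direction in the world frame into a motor angle,
// given the vehicle's current heading and this assembly's calibration.
double ComputeMotorAngle(double target_bearing_world_rad,
                         double vehicle_heading_rad,
                         const AraCalibration& cal) {
  const double bearing_vehicle =
      WrapAngle(target_bearing_world_rad - vehicle_heading_rad);
  const double raw_motor_angle = WrapAngle(
      bearing_vehicle - cal.mount_azimuth_rad - cal.encoder_offset_rad);
  // Never command past the hardware's mechanical range.
  return std::clamp(raw_motor_angle, cal.min_angle_rad, cal.max_angle_rad);
}
```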

To reach the electric motor, each command must traverse a path through the vehicle’s electrical system to the motor assembly. This path crosses multiple protocols (CANopen, Ethernet, and IPC mechanisms) and flows through several electronic control units before the motor actually turns. Embedded Systems owns this pathway, and it’s our job to make sure every command makes it to the other end accurately and reliably. Positional encoder information, error states, debugging info, and diagnostic data need to be transmitted back over this same path to the AV brain as well.
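For a taste of the last hop, here is a bare-bones sketch of packing an angle command into a raw CAN frame with Linux SocketCAN. The CAN ID, payload layout, and scaling are invented for this sketch; the production path runs through a CANopen stack and several intermediate ECUs rather than a single raw frame.

```cpp
// Illustrative only: the CAN ID, scaling, and payload layout are assumptions.
#include <cstdint>
#include <cstring>
#include <linux/can.h>
#include <unistd.h>

bool SendAraAngleCommand(int can_socket_fd, double motor_angle_rad) {
  // Scale to hundredths of a degree so the command fits in a signed 16-bit
  // field (an assumption for illustration, not the real encoding).
  const int16_t angle_centideg = static_cast<int16_t>(
      motor_angle_rad * (180.0 / 3.14159265358979) * 100.0);

  struct can_frame frame {};
  frame.can_id = 0x310;  // hypothetical ID for the ARA motor controller
  frame.can_dlc = 2;
  std::memcpy(frame.data, &angle_centideg, sizeof(angle_centideg));

  // One frame, one write; status and encoder feedback come back on other IDs.
  return write(can_socket_fd, &frame, sizeof(frame)) ==
         static_cast<ssize_t>(sizeof(frame));
}
```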

An additional challenge that we tackle with the ARA system is integrating with upstream perception and calibration subsystems. Vehicles, bicycles, and pedestrians are different at every intersection and often moving quickly. It’s critical that we not only detect them rapidly, but also feed positional data back to our perception systems that is accurate to within milliseconds and fractions of a degree. We also need to quickly detect and react to faults, such as corrupted packets. On the Embedded Systems team, we work to ensure our systems are both highly performant and highly robust.
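Here is a minimal sketch of what that robustness can look like at the packet level: validate integrity with a CRC and reject samples that are too old to be useful to perception. The feedback layout, CRC choice, and 10 ms freshness budget are illustrative assumptions, not the real protocol.

```cpp
// Illustrative feedback validation: packet layout and budgets are assumptions.
#include <cstddef>
#include <cstdint>

struct AraFeedback {
  uint64_t capture_time_ns;  // when the encoder was sampled
  int32_t angle_microdeg;    // encoder reading, in millionths of a degree
  uint16_t crc;              // integrity check over the fields above
};

enum class FeedbackStatus { kOk, kCorrupted, kStale };

// Standard CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF).
uint16_t Crc16Ccitt(const uint8_t* data, size_t len) {
  uint16_t crc = 0xFFFF;
  for (size_t i = 0; i < len; ++i) {
    crc ^= static_cast<uint16_t>(data[i]) << 8;
    for (int bit = 0; bit < 8; ++bit) {
      crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                           : static_cast<uint16_t>(crc << 1);
    }
  }
  return crc;
}

FeedbackStatus ValidateFeedback(const AraFeedback& fb, uint64_t now_ns) {
  // Recompute the CRC over everything except the CRC field itself.
  const uint16_t expected = Crc16Ccitt(
      reinterpret_cast<const uint8_t*>(&fb), offsetof(AraFeedback, crc));
  if (expected != fb.crc) return FeedbackStatus::kCorrupted;

  // Perception needs to know exactly where the radar was pointed when a
  // detection was made, so stale samples are treated as a fault too.
  constexpr uint64_t kMaxAgeNs = 10'000'000;  // 10 ms, illustrative budget
  if (now_ns - fb.capture_time_ns > kMaxAgeNs) return FeedbackStatus::kStale;
  return FeedbackStatus::kOk;
}
```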

Managing moving parts and machine reliability

Our vehicles are on the road every day, which means we rack up a lot of miles. The decision to go with a solution that includes moving parts was not made lightly. While static solutions are generally robust, the modern automobile industry clearly demonstrates that, with proper maintenance, diagnostics, and design, moving parts can be used safely.

With safety being the top priority across teams at Cruise, the Embedded Systems team writes and maintains software that evaluates if the ARAs are in good condition from the moment the vehicle is started.

At startup

Every time our self-driving brain boots up, Embedded Systems software runs through a battery of checks before we energize the ARAs (see the sketch after this list), including:

  • Is the entire communication path from the main computer to the motor controller and all intermediate steps connected and reachable?
  • Is the motor properly calibrated?
  • Can the ARA move through its full range of motion?
  • How much resistance is encountered when moving the ARA? Is it possibly broken, jammed, or wearing out?
  • …and many other tests and diagnostics to make sure the system is safe to operate.
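A hypothetical shape for that power-on sequence, with invented names: run each check in order and refuse to energize the ARA if any of them fails.

```cpp
// Hypothetical sketch of a power-on self test; the checks wired in here are
// stand-ins for the real diagnostics described above.
#include <functional>
#include <string>
#include <utility>
#include <vector>

struct SelfTestResult {
  bool passed;
  std::string failed_check;  // empty when everything passed
};

SelfTestResult RunAraSelfTest(
    const std::vector<std::pair<std::string, std::function<bool()>>>& checks) {
  for (const auto& [name, check] : checks) {
    if (!check()) {
      // Fail fast: an ARA that cannot prove it is healthy never gets
      // energized for a ride.
      return {false, name};
    }
  }
  return {true, ""};
}

// Usage sketch (the check functions are hypothetical):
//   RunAraSelfTest({
//       {"comm_path_reachable", CheckCommPath},
//       {"motor_calibrated", CheckCalibration},
//       {"full_range_of_motion", SweepFullRange},
//       {"drag_within_limits", CheckMotorCurrentDuringSweep},
//   });
```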

During runtime

After the initial checks are complete, our software starts a monitoring system which continuously checks for any actual or potential fault conditions. These could include:

  • Loss of communications with the intermediate gateways along the communication paths or the motor controller itself
  • Over or under voltage
  • Temperatures of various components out of range
  • Operating currents on the motor growing too high
  • …and many others

If any of these runtime parameters enters a fault condition, we take the appropriate action. Our sub-system looks at each fault and reacts by sending a message to the vehicle’s brain. Each potential fault is mapped to an action, which could range from “return to the garage after this ride ends” to “pull over immediately,” depending on the severity. We call these “degraded states,” a topic interesting and broad enough to warrant its own post.
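Conceptually, the mapping looks something like the following C++ sketch. The fault list mirrors the bullets above, but the specific fault-to-action assignments here are illustrative, not our actual severity policy.

```cpp
// Sketch of mapping runtime faults to degraded-state actions. The specific
// severity assignments are illustrative assumptions.
enum class AraFault {
  kCommLoss,
  kOverVoltage,
  kUnderVoltage,
  kOverTemperature,
  kMotorOverCurrent,
};

enum class DegradedAction {
  kNone,
  kReturnToGarageAfterRide,
  kPullOverImmediately,
};

DegradedAction ActionForFault(AraFault fault) {
  switch (fault) {
    case AraFault::kOverTemperature:
      // The vehicle can finish the ride safely, then get checked out.
      return DegradedAction::kReturnToGarageAfterRide;
    case AraFault::kCommLoss:
    case AraFault::kMotorOverCurrent:
    case AraFault::kOverVoltage:
    case AraFault::kUnderVoltage:
      // Without a trustworthy ARA we stop attempting the maneuvers that
      // depend on it, so the vehicle finds a safe place to stop.
      return DegradedAction::kPullOverImmediately;
  }
  return DegradedAction::kNone;
}
```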

Looking ahead to a clear path

We purpose-build our sensor suite to be modular, allowing us to take full advantage of the best sensors available while remaining flexible enough to evolve as technology improves. This is another way we separate ourselves from the vehicles on the road today. We plan to be able to upgrade our hardware the same way we upgrade our software. More on this to come.

Join us

Want to get paid to think about how a self-driving vehicle should react if a sensor fails? How to distribute sensor data to multiple ECUs as efficiently as possible? How to tune our embedded OS kernel for maximum performance? How the vehicle might detect problems and react to keep all of our sensors and ECUs working while driving in rain, sand, or snow?

The Embedded Systems team lives in C, C++, and HDL, in the world of ECUs, FPGAs, ASICs, bare metal, the Linux kernel, HALs, device drivers, and all sorts of embedded applications. We often find ourselves discussing the physical, data link, network, and transport layers, using deeply embedded interfaces such as CAN, I2C, 100BASE-T1, UARTs, and LIN, as well as high-speed interfaces such as FPD-Link, PCIe, and 10GBASE-T.

We are hiring! Come join us in solving new challenges every day.
