How Embark Powers its Segmentation with Luminar Lidar

Gilbran Alvarez
Embark Trucks
Nov 3, 2021


Cutting-edge full lidar segmentation allows Embark to detect cars, cones, and road shoulders with centimeter-level accuracy.

Different colors represent different object classes, as noted in the clip above. This perception data feeds directly into the Embark Driver to identify and distinguish objects and to inform planning and action within the self-driving system.

Today, Embark announced an initiative to equip its test fleet with Luminar’s next-generation long-range lidar architecture. This gives Embark access to Luminar’s highly robust and accurate long-range lidar sensors, and will improve the Embark Driver’s perception range and capabilities.

As the longest-running self-driving truck program in the US, Embark has always focused on developing software that’s purpose-built for trucks. But our software is only as powerful as the data we get from our hardware. That’s why we’ve developed long-standing relationships with our hardware partners to ensure only the best technology is integrated into our self-driving platform.

One of the most safety critical parts of our hardware stack is lidar, which enables our trucks to perceive the world around them in 3D and informs safe actions and reactions to situations on the road. We’ve been testing our software on Luminar lidar sensors for the past three years, a partnership that has unlocked perception capabilities we’ve not encountered anywhere else in the industry. Further, billions of dollars have been invested into lidar companies like Luminar, so we firmly believe that leveraging the best technology from industry partners is a significantly better use of capital than building prototype lidar systems in-house.

Luminar’s lidar is particularly well-suited for self-driving trucks, and Embark is able to use data from Luminar sensors to achieve industry-leading segmentation and build the most powerful self-driving software for trucks.

Let There Be Light

The team at Luminar has spent the better part of a decade pioneering and perfecting a unique lidar architecture that is scalable and purpose-built to enable the detection, segmentation, tracking, and classification of objects at ranges sufficient to enable safe, reliable autonomous trucking.

Our trucks rely on the range, resolution, and high fidelity of Luminar’s lidars to reliably see other vehicles, lane markers, traffic lights and signs, accidents, and countless other road features at long distances.

Range performance may seem intuitive, but for autonomous trucks range isn't just a question of how far the truck can see; it's how far the truck can reliably understand what it sees, and that is an important distinction when it comes to operating autonomous trucks safely.

Luminar’s lidar enables simultaneous detection of the road surface, how it is organized into lanes, where obstacles and fellow road users are within those lanes, and landmarks for map localization. All of this happens across a 120° field of view, allowing the truck to understand the environment everywhere it might want to drive. The sensor’s range (up to 600 m) and resolution (in excess of 300 points per square degree) differentiate Luminar from other producers, but it’s the fidelity of the data that really unlocks reliable perception.
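To make those numbers concrete, here is a back-of-envelope sketch in Python, using only the figures quoted above (300 points per square degree, 600 m range), of the linear spacing between neighboring returns at a given range. The uniform-square-grid assumption is ours, purely for illustration:

```python
import math

POINTS_PER_SQ_DEG = 300  # resolution figure quoted above

def point_spacing_m(range_m: float, density_deg2: float = POINTS_PER_SQ_DEG) -> float:
    """Approximate linear spacing between neighboring returns on a surface
    facing the sensor, assuming a uniform square grid of density_deg2
    points per square degree."""
    angular_step_deg = 1.0 / math.sqrt(density_deg2)
    return range_m * math.radians(angular_step_deg)

# At 100 m a return lands roughly every 10 cm; even at the 600 m maximum
# range the grid spacing is still ~0.6 m, dense enough to place many
# returns on a vehicle-sized object.
print(round(point_spacing_m(100), 3))  # ~0.101
print(round(point_spacing_m(600), 3))  # ~0.605
```

That density at distance is what makes long-range classification, not just detection, plausible.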

Fidelity and Reflectance

Sensor fidelity, the correctness of the sensor's representation of the environment, is essential for efficient and reliable perception of objects, especially in a 3D scene. Luminar lidar delivers range precision on the order of a single centimeter, which ensures surfaces appear continuous and are represented repeatably from frame to frame. Beyond 3D geometry, Luminar lidar also provides a measure of each point's returned energy, calibrated for range, called reflectance. This adds a consistent data point that supports segmentation of object features that have no physical structure distinguishing them from the surface around them; the detection of lane lines is a prime example. By contrast, many lidar sensors provide reflectance only as a coarse, uncalibrated point cloud attribute, generally useful only for segmenting road signs and pristine lane markings on dark roads.
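As an illustration of why calibrated reflectance helps, here is a toy NumPy sketch that separates lane-line paint from asphalt among ground-level points. The point values, height gate, and reflectance threshold are all hypothetical, not real calibration figures:

```python
import numpy as np

# Hypothetical point cloud: x, y, z in meters plus calibrated reflectance
# in [0, 1]. Lane-marking paint is strongly retroreflective, so its
# calibrated reflectance stands out against asphalt at any range.
points = np.array([
    # x,    y,    z,    reflectance
    [10.0,  0.0, -1.8,  0.05],   # asphalt
    [10.0,  1.6, -1.8,  0.62],   # lane-line paint
    [30.0, -1.6, -1.8,  0.58],   # lane-line paint
    [30.0,  0.3, -1.8,  0.04],   # asphalt
])

GROUND_Z_MAX = -1.5      # crude ground gate (assumed sensor height)
PAINT_REFLECTANCE = 0.3  # illustrative threshold only

ground = points[points[:, 2] < GROUND_Z_MAX]
lane_candidates = ground[ground[:, 3] > PAINT_REFLECTANCE]
print(len(lane_candidates))  # 2
```

Because the reflectance is calibrated for range, the same threshold works at 10 m and 30 m alike; with raw intensity, the 30 m returns would look dimmer and the simple cut would fail.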

Better Segmentation

Years of testing and development enabled Embark to gather massive amounts of data and build machine learning capabilities that directly leverage Luminar’s sensors. In particular, Embark has been using Luminar’s technology to perform real-time, granular segmentation and object detection.

Lidar segmentation assigns each individual lidar return to a specific class and allows us to differentiate classes of objects. In other words, for each full point cloud the lidar sensor provides, we are able to understand which individual lidar returns come from which “class of interest,” such as vehicles, pedestrians, lane lines, road surface, road shoulder, signage, barriers, cones, debris, and more.
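Conceptually, this is per-point classification: every return in the cloud gets exactly one label. The sketch below shows the shape of the problem with a stand-in model; the class list and height rules are illustrative, not Embark's actual taxonomy or network:

```python
import numpy as np

# Illustrative class set; Embark's actual taxonomy is not public.
CLASSES = ["road_surface", "lane_line", "vehicle", "pedestrian",
           "shoulder", "signage", "barrier", "cone", "debris"]

def segment(points: np.ndarray, model) -> np.ndarray:
    """Per-point segmentation: the model maps an (N, 4) cloud of
    x, y, z, reflectance to (N, num_classes) scores; argmax picks
    one class index per lidar return."""
    logits = model(points)
    return logits.argmax(axis=1)

def toy_model(points):
    # Stand-in "model" so the sketch runs: crude height rules in place
    # of a learned network.
    logits = np.zeros((len(points), len(CLASSES)))
    logits[points[:, 2] < -1.5, 0] = 1.0   # near ground -> road_surface
    logits[points[:, 2] >= -1.5, 2] = 1.0  # above ground -> vehicle
    return logits

cloud = np.array([[5.0, 0.0, -1.8, 0.1],
                  [12.0, 2.0, 0.4, 0.3]])
labels = segment(cloud, toy_model)
print([CLASSES[i] for i in labels])  # ['road_surface', 'vehicle']
```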

With Luminar outputs and the Embark Driver’s deep learning capabilities, we’re able to perform a full segmentation of lidar data, a milestone achievement which leads to a safer and more performant self-driving system. Coupled with our camera and radar sensor suite, our use of Luminar lidars gives the Embark Driver redundancy in our object and scene detection capabilities.

In the video above, lidar segmentation allows Embark-enabled trucks to detect object classes like lane lines and feed that data into our proprietary, patent-pending Vision Map Fusion (VMF) system, which combines lane lines detected in the real world with lane lines in our HD maps. VMF is Embark's differentiated approach to mapping: it relies on a sensors-first architecture that analyzes and responds to real-time perception data rather than relying solely on HD maps to direct the truck.
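VMF itself is proprietary, but the sensors-first idea can be caricatured in a few lines: trust real-time lane detections where they exist, and use the HD map only to fill gaps perception has no confident match for. Every name and threshold below is hypothetical:

```python
ASSOCIATION_GATE_M = 0.5  # max lateral offset to call two lines "the same"

def fuse_lane_lines(detected, mapped):
    """detected/mapped: lists of lane-line lateral offsets (meters)
    relative to the vehicle. Returns the fused set of offsets, with
    real-time detections taking priority over the map."""
    fused = list(detected)               # perception wins by default
    for m in mapped:
        if all(abs(m - d) > ASSOCIATION_GATE_M for d in detected):
            fused.append(m)              # map fills gaps perception missed
    return sorted(fused)

# Two detected lines match the map closely; a third line beyond the
# detection range is carried over from the map.
print(fuse_lane_lines(detected=[-1.7, 1.8], mapped=[-1.75, 1.75, 5.25]))
# [-1.7, 1.8, 5.25]
```

The design point this caricature preserves: a fresh detection that disagrees slightly with the map (here by 5 cm) overrides it, so the truck responds to the road as it is, not as it was when mapped.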

You can see Embark’s advanced segmentation identifying various object classes beyond lane lines, such as signage and cones, allowing the truck to recognize and react to virtually any situation on the road.

Ahead of the Curve

Others in the industry are looking to build Frequency Modulated Continuous Wave (FMCW) lidar with the goal of improving their object detection capabilities. Once fully developed, FMCW promises to measure a velocity for each point in the cloud, with the end goal of improving object detection at the lidar level.

Embark is confident that its segmentation approach, supported by automotive-grade sensors like Luminar’s, is ahead of the competition. While many others are building around FMCW in their “next-generation” lidar systems, Embark’s technology is already performing advanced object detection today by combining its perception software stack with Luminar lidars.

We’re proud to have worked with Luminar to develop the self-driving truck industry’s most granular segmentation capabilities. By bringing together the best of AV truck software and the best of lidar technology, we’re able to bring safe, performant self-driving truck software to market at scale.

To learn more about Embark’s partnership with Luminar, read our press release.
