How to Build Maps for Self-Driving Cars (End-to-End)

On HD Mapping — an Introduction to Self-Driving Cars (Part 2)

MT
Apr 26 · 7 min read
Image source: Intellias

The ability to get from one place to another, anywhere in the country, is one of the most remarkable gifts of modern technology. Maps represent the real world at a much smaller scale. They help us travel from one location to another, they help us organize information when we plan trips, and they present information about the world in a simple, visual way.

Maps are so useful that our dependence on automated directions like Google Maps has noticeably eroded our ability to navigate for ourselves (at least in my case). GPS-enabled smartphones are typically accurate to within a 4.9 m (16 ft) radius under open sky, and their accuracy worsens near buildings, bridges, and trees. Meter-level accuracy would be unacceptable for safety-critical applications like self-driving cars, for example when a car needs to park itself or navigate busy city streets.

Meter-level precision is based on 2016 sampled statistics from high-quality, single-frequency GPS receivers: “Navigation maps accuracy of ≤1.891 m (6.2 ft), 95% of the time” (GPS.gov).

HD Mapping for safety and accuracy

For safety’s sake, High Definition (HD) maps are needed.

The biggest advantage of HD maps over navigation maps or plain GPS is their ability to provide an accurate representation of the road ahead and information about the surrounding environment, such as traffic lights, speed limits, or left turns.

The use of HD maps improves sensor perception, enables precise localization, and improves path planning to safely execute every maneuver.

How self-driving cars navigate using HD maps

The height of a curb, the width of an intersection, and the exact location of a traffic light or stop sign are among the details we can get from HD maps.
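To make that concrete, here is a minimal sketch of how such road features might be represented in code. The structure and field names below are illustrative assumptions of mine, not Waymo's or Apollo's actual map schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative only: these field names are assumptions, not a real HD map schema.
Point3D = Tuple[float, float, float]  # (x, y, z) in meters, in the map frame

@dataclass
class TrafficLight:
    light_id: str
    position: Point3D               # surveyed to centimeter-level accuracy
    controlled_lane_ids: List[str]  # lanes governed by this light

@dataclass
class LaneSegment:
    lane_id: str
    centerline: List[Point3D]  # densely sampled polyline along the lane
    width_m: float
    curb_height_m: float
    speed_limit_mps: float

# A tiny "map" is then just a collection of precisely surveyed elements.
hd_map = {
    "lanes": [LaneSegment("lane_42", [(0.0, 0.0, 0.0), (5.0, 0.1, 0.0)],
                          width_m=3.5, curb_height_m=0.15, speed_limit_mps=13.9)],
    "traffic_lights": [TrafficLight("tl_7", (5.2, 4.8, 5.1), ["lane_42"])],
}
```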

In the example of Waymo's self-driving cars, which rely heavily on lidar:

“Before we drive in a new city or new part of town, we build a detailed picture of what’s around us using the sensors on our self-driving car. As we drive around town, the lasers send out pulses of light that help us paint a three-dimensional portrait of the world.”

This provides the car with useful information such as:

  • Distance and dimensions of road features, based on the amount of time it takes for each laser beam to bounce back to the car's sensors (see the short sketch after this list).
  • Categorization of interesting features on the road, such as driveways, fire hydrants, and intersections.
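The ranging principle in the first bullet is simple time-of-flight arithmetic: the pulse travels out to the object and back, so the one-way distance is half the round trip. A minimal sketch (real lidar pipelines also deal with noise, multiple returns, and calibration):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance from lidar time-of-flight: the laser pulse travels out
    and back, so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after ~200 nanoseconds means an object roughly 30 m away.
print(f"{tof_distance_m(200e-9):.2f} m")  # -> 29.98 m
```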

This level of detail helps the car localize itself to within 10 cm, so it knows exactly where it is in the world. As cars drive autonomously, the software matches what the car sees in real time against the prebuilt maps, allowing the car to know its position without relying on GPS or on a single source of data, such as lane markings, to navigate the streets.
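To give a feel for that matching idea, here is a toy grid-correlation sketch: score a handful of candidate poses by how well the live scan overlaps the prebuilt map, and keep the best one. All names are illustrative, and production systems use far richer techniques (full 3D point cloud registration, particle filters) than this.

```python
import numpy as np

def scan_match_score(map_grid: np.ndarray, scan_points: np.ndarray,
                     dx: float, dy: float, resolution: float = 0.1) -> int:
    """Count how many scan points land on occupied map cells when the
    scan is shifted by the candidate offset (dx, dy), in meters."""
    cells = np.floor((scan_points + np.array([dx, dy])) / resolution).astype(int)
    inside = ((cells[:, 0] >= 0) & (cells[:, 0] < map_grid.shape[0]) &
              (cells[:, 1] >= 0) & (cells[:, 1] < map_grid.shape[1]))
    cells = cells[inside]
    return int(map_grid[cells[:, 0], cells[:, 1]].sum())

def localize(map_grid: np.ndarray, scan_points: np.ndarray):
    """Brute-force search over small x/y offsets at 10 cm resolution;
    return the best-scoring pose correction."""
    search = np.arange(-0.5, 0.5, 0.1)
    best = max(((scan_match_score(map_grid, scan_points, dx, dy), dx, dy)
                for dx in search for dy in search), key=lambda t: t[0])
    return best[1], best[2]
```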

Example of various road features that are available in Apollo HD Maps.

With HD maps, we get to store information about the more permanent road features, so that the sensors and software can focus on moving objects like pedestrians and vehicles, and on changing conditions like construction zones. This is where computer vision complements HD mapping in a self-driving car's navigation system: for example, detecting signs of construction (orange cones, workers in vests, etc.) and understanding that we may have to merge to bypass a closed lane, or that other road users may behave differently.

The combination of these technologies allows autonomous cars to recognize new conditions, make adjustments in real time, and do a better job of anticipating and avoiding tricky situations.

High-precision HD mapping sees beyond GPS and smart sensors, enabling the real-time decision-making capabilities that cars need to attain full autonomy.

End-to-end production of HD mapping

Now that we understand why and how HD mapping is used in a self-driving car, let's dive deeper into the end-to-end process of generating high-definition maps.

Source: Udacity’s Self-Driving Fundamentals featuring Apollo

In Baidu's open-source self-driving platform, Apollo, HD maps sit at the core of the stack, making them a dependency for all other self-driving modules. The construction of high-definition maps is composed of five processes: data sourcing, data processing, object detection, manual verification, and map publication.
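Before diving into each step, the whole thing can be pictured as a chain of five stages. This is only a structural sketch; the function names below are placeholders of mine, not Apollo's actual APIs.

```python
# Placeholder stages; each stands in for a substantial subsystem.
def source_data(drive_logs):        return {"raw": drive_logs}      # 1. collect sensor data
def process_data(raw):              return {"template": raw}        # 2. fuse and clean
def detect_objects(template):       return {"annotated": template}  # 3. classify static objects
def manual_verification(annotated): return {"verified": annotated}  # 4. human review
def publish(verified):              return {"map": verified}        # 5. release to vehicles

def build_hd_map(drive_logs):
    """Structural sketch of the five-step HD map production pipeline."""
    raw = source_data(drive_logs)
    template = process_data(raw)
    annotated = detect_objects(template)
    verified = manual_verification(annotated)
    return publish(verified)
```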

1. Data sourcing

In self-driving cars, we inevitably deal with enormous amounts of data. The hardware generates tons of it, since knowing exactly where a vehicle is and what's around it is vital for safety.

Data collection using survey vehicles is critical to constructing, maintaining, and updating maps in the production of HD maps, as roads change constantly. Hundreds of survey vehicles are used, and each collects sensor and signal data from the following hardware as input sources:

  • LiDAR, for “light detection and ranging,” which bounces anywhere from 16 to 128 laser beams off surrounding objects to assess their distance and hard/soft characteristics, generating a point cloud of the environment.
  • GPS, which pinpoints the car's location in the physical world at the inch level.
  • IMU, for “inertial measurement unit,” which tracks a vehicle's attitude, velocity, and position.
  • Radar, which detects other objects and vehicles, including their speed.
  • Camera, which captures the environment visually. Analyzing everything a camera sees requires a powerful computer, so work is being done to reduce this workload by directing attention only to the relevant objects in view.

Intensive data fusion of all the above inputs ultimately generates the high-definition maps.
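One small but representative piece of that fusion is blending noisy GPS fixes with smooth IMU dead reckoning. Below is a minimal one-dimensional Kalman-style sketch; real systems fuse full 3D pose, and all the numbers here are made up for illustration.

```python
def fuse_gps_imu(position, variance, imu_delta, imu_var, gps_fix, gps_var):
    """One predict/update cycle of a 1D Kalman filter.

    Predict: advance the position by the IMU's measured displacement
    (uncertainty grows). Update: correct toward the GPS fix, weighted
    by the relative uncertainty of the two sources (uncertainty shrinks).
    """
    position += imu_delta
    variance += imu_var
    gain = variance / (variance + gps_var)
    position += gain * (gps_fix - position)
    variance *= (1.0 - gain)
    return position, variance

# Example: IMU says we moved 1.0 m (var 0.05); GPS reads 101.2 m (var 2.0).
pos, var = fuse_gps_imu(position=100.0, variance=0.1,
                        imu_delta=1.0, imu_var=0.05,
                        gps_fix=101.2, gps_var=2.0)
print(round(pos, 3), round(var, 3))  # stays close to the IMU estimate
```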

2. Data processing

Once the data is collected and fused, it is sorted, classified, and cleaned to produce the initial map template. The initial template should be free from any semantic encoding or annotations.

Source: Udacity’s Self-Driving Fundamentals featuring Apollo

The above process is what we call “data processing.”

The image on the left shows an example of a registered point cloud, fused from data collected and processed in Beijing, China.
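Registering overlapping scans into one consistent cloud is commonly done with algorithms like ICP (iterative closest point). Here is a minimal sketch using the open-source Open3D library; the file names and the 0.5 m correspondence threshold are illustrative assumptions.

```python
import open3d as o3d

# Two overlapping lidar scans from a survey drive (paths are illustrative).
source = o3d.io.read_point_cloud("scan_0001.pcd")
target = o3d.io.read_point_cloud("scan_0002.pcd")

# Align source onto target with point-to-point ICP. The 0.5 m
# correspondence threshold assumes a decent initial GPS/IMU guess.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness)  # fraction of points that found a match
merged = source.transform(result.transformation) + target  # registered cloud
```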

3. Object detection

For object detection, computer vision algorithms such as convolutional neural networks (CNNs) undergo heavy training and testing, primarily to detect and classify static objects. Predicted and classified objects include lane lines, traffic signs, poles, and even hill crests on the roadside.
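As a rough illustration, here is what a tiny patch classifier for static road features could look like in PyTorch. The architecture and class list are illustrative assumptions, orders of magnitude smaller than anything production-grade.

```python
import torch
import torch.nn as nn

# Illustrative classes; production taxonomies are far richer.
CLASSES = ["lane_line", "traffic_sign", "pole", "background"]

class StaticObjectCNN(nn.Module):
    """Tiny CNN that classifies fixed-size image patches of road features."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = StaticObjectCNN()
patch = torch.randn(1, 3, 64, 64)  # one 64x64 RGB patch (untrained demo)
print(CLASSES[model(patch).argmax(dim=1).item()])
```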

Having dealt with thousands of customer issues in the self-driving world over the past year, I have seen how important fail-safe detection of lane markings is for advanced driver-assistance systems to ensure road safety. This is especially true when unexpected weather and lighting affect the cameras' visibility, and therefore object detection.

Source: unece.org

In particular:

  • Camera technology is used to identify lane markings; safety depends on the visibility of road markings (a minimal detection sketch follows this list).
  • The quality of the lane markings' optical properties is important for safety and will become even more critical in the future.
  • Adverse weather conditions and worn-out road markings still pose great challenges to camera sensors.
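Classical computer vision makes the visibility problem tangible. In this minimal OpenCV sketch, the hard-coded thresholds are illustrative, and they are exactly the kind of parameters that start failing in rain, glare, or on worn paint.

```python
import cv2
import numpy as np

def detect_lane_markings(bgr_image: np.ndarray):
    """Find candidate lane-line segments with edge detection plus a
    probabilistic Hough transform. Returns (x1, y1, x2, y2) segments."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # gradient-based edges; weak in bad light
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=50, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```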

“But like the human eye, the technology cannot work effectively if it cannot see the road markings and traffic signs if they are worn out or hidden, or if they are confusing.” — EuroRAP, EuroNCAP

4. Manual verification

Manual validation involves tagging and editing the map. Sometimes certain locations are not yet up to date with today's fast-changing infrastructure, such as a missing highway in South Australia or an inaccurate representation of coordinates.

Manual verification is an important step in ensuring accurate localization for self-driving cars. Quality control of HD maps, however, remains a major challenge, as it is difficult to produce error-free maps given the incredibly high complexity of road environments in different parts of the world, and their dynamic nature.
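Tooling can at least narrow down what the humans need to review. As a hedged sketch, one could flag map elements whose surveyed positions disagree with fresh detections beyond some tolerance; the names and the 0.2 m tolerance below are assumptions.

```python
import math

def flag_for_review(map_elements, fresh_detections, tolerance_m=0.2):
    """Compare each mapped element against the latest survey detection
    with the same ID and queue mismatches for a human editor."""
    review_queue = []
    for elem_id, (mx, my) in map_elements.items():
        detected = fresh_detections.get(elem_id)
        if detected is None:
            review_queue.append((elem_id, "missing in new survey"))
            continue
        shift = math.hypot(detected[0] - mx, detected[1] - my)
        if shift > tolerance_m:
            review_queue.append((elem_id, f"moved {shift:.2f} m"))
    return review_queue

# Example: a stop sign appears to have shifted; an editor confirms or rejects.
print(flag_for_review({"sign_3": (10.0, 5.0)}, {"sign_3": (10.0, 5.6)}))
```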

5. Map publication

After data sourcing and processing, object detection, and manual verification, the HD maps are ready to publish!

In addition to HD maps, though, self-driving teams also use localization maps with a bird's-eye (top-down) view to help with accuracy, road feature updates, and localization tasks. For Apollo's HD maps, updating is greatly accelerated by crowdsourcing and the release of public data.

Tesla FSD beta visualization. Source: tesmanian.com
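Crowdsourced updating can be pictured as a voting scheme: a reported change only enters the published map once enough independent vehicles have observed it. A toy sketch follows; the three-vehicle threshold is my assumption.

```python
def confirmed_changes(vehicle_reports, min_reports=3):
    """Accept a reported map change once enough distinct vehicles have
    seen it, so a single car's misdetection cannot alter the map.

    vehicle_reports: list of (vehicle_id, change_key) tuples, where
    change_key identifies an observed difference from the current map.
    """
    votes = {}
    for vehicle_id, change in vehicle_reports:
        votes.setdefault(change, set()).add(vehicle_id)
    return [c for c, voters in votes.items() if len(voters) >= min_reports]

reports = [("car_a", "lane_9_closed"), ("car_b", "lane_9_closed"),
           ("car_c", "lane_9_closed"), ("car_a", "new_sign_17")]
print(confirmed_changes(reports))  # -> ['lane_9_closed']
```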

Concluding thoughts

As the end-to-end production process above shows, building an HD map is indeed a complex task with several aspects: sourcing the data, collating it to build the map, integrating artificial intelligence (AI) capabilities, provisioning and validating with road data, and publication coupled with ongoing maintenance and improvement.

HD maps can support (1) navigation assistance, (2) driving assistance, and (3) automated driving. These maps carry road information precise enough to help self-driving cars identify road signs with centimeter accuracy. They also provide real-time traffic data on dynamic objects on the road, such as other vehicles, pedestrians, and cyclists, all of which helps avoid accidents in critical situations through quick response times.

I hope this article gives you useful insight into a core part of developing self-driving software. Now, how would you map the future of autonomous driving? Follow along with this journey, the Introduction to Self-Driving Cars series, on our publication page.
