Localization with Autoware

yodayoda · Published in Map for Robots · Jul 20, 2021

What you need to know about paths, transforms (TF) and other settings

Localization by NDT matching

In this article, we will talk about how an autonomous vehicle can know its own location. But first, let’s start with a simple example.

Introduction
If you are driving a car in an unfamiliar place, you usually rely on a map to navigate. But when you are close to your home, you don’t get lost. The main difference is that you have memorized landmarks such as roads and stores near your home, so you know where you are driving and how to get home.

There is a part of the human brain called the entorhinal cortex which stores a mental map that supports spatial awareness and navigation.
This article explains the relationship between maps and the brain in more depth.

In this survey (Visualizing Mental Maps of San Francisco), 22 San Francisco residents were asked to draw a picture of their neighborhood or city.
The drawings are very rough compared to the reference map shown on the right, but they capture the places that matter to each resident and the landmarks they remember.

(Left) Several mental maps of San Francisco [source]. (Right) Reference map of San Francisco (created by Mapbox)

In the same way, autonomous vehicles know where they are by comparing the landmarks they observe with a map. And since autonomous vehicles need more accurate maps than humans do, they use a sensor called LiDAR to generate highly accurate maps.

Now let’s look at how autonomous vehicles combine their high-definition maps (HD maps) with the environment acquired by their sensors to determine their own location.

Communication of Robots and Sensors

This article uses the autonomous driving system Autoware, which is built on ROS (Robot Operating System).

ROS is a middleware framework for robots built on the publish/subscribe model.
Using ROS, a robot can connect many sensors, receive their data (subscribe), and send out the results of computations and control commands (publish).
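As a minimal sketch of this model (assuming ROS 1 with rospy; the node and topic names here are made up for illustration), a single node can subscribe to raw data and publish a processed result:

#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def callback(msg):
    # Subscribe side: receive data another node published.
    rospy.loginfo("received: %s", msg.data)
    # Publish side: send our computed result back out.
    pub.publish(String(data=msg.data.upper()))

if __name__ == "__main__":
    rospy.init_node("pubsub_demo")
    pub = rospy.Publisher("processed", String, queue_size=10)
    rospy.Subscriber("raw_data", String, callback)
    rospy.spin()  # hand control to ROS until shutdown

In a real pipeline the payloads would be sensor messages such as sensor_msgs/Image or sensor_msgs/PointCloud2 rather than strings.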

The image below shows an example of object detection as used in a typical self-driving vehicle. Obstacles are detected in images from a front-facing camera so the vehicle can avoid collisions with pedestrians and other vehicles while driving.

Example of the exchange of camera data in an object detection process

Connecting Maps, Sensors, and Vehicles

ROS has a library called TF (transform) that keeps track of coordinate frames and can be used to describe the positional relationships of sensors and other devices.
With it, you can define the positional relationship between sensors such as cameras, radar, and LiDAR and the vehicle, as well as between the vehicle and the map, so that the robot knows how its hardware is arranged in the real world.

Example TF for the translation between world coordinates, the position of base_link (the center of the car), and base_laser (the LiDAR). Using the given TF, we can calculate the position of the wall in world coordinates.
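For example, a small listener (a sketch assuming ROS 1 with tf2_ros; the frame names follow the figure above) can take a point the LiDAR sees in base_laser coordinates and express it in world coordinates:

#!/usr/bin/env python
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped support for buf.transform
from geometry_msgs.msg import PointStamped

rospy.init_node("wall_in_world")
buf = tf2_ros.Buffer()
tf2_ros.TransformListener(buf)  # fills the buffer from /tf and /tf_static

# A point seen by the LiDAR: 3 m straight ahead of the sensor.
p = PointStamped()
p.header.frame_id = "base_laser"
p.header.stamp = rospy.Time(0)  # ask for the latest available transform
p.point.x = 3.0

# TF chains world -> base_link -> base_laser for us.
p_world = buf.transform(p, "world", timeout=rospy.Duration(1.0))
rospy.loginfo("wall at (%.2f, %.2f) in world", p_world.point.x, p_world.point.y)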

Global Coordinate Transformation

Autoware is basically designed around the following TF tree:

world: world coordinates (ECEF), a Cartesian frame whose origin is at the Earth's center of mass. In the ROS standard, this frame is called earth.

map: coordinates with the origin at the corner of the area where the robot will run.

base_link: coordinates with the origin between the wheels of the robot.

velodyne: coordinates of sensors on the vehicle, such as LiDAR.

Parent-child relationships from world coordinates down to sensor coordinates

These layers are called frames. Each frame has exactly one parent frame, and because every frame is defined relative to its parent's coordinate system, dependencies stay local: you can add a new sensor or switch to a map of a wider area without reworking the rest of the tree. A small sketch of publishing such a tree follows below.
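As a rough sketch (ROS 1 Python with tf2_ros; the offsets are placeholders, and in a real system the map to base_link edge is published dynamically by the localizer rather than statically), the parent-child links above could be published like this:

#!/usr/bin/env python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

def link(parent, child, x=0.0, y=0.0, z=0.0):
    # One edge of the TF tree: the child frame expressed in its parent frame.
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = parent
    t.child_frame_id = child
    t.transform.translation.x = x
    t.transform.translation.y = y
    t.transform.translation.z = z
    t.transform.rotation.w = 1.0  # identity rotation for simplicity
    return t

rospy.init_node("tf_tree_demo")
br = tf2_ros.StaticTransformBroadcaster()
# Placeholder offsets; real values come from surveying and calibration.
br.sendTransform([link("world", "map"),
                  link("map", "base_link"),
                  link("base_link", "velodyne", z=1.8)])
rospy.spin()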

Localization by Referencing a Map

Now that we know how TF ties sensors and vehicle together, we can estimate the vehicle's position by matching the data observed by the LiDAR against a pre-created map. Here we introduce an algorithm called NDT.

Algorithm for NDT (Normal Distributions Transform)

NDT matching involves several steps (a toy sketch follows below the figure):

  1. Divide the space covered by the pre-created map into a grid of cells.
  2. Compute the mean and covariance of the points in each cell, modeling each cell's point cloud as a normal distribution.
  3. For each point of the input scan, find the grid cell it falls into.
  4. Compute an evaluation value (score) from how well the scan points fit the cells' normal distributions.
  5. Update the pose of the input scan using Newton's method so the score improves.
  6. Repeat steps 3-5 until the pose converges, and run the whole procedure for every incoming scan.

High-resolution grid map with normal distributions transform algorithm [source]
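To make steps 1 to 4 concrete, here is a toy 2D version in plain NumPy (our own sketch, not Autoware's implementation; the cell size, regularization, and minimum point count are arbitrary choices, and step 5's Newton update is omitted):

import numpy as np

CELL = 2.0  # grid cell size in meters (arbitrary choice)

def build_ndt_grid(map_points):
    # Steps 1-2: bin the 2D map cloud into cells and fit a Gaussian per cell.
    bins = {}
    for p in map_points:
        bins.setdefault(tuple(np.floor(p / CELL).astype(int)), []).append(p)
    grid = {}
    for key, pts in bins.items():
        pts = np.asarray(pts)
        if len(pts) < 5:  # too few points for a stable covariance
            continue
        cov = np.cov(pts.T) + 1e-3 * np.eye(2)  # regularize degenerate cells
        grid[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return grid

def ndt_score(grid, scan_points, pose):
    # Steps 3-4: transform the scan by pose = (x, y, yaw) and sum the
    # Gaussian likelihood of each point under its cell's distribution.
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    score = 0.0
    for p in scan_points @ R.T + (x, y):
        key = tuple(np.floor(p / CELL).astype(int))
        if key in grid:
            mean, cov_inv = grid[key]
            d = p - mean
            score += np.exp(-0.5 * d @ cov_inv @ d)
    return score  # step 5 adjusts the pose with Newton's method to raise this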

NDT Matching by Autoware

In this example, we use the LiDAR data of the public nuScenes dataset in two roles:

  • Pre-created point cloud map
  • Input scan (rosbag file)

Set up the TF in Autoware

You can start Autoware and configure your own TF file from the Map tab.
The frame named points_map holds the pre-created point cloud map, and lidar_top is set to the point cloud of the input scan file (see how to make nuScenes rosbag files in this article).

<node pkg="tf"  type="static_transform_publisher" name="world_to_map" args="0 0 0 0 0 0 /world /map 10" />
<node pkg="tf" type="static_transform_publisher" name="map_to_points_map" args="0 0 0 0 0 0 /map /points_map 10" />
<node pkg="tf" type="static_transform_publisher" name="velodyne_to_lidar_top" args="0 0 0 0 0 0 /velodyne /lidar_top 10" />

You can also use this command while the simulation is running to render the current frame tree as a PDF:

rosrun tf view_frames

At this point, we would like to point out three things about NDT matching:

  • NDT matching is computationally heavy, so downsample the point cloud data while preserving its features: enable voxel_grid_filter in the Sensing tab (a standalone sketch of this kind of filter follows after this list).
  • Enable ndt_matching from the Computing tab. If the initial pose is not accurate, localization will fail, so choose the initial position and angle carefully.
  • If the orientations of the map and the vehicle model are not aligned, change the yaw setting of Baselink to Localizer in the Setup tab and the yaw setting of the initial position in ndt_matching.
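Outside Autoware, the voxel-grid idea from the first point can be sketched with the Open3D library (our own example; Autoware's voxel_grid_filter node is a separate implementation). Each voxel keeps one representative point, shrinking the cloud while preserving its overall shape:

import open3d as o3d

# Load a scan; the file path is a placeholder.
pcd = o3d.io.read_point_cloud("scan.pcd")

# Keep one point per 0.5 m voxel; larger voxels mean fewer points.
down = pcd.voxel_down_sample(voxel_size=0.5)

print(len(pcd.points), "points ->", len(down.points), "points")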

Finally, all the settings are complete and the autonomous vehicle successfully localizes. In the following video, you can see how the scan data is matched with the pre-created map.

Result of the localization by NDT matching with Autoware

This article was brought to you by yodayoda Inc., your expert in automotive and robot mapping systems.
If you want to join our virtual bar time on Wednesdays at 9 pm PST/PDT, please send an email to talk_at_yodayoda.co, and don’t forget to subscribe.
