NDT Matching

David Silver

Published in Self-Driving Cars · Jul 12, 2017

In the final project of the Udacity Self-Driving Car Nanodegree Program, students write code to drive Udacity’s very own self-driving car.

As with almost any type of computer programming, however, we’re not starting from scratch. There are existing operating systems, middleware, and libraries that students get to build on to drive the car.

One of these libraries is Autoware, an open-source self-driving car software stack maintained by Tier IV. We use Autoware particularly for its localization functions, which combine our lidar data with a high-definition lidar map to figure out where our vehicle is in the world.

The specific localization algorithm that Autoware uses is called normal distributions transform (NDT) matching, originally developed by Peter Biber at the University of Tübingen. NDT is a little different from the particle filter localization we’ve worked with previously, so I’ve spent time over the last few days reviewing how it works.
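
To get a concrete feel for what calling an NDT localizer looks like, here is a rough sketch using the general-purpose NDT implementation that ships with the Point Cloud Library (PCL), which Autoware itself builds on. The file names and parameter values below are purely illustrative, not Autoware’s actual configuration.

```cpp
#include <iostream>

#include <Eigen/Dense>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/ndt.h>

int main() {
  // Load the high-definition map and the current lidar scan.
  // (The .pcd file names here are placeholders.)
  pcl::PointCloud<pcl::PointXYZ>::Ptr map(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr scan(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile("map.pcd", *map);
  pcl::io::loadPCDFile("scan.pcd", *scan);

  // Downsample the live scan so the optimization runs quickly.
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setLeafSize(1.0f, 1.0f, 1.0f);
  voxel.setInputCloud(scan);
  voxel.filter(*filtered);

  // Configure NDT: the resolution is the size of the 3D boxes the map is
  // divided into; the other parameters control the optimization.
  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0);
  ndt.setStepSize(0.1);
  ndt.setTransformationEpsilon(0.01);
  ndt.setMaximumIterations(35);
  ndt.setInputSource(filtered);  // the current scan
  ndt.setInputTarget(map);       // the map we localize against

  // Align the scan to the map, starting from an initial guess
  // (in a real localizer, the last known vehicle pose).
  Eigen::Matrix4f initial_guess = Eigen::Matrix4f::Identity();
  pcl::PointCloud<pcl::PointXYZ> aligned;
  ndt.align(aligned, initial_guess);

  // The resulting transform is the vehicle's pose in the map frame.
  Eigen::Matrix4f pose = ndt.getFinalTransformation();
  std::cout << "converged: " << ndt.hasConverged()
            << "  score: " << ndt.getFitnessScore() << "\n"
            << pose << std::endl;
  return 0;
}
```

The interesting question is what that align() call is actually doing under the hood, which is what the rest of this post is about.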

Localization

In order to figure out where we are in the world, we’ll probably use a map. There’s a whole branch of localization called simultaneous localization and mapping (SLAM), in which the vehicle builds the map and localizes itself within it at the same time, but that’s hard. It’s easier just to have a map, so we’ll assume we have one.

A lidar point cloud map of the Udacity parking lot, tilted at an angle.

To figure out where we are, we take our own lidar scan and compare what we see to this map. You can basically imagine lining up the points and asking: given what our current laser scan shows, where must we be in this map?

One problem: our points will probably be a little off from the map. Measurement errors make the points slightly misaligned, and the world might change a little between when we record the map and when we make our new scan.

NDT matching provides a solution for these minor errors. Instead of trying to match points from our current scan to points on the map, we match points from our current scan to a grid of probability functions created from the map.

A probability density function.

We break the point cloud map into three-dimensional boxes and essentially assign a probability distribution to each box. The image above is actually a 2D probability density function, but we can build a 3D function following the same principles.
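
As a concrete sketch of that grid-building step (this is illustrative C++ with Eigen, not Autoware’s actual code), we can bucket the map points into cubic boxes and fit a mean and covariance to the points in each one:

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

#include <Eigen/Dense>

// One cell of the NDT grid: the map points that fall inside the box are
// summarized by a mean and covariance, i.e. a 3D normal distribution.
struct NdtCell {
  Eigen::Vector3d mean = Eigen::Vector3d::Zero();
  Eigen::Matrix3d cov  = Eigen::Matrix3d::Zero();
};

using CellIndex = std::tuple<int, int, int>;

CellIndex cellIndexOf(const Eigen::Vector3d& p, double cell_size) {
  return {static_cast<int>(std::floor(p.x() / cell_size)),
          static_cast<int>(std::floor(p.y() / cell_size)),
          static_cast<int>(std::floor(p.z() / cell_size))};
}

// Build the NDT grid from the map point cloud. cell_size is the edge length
// of each cubic box (values around 1 meter are common in practice).
std::map<CellIndex, NdtCell> buildNdtGrid(
    const std::vector<Eigen::Vector3d>& map_points, double cell_size) {
  // Group the map points by the box they fall into.
  std::map<CellIndex, std::vector<Eigen::Vector3d>> boxes;
  for (const auto& p : map_points) {
    boxes[cellIndexOf(p, cell_size)].push_back(p);
  }

  // Fit a normal distribution (sample mean and covariance) to each box.
  std::map<CellIndex, NdtCell> grid;
  for (const auto& [index, points] : boxes) {
    if (points.size() < 5) continue;  // too few points for a stable covariance
    NdtCell cell;
    for (const auto& p : points) cell.mean += p;
    cell.mean /= static_cast<double>(points.size());
    for (const auto& p : points) {
      const Eigen::Vector3d d = p - cell.mean;
      cell.cov += d * d.transpose();
    }
    cell.cov /= static_cast<double>(points.size() - 1);
    grid[index] = cell;
  }
  return grid;
}
```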

This way, if we detect a point a few millimeters away from where the map thinks a point should be, instead of being completely unable to match those two points, our NDT matching function connects the detected point to the probability function for that part of the map. We get a kind of “near match”.
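
Continuing the sketch above, that “near match” amounts to evaluating the Gaussian of whichever box a scan point lands in, and summing over the whole scan gives a score for a candidate vehicle pose. Again, this is an illustrative sketch rather than Autoware’s implementation:

```cpp
#include <cmath>
#include <map>
#include <vector>

#include <Eigen/Dense>

// Score one scan point, already transformed into the map frame by a candidate
// vehicle pose, against the normal distribution of the box it lands in.
// A point a few millimeters off the map surface still scores nearly as high
// as a perfect hit -- that is the "near match" described above.
double scorePoint(const Eigen::Vector3d& point, const NdtCell& cell) {
  const Eigen::Vector3d d = point - cell.mean;
  // Unnormalized Gaussian density: exp(-0.5 * d^T * Sigma^-1 * d).
  const double mahalanobis_sq = d.dot(cell.cov.inverse() * d);
  return std::exp(-0.5 * mahalanobis_sq);
}

// Sum the score over the whole scan for one candidate pose.
double scoreScan(const std::vector<Eigen::Vector3d>& scan_in_map_frame,
                 const std::map<CellIndex, NdtCell>& grid,
                 double cell_size) {
  double total = 0.0;
  for (const auto& p : scan_in_map_frame) {
    const auto it = grid.find(cellIndexOf(p, cell_size));
    if (it != grid.end()) {
      total += scorePoint(p, it->second);
    }
  }
  return total;
}
```

The localizer’s job is then to find the pose whose transformed scan maximizes this total score; Biber’s original paper does that with Newton’s method on the score’s gradient and Hessian.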

For anybody who’s taken Udacity’s lessons on particle filters, or studied them elsewhere, there is a whole separate question of the Monte Carlo randomization that particle filters use. It seems like that could be applied to NDT matching in much the same fashion, and indeed there is a paper called “Normal distributions transform Monte-Carlo localization (NDT-MCL)” by Saarinen et al. that seems to work out the details, although I haven’t gone through it in depth.
