Building an Autonomous Vehicle Part 4.1: Sensor Fusion and Object Tracking using Kalman Filters

Akhil Suri
5 min read · May 15, 2018

--

Image Source: (Click Here)

A self-driving car needs a map of the world around it as it drives. It must continuously track cars, bikes, trucks, pedestrians and other moving objects on the road, and to track these movements it takes continuous, dynamic measurements from sensors such as Radar and Lidar. The challenge is that these measurements are uncertain, for reasons that include, but are not limited to, noise in an individual sensor and discrepant measurements from multiple sensors. To reduce this error and obtain accurate estimates we use an algorithm known as the Kalman Filter.

Before moving forward on Kalman Filters, let's read about some of the sensors that we use in self-driving cars :)

LIDAR uses a laser for measurement and generates a point cloud of the world around it, providing the car with fairly accurate position x and position y values. It is able to detect objects in the vicinity of the car (20–40 m) with very high accuracy. However, LIDAR is not very accurate in poor weather conditions or if the sensor gets dirty. A LIDAR point cloud looks like this:

Source: (Click Here)

RADAR, on the other hand, is less accurate but is able to provide an estimate of both the position and the velocity of an object. The velocity is estimated using the Doppler effect, as seen in the image below. RADAR is able to detect objects up to 200 m from the car, and it is also less affected by weather conditions.

Source: (Click Here)

Now that you have an idea of what these sensors are, we can move on to Kalman Filters :)

The Kalman Filter, also known as linear quadratic estimation (LQE), is an algorithm that helps us obtain more reliable estimates from a sequence of observed (sensor) measurements. 😴

It can be used to track the position and velocity of a moving pedestrian over time, along with the uncertainty associated with them. It is basically a two-step iterative process.

  1. Predict 🤔
  2. Update ✍️
Source: (Click Here)

In the Predict step, we predict the new position of a pedestrian 🚶 based on the previous position, assuming they are moving with a constant velocity. We also predict the uncertainty/error/variance of this prediction according to the process noise present in the system.
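To make this concrete, here is a minimal sketch of the predict step in Python/NumPy. It is my own illustrative example (not the exact code from my GitHub repo), assuming a 1D constant-velocity model where the state is [position, velocity] and dt is the time between measurements:

```python
import numpy as np

# State: [position, velocity] -- assumed 1D constant-velocity model
x = np.array([[0.0],    # position (m)
              [1.0]])   # velocity (m/s)
P = np.eye(2) * 1000.0  # large initial uncertainty (we start with a "wrong" belief)

dt = 0.1                          # time step between measurements (s), assumed
F = np.array([[1.0, dt],          # state transition: new_pos = pos + vel * dt
              [0.0, 1.0]])
Q = np.array([[0.001, 0.0],       # process noise: how much the constant-velocity
              [0.0,   0.001]])    # assumption is allowed to be violated

def predict(x, P):
    """Project the state and its uncertainty one time step ahead."""
    x = F @ x               # x' = F x
    P = F @ P @ F.T + Q     # P' = F P F^T + Q  (uncertainty grows with process noise)
    return x, P
```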

In the Update step, we take into account the actual measurements 📐 coming from the sensors to correct our estimates. To do that, we first calculate the difference between our predicted value and the measured value, and then decide how much weight to give each one by calculating the Kalman Gain. We then calculate the new value (new belief/position) and the new uncertainty/error/variance based on that weighting. These calculated values are the output of the Kalman Filter for this time step and are fed back into the prediction step for the next iteration.
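Continuing the same illustrative sketch, the update step could look like this, assuming a sensor that measures position only (H and R below are my assumed measurement matrix and measurement noise, not values from any specific sensor):

```python
H = np.array([[1.0, 0.0]])   # measurement matrix: the sensor observes position only
R = np.array([[9.0]])        # measurement noise variance (std dev of 3 m, squared)

def update(x, P, z):
    """Correct the predicted state with a position measurement z (1x1 array)."""
    y = z - H @ x                       # innovation: measured minus predicted
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman Gain: how much to trust the measurement
    x = x + K @ y                       # new belief (position and velocity)
    P = (np.eye(2) - K @ H) @ P         # new, smaller uncertainty
    return x, P
```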

Now you must be wondering: what is the Kalman Gain? 😳

The Kalman Gain is a parameter that decides how much weight should be given to the predicted value and the measured value. It compares the uncertainty in the predicted value with the uncertainty in the measured value, and from that decides whether our new estimate should sit closer to the predicted value or to the measured value.

K = Error in Prediction / (Error in Prediction + Error in Measurement)

The Error in Measurement is generally given by the sensor manufacturer. When we buy a new sensor, the manufacturer tells us the standard deviation of the measurements we'll get from it. For example, if the standard deviation is 3 and the true value is 150, the sensor will typically give us an output in the range 147–153.

The Error in Prediction is calculated mathematically. We initially start with a wrong belief (large error) and then gradually reduce the error (using the Kalman Gain) after taking the first few measurements from the sensor.
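Plugging the two error terms into the gain formula above gives a feel for how the blending works. In this little worked example only the sensor standard deviation of 3 comes from above; the prediction error of 5 and the values 148/150 are made-up numbers for illustration:

```python
prediction_error  = 5.0    # assumed variance of our current prediction (illustrative)
measurement_error = 9.0    # sensor variance (std dev 3, squared)

K = prediction_error / (prediction_error + measurement_error)    # ≈ 0.36

predicted_value = 148.0
measured_value  = 150.0

new_value = predicted_value + K * (measured_value - predicted_value)
new_error = (1 - K) * prediction_error    # the uncertainty always shrinks after an update

print(K, new_value, new_error)            # ≈ 0.357, ≈ 148.71, ≈ 3.21
```

Because the prediction is (in this example) more certain than the measurement, the gain is below 0.5 and the new estimate stays closer to the predicted value.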

Source: (Click Here)

The Kalman Filter comprises this set of equations. I could explain each of them here as well, but I feel it would be better if you go through this amazing video lecture series on Kalman Filters by Michel van Biezen.
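If you want to see the whole predict–update loop running end to end, here is a small toy example that reuses the predict and update sketches from above. It tracks a (simulated) pedestrian walking at roughly 1 m/s from noisy position measurements; all the numbers are illustrative assumptions, not real sensor data:

```python
np.random.seed(0)

# Simulated ground truth and noisy sensor readings (std dev 3 m, as in the example above)
true_positions = [1.0 * 0.1 * k for k in range(50)]                       # 1 m/s, dt = 0.1 s
measurements   = [p + np.random.normal(0, 3.0) for p in true_positions]

x_est = np.array([[0.0], [0.0]])   # start with a deliberately wrong belief
P_est = np.eye(2) * 1000.0         # ...and a very large uncertainty

for z in measurements:
    x_est, P_est = predict(x_est, P_est)                    # 1. Predict 🤔
    x_est, P_est = update(x_est, P_est, np.array([[z]]))    # 2. Update ✍️

print("estimated position:", x_est[0, 0])
print("estimated velocity:", x_est[1, 0])
print("position variance :", P_est[0, 0])
```

After a few dozen measurements the position variance has shrunk from 1000 to a small number, which is exactly the "reducing the error gradually" behaviour described above.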

Conclusion and Discussion:

To summarize all the equations, have a look at the image below. You can see that we first predict the cart's position and then combine the measured value with the predicted value to get the best estimate. Here the Gaussian distribution shows the probability of the cart being at a particular position, and the width of the Gaussian tells us the possible error/deviation of the predicted/measured value from the actual value. A wider curve means the belief is less accurate, whereas a narrower curve means the belief is more accurate. So we use the Kalman Filter to reduce the width of these Gaussian distributions and make sure that our final calculated belief (position) is very accurate.

Taken from somewhere in the internet :)
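That narrowing of the Gaussian can also be written down directly in one dimension. This is a sketch of the standard formulas for combining two Gaussian beliefs (prediction and measurement), nothing specific to my implementation; the numbers reuse the earlier made-up example:

```python
def fuse_gaussians(mean1, var1, mean2, var2):
    """Combine a predicted belief and a measurement, both Gaussian, into one belief."""
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var  = 1.0 / (1.0 / var1 + 1.0 / var2)   # always smaller than both var1 and var2
    return new_mean, new_var

# e.g. prediction N(148, 5) fused with measurement N(150, 9)
print(fuse_gaussians(148.0, 5.0, 150.0, 9.0))    # (≈ 148.71, ≈ 3.21)
```

The fused variance is smaller than either input variance, which is why the resulting curve is always narrower than the two curves we started with.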

I would like to thank Infosys for giving me the opportunity to learn this amazing technology and Udacity for helping me understand these concepts through their lectures.

You can find my implementation of Kalman Filters and other self-driving-car related projects on my GitHub here 👻. Please feel free to leave any suggestions, corrections or additions in the comments :)

To be continued…. Part 4.2 and 4.3 coming soon 🤞

Edit-1: You can read about Extended Kalman Filters(Part 4.2) here.

Edit-2: Read about Unscented Kalman Filters(Part 4.3) here.
