Building an Autonomous Vehicle Part 4.2: Sensor Fusion and Object Tracking using Extended Kalman Filters

Akhil Suri
4 min read · Jun 17, 2018



This post is a continuation of my last post on the Kalman Filter. In that post we talked about improving the accuracy of the position estimates of other objects in our surroundings by combining different sensor measurements. But you must have wondered: where exactly do we combine the data from these sensors?

Unfortunately, combining data from different sensors is not easy. The radar sensor gives its readings in polar coordinates, while the lidar gives its data in Cartesian coordinates, so the two cannot simply be fed into the same equations.

Before moving forward, let me explain the difference between the Cartesian and polar coordinate systems.

Cartesian Coordinate System:
The Cartesian coordinate system is a two-dimensional coordinate system based on a rectilinear grid. Using Cartesian coordinates we mark a point by how far along and how far up it is from the origin.


Polar Coordinate System:
The polar coordinate system is a two-dimensional coordinate system based on a polar grid. Using polar coordinates we mark a point by how far away it is from the origin and at what angle it lies, measured from a reference axis.

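For reference, here are the standard conversion formulas between the two systems, written with r for the distance from the origin and θ for the angle (these correspond to the ρ and φ the radar reports later in this post):

```latex
x = r\cos\theta, \qquad y = r\sin\theta
\qquad\text{and}\qquad
r = \sqrt{x^{2} + y^{2}}, \qquad \theta = \operatorname{atan2}(y, x)
```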

The mathematical formulas we use in Kalman Filters all assume linear functions of the type y = ax + b, where the constant ‘a’ is called the slope and ‘b’ is called the y-intercept. Below is a sample graph of a linear function.

Linear Function: (Source)

A Kalman filter always works with linear functions, but most real-world problems involve non-linear functions. In many cases the system is looking in one direction while taking a measurement in another direction. This involves angles, and therefore sine and cosine functions, which are non-linear and lead to problems.
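For contrast, here is a minimal sketch of what the linear case looks like in code. Since the lidar already reports positions in Cartesian coordinates, its measurement model is just a matrix multiply, with no angles involved. The 4-dimensional state (px, py, vx, vy) and the Eigen-based style here are assumptions for illustration, not code taken from my project:

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  // State vector: px, py, vx, vy (positions and velocities).
  Eigen::VectorXd x(4);
  x << 1.0, 2.0, 0.5, -0.3;

  // Lidar measurement matrix: the sensor observes px and py directly,
  // so predicting the measurement is a purely linear operation.
  Eigen::MatrixXd H(2, 4);
  H << 1, 0, 0, 0,
       0, 1, 0, 0;

  Eigen::VectorXd z_pred = H * x;          // predicted lidar measurement
  std::cout << z_pred.transpose() << "\n"; // prints: 1 2
  return 0;
}
```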

Non Linear Function: (Source)
Source: Udacity SDC Lecture Videos

As I already mentioned, the radar sees the world in polar coordinates. It gives us three measurements:

  • ρ (rho): The distance to the object being tracked.
  • φ (phi): The angle between the x-axis and the object.
  • ρ̇ (rhodot): The rate of change of ρ, i.e. the radial velocity.

These three values make our measurement function non-linear because of the angle φ. If we push a Gaussian estimate through such a non-linear function, the result is no longer a uni-modal Gaussian, and the standard Kalman filter can no longer estimate position and velocity correctly.
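To see where that non-linearity comes from, here is a minimal sketch of the measurement function h(x) that maps the Cartesian state (px, py, vx, vy) to the radar's (ρ, φ, ρ̇). This is my own illustration in C++ with Eigen, not code lifted from the project; the square root and atan2 are exactly what break linearity:

```cpp
#include <cmath>
#include <Eigen/Dense>

// Radar measurement function h(x): maps the Cartesian state
// (px, py, vx, vy) to the polar measurement (rho, phi, rho_dot).
Eigen::VectorXd RadarMeasurement(const Eigen::VectorXd& x) {
  const double px = x(0), py = x(1), vx = x(2), vy = x(3);

  const double rho = std::sqrt(px * px + py * py);
  const double phi = std::atan2(py, px);
  // Radial velocity: projection of (vx, vy) onto the line of sight.
  // Guard against division by zero when the object sits at the origin.
  const double rho_dot = (rho > 1e-6) ? (px * vx + py * vy) / rho : 0.0;

  Eigen::VectorXd z(3);
  z << rho, phi, rho_dot;
  return z;
}
```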

Source: Udacity SDC Lecture Videos

We can see in the above image that the distribution produced by the non-linear input is no longer a normal distribution. So our goal here is to convert the radar data ρ, φ, ρ̇ to the Cartesian data (px, py, vx, vy).
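Going in the other direction, a radar measurement can be mapped back to an approximate Cartesian state, for example to initialise the filter. A small sketch (again my own illustration; the function name is made up) could look like this. Note that ρ̇ only carries the radial part of the velocity, so vx and vy are rough approximations rather than the true velocity:

```cpp
#include <cmath>
#include <Eigen/Dense>

// Convert a radar measurement (rho, phi, rho_dot) into an approximate
// Cartesian state (px, py, vx, vy), e.g. for initialising the filter.
Eigen::VectorXd RadarToCartesian(double rho, double phi, double rho_dot) {
  Eigen::VectorXd x(4);
  x << rho * std::cos(phi),       // px
       rho * std::sin(phi),       // py
       rho_dot * std::cos(phi),   // vx (approximation only)
       rho_dot * std::sin(phi);   // vy (approximation only)
  return x;
}
```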

The Extended Kalman Filter (EKF) offers a solution to this problem. The EKF uses a Taylor expansion to construct a linear approximation of the non-linear function h(x):

Gaussian distribution after applying a first order Taylor expansion

We pick a point and take derivatives of the function at that point. In the case of an EKF, we take the mean of the Gaussian on the non-linear curve and use the derivatives there to build the approximation.

Taylor Series
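Written out, keeping only the first-order term of the Taylor series around the mean μ gives the linear approximation the EKF works with:

```latex
h(x) \approx h(\mu) + \frac{\partial h}{\partial x}\bigg|_{x=\mu}\,(x - \mu)
```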

For every non-linear function, we effectively draw a tangent at the mean and approximate the function linearly around it. We then plug that linear approximation into the Kalman Filter equations to keep improving the accuracy of our position estimates.
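In practice, that "tangent" is the Jacobian matrix of h(x) evaluated at the current state estimate, and it takes the place of the measurement matrix in the update step. Here is a minimal sketch of how it can be computed for the radar function above (my own illustration; the structure in my project code may differ):

```cpp
#include <cmath>
#include <Eigen/Dense>

// Jacobian of the radar measurement function h(x) with respect to the
// state (px, py, vx, vy), evaluated at the current state estimate.
// This 3x4 matrix is the linearisation the EKF uses in its update step.
Eigen::MatrixXd CalculateJacobian(const Eigen::VectorXd& x) {
  const double px = x(0), py = x(1), vx = x(2), vy = x(3);

  const double c1 = px * px + py * py;   // rho^2
  const double c2 = std::sqrt(c1);       // rho
  const double c3 = c1 * c2;             // rho^3

  Eigen::MatrixXd Hj = Eigen::MatrixXd::Zero(3, 4);
  if (c1 < 1e-6) return Hj;              // avoid division by zero

  Hj << px / c2,  py / c2,  0,       0,
        -py / c1, px / c1,  0,       0,
        py * (vx * py - vy * px) / c3, px * (vy * px - vx * py) / c3,
        px / c2,  py / c2;
  return Hj;
}
```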

This was a brief overview and may have been a little boring 😝, but if you want to dig deeper 🕵️ into the mathematical side of Extended Kalman Filters, I would suggest going through this amazing video lecture series on Kalman Filters by Michel van Biezen 👦.

You can find my implementation of Extended Kalman Filters and other Self-Driving Car related projects on my GitHub here 👻. Please feel free to leave any suggestions, corrections or additions in the comments :)

To be continued…. Part 4.3 — Unscented Kalman Filters coming soon 🤞

Edit-1: Read about Unscented Kalman Filters(Part 4.3) here.
