Tracking cars and pedestrians from an autonomous vehicle

In this blog post we will go through some of the methods autonomous vehicles use, in combination with LIDAR and radar sensors, to model (track) their surroundings (other cars, pedestrians, etc.).

Tracking an object and its internal (unobservable) states (like position and velocity) using sensor data is a well-researched area outside the self-driving car industry. The same problems are found in guidance, navigation and other control systems. In the 1960s, a set of algorithms was developed (namely the Kalman filter and its variations) which was used to model the Apollo spacecraft trajectories to the Moon and back. The same algorithms can be applied in an autonomous vehicle to estimate the position and velocity of an approaching car, for example.

Sensor Fusion

Combining data from two or more sensors is called sensor fusion. Sensor fusion enables a self-driving car to model its environment.

You might be asking yourself “Why do I need more than one sensor in the first place?”. That’s a really good question.

Generally, none of the sensors will give us a 100% correct observation of the car we are trying to track. All the measurements from the sensors come with a margin of error (noise). Also, not all sensors work equally well in all weather conditions. For a more detailed sensor comparison, check out the picture below.

Sensor comparison chart

The Kalman filter solves this problem for us. It is capable of taking all the sensor data (with the included measurement noise) and estimating the unobservable state (position and velocity, for example) of the tracked object.

Kalman filter in more detail

The Kalman filter is a series of prediction and update steps which are executed as the sensor data flows in:

Kalman filter flow for LIDAR and radar sensors

Let’s look at a simplified version of the Kalman filter in more detail, using a linear motion model of the tracked object and LIDAR data only.

In the prediction step, the current state (x′) is estimated based on the previous state (x), the corresponding motion model (F, also called the state transition matrix) and the process noise (ν).
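In standard notation:

x' = F x + \nu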

For a state that contains position (p) and velocity (v), the above equation can be written in matrix form.
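Assuming a constant-velocity model, with Δt denoting the time elapsed between two measurements:

\begin{bmatrix} p' \\ v' \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} p \\ v \end{bmatrix}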

This is equivalent to:
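p' = p + \Delta t \cdot v

v' = v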

The estimated state itself can be written as a vector:
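x = \begin{bmatrix} p \\ v \end{bmatrix}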

We are ignoring the process noise (ν) here for simplicity.

We also need to calculate the new covariance matrix (P′), a measure of the accuracy of the state estimate, based on the previous one (P).
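Following the standard formulation, where Q denotes the process noise covariance:

P' = F P F^T + Q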

In the update step, we take the new reading (z) from the sensor and calculate the measurement update (y):
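y = z - H x'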

The H matrix in this case is a transformation matrix (often called the measurement matrix) which maps our model (state) space into the sensor measurement space, so that the prediction can be compared with the reading.
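As a simple illustration, assuming the simplified state (p, v) from above and a sensor that measures position only, H just selects the position component of the estimated state:

H = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad H x' = p'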

The next step is calculating the Kalman gain (K):
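K = P' H^T S^{-1}, \quad S = H P' H^T + R

Here R is the measurement noise covariance of the sensor and S is usually called the innovation (or residual) covariance.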

The Kalman gain basically tells us which one to trust more, the estimated state or the sensor measurement. With it we can compute the new state estimate and its covariance.
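In the standard formulation (with I being the identity matrix):

x = x' + K y

P = (I - K H) P'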

If the Kalman gain is larger, the sensor data will have a bigger impact on the calculation of the new estimated state. If it is smaller, we will rely more on the mathematical model of the estimated state.
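To make the flow above more concrete, here is a minimal sketch of the predict/update cycle in C++ using the Eigen library. The struct and its member names are purely illustrative and not taken from any particular implementation.

    #include <Eigen/Dense>

    using Eigen::MatrixXd;
    using Eigen::VectorXd;

    // Minimal linear Kalman filter holding the state x, its covariance P
    // and the matrices F, Q, H, R introduced above.
    struct KalmanFilter {
      VectorXd x;  // state estimate (e.g. position and velocity)
      MatrixXd P;  // state covariance
      MatrixXd F;  // state transition matrix
      MatrixXd Q;  // process noise covariance
      MatrixXd H;  // measurement matrix
      MatrixXd R;  // measurement noise covariance

      // Prediction step: project the state and its covariance forward.
      void Predict() {
        x = F * x;
        P = F * P * F.transpose() + Q;
      }

      // Update step: correct the prediction with a new measurement z.
      void Update(const VectorXd& z) {
        VectorXd y = z - H * x;                        // measurement update
        MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
        MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
        x = x + K * y;
        MatrixXd I = MatrixXd::Identity(x.size(), x.size());
        P = (I - K * H) * P;
      }
    };

In the flow from the diagram above, Predict() runs every time a new measurement arrives, followed by Update() with that measurement.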

Extended Kalman filter

Once we introduce radar data into our Kalman filter flow, things get a little bit more complicated, since mapping the radar data into our estimated model space is done through non-linear functions.

We would need to move from the linear mapping through the H matrix:
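y = z - H x'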

to something which can translate the estimated state into the measurement space:
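y = z - h(x')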

This can be accomplished with the following non-linear vector function, which maps our Cartesian state into the radar's polar coordinates:
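Assuming the state contains the positions (px, py) and velocities (vx, vy), and the radar measures the range ρ, the bearing φ and the range rate ρ̇:

h(x') = \begin{bmatrix} \rho \\ \varphi \\ \dot{\rho} \end{bmatrix} = \begin{bmatrix} \sqrt{p_x^2 + p_y^2} \\ \arctan(p_y / p_x) \\ (p_x v_x + p_y v_y) / \sqrt{p_x^2 + p_y^2} \end{bmatrix}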

Using a Taylor series we can approximate the non-linear function above with a linear one.
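The usual first-order approximation around the current estimate μ is:

h(x) \approx h(\mu) + \frac{\partial h(\mu)}{\partial x} (x - \mu)

The matrix of partial derivatives (the Jacobian of h) then takes the place of H in the update equations, which is what turns the filter into an Extended Kalman filter.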

Implementation

A C++ implementation of the Extended Kalman filter for LIDAR and radar data can be found on my GitHub profile: