Kalman Filters, Pub/Sub, and Birdwatching

Tianzi Cai
Google Cloud - Community
8 min read · Dec 10, 2020

Unless you work in certain very specific fields like self-driving car engineering, you may not have heard of Kalman Filters, or you may have largely forgotten what you once learned about them in school. I myself had relegated them to the deep recesses of my mind. Admittedly, I have only really studied Kalman Filters in a Udacity nanodegree program on self-driving car engineering, and that was three years ago. But recently, two things happened: work organized a learning day event that let me pursue a topic of my own interest, and Pub/Sub released its official C++ client library along with support for ordering. So I want to revisit Kalman Filters and introduce you to using Pub/Sub with them, with a favorite pastime of mine, birdwatching, as an aid. I hope mixing in birding will make it easy and interesting for you to follow along. Let's get started!

What are Kalman Filters?

Kalman Filters (basic, extended, unscented, etc.) are algorithms for predicting variables in a dynamic system using continuous and imperfect measurements. Kalman Filters require us to know the underlying mathematical models that define the dynamic system and assume that the errors in the measurements follow Gaussian distributions.

How important are fresh measurement data in Kalman Filters? Very. Unlike time series forecasting models that must train on a large window of data with trend and seasonality, Kalman Filters bootstrap themselves with a single measurement and update themselves with each new one. They keep up with the high velocity of input data by being extremely lightweight, and their accuracy depends on a constant and reliable flow of input data.
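To make that concrete, here is a minimal sketch, in C++ with Eigen, of how such a filter might bootstrap itself from a single position measurement. The struct and function names are mine, purely for illustration, and not code from my homework:

```cpp
#include <Eigen/Dense>

// Illustrative only: bootstrap a 2-D position/velocity filter
// from the very first measurement.
struct FilterState {
  Eigen::VectorXd x;  // state: [px, py, vx, vy]
  Eigen::MatrixXd P;  // state covariance
};

FilterState InitializeFromFirstMeasurement(double px, double py) {
  FilterState s;
  s.x = Eigen::VectorXd(4);
  s.x << px, py, 0, 0;                       // no velocity information yet
  s.P = Eigen::MatrixXd::Identity(4, 4);
  s.P(2, 2) = 1000;                          // so express high uncertainty
  s.P(3, 3) = 1000;                          // in the velocity terms
  return s;
}
```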

As such, Kalman Filters excel at predicting the positions and motions of moving objects in real time from sensor data, as in a self-driving car. There, the input is typically a combination of readings from LiDAR (Light Detection And Ranging) and RADAR (RAdio Detection And Ranging) sensors, which capture the position and velocity of a nearby vehicle. When the self-driving car receives these readings, Kalman Filters use them to predict the vehicle's position and velocity at the next timestamp. Those predictions feed the car's other modules, such as path planning, and in the process the filters also update themselves to prepare for the next round of sensor input.

Knowing how different Kalman Filters work requires a fair amount of understanding of matrix multiplication and statistical models. But if you are comfortable with just the basics and want to develop an intuition for the steps involved, imagine you have a pair of binoculars and are birdwatching in nature. Through your binoculars, your eyes scan treetops for winged and feathered creatures. Just when you find one, it flaps its wings and jumps onto another branch. Your brain suddenly whirrs away, processing the information as fast as it can to carve out an area in your visual field where the bird has possibly landed. You aren't too bad at it to begin with, but as you keep your eyes open and keep paying attention, you keep getting better.

I took this picture of a house sparrow in April 2017, when I was studying in the Udacity program.

The birding scenario illustrates two computational phases shared by all Kalman Filters: prediction and measurement update. Assume x is a state vector representing the unknown variables in a system that we want to predict, and P is the covariance matrix of x. For example, x can be the positions (px, py) and velocities (vx, vy) of a moving object along the x and y axes of a 2-D world, and its covariance matrix P tells us the covariance between any pair of elements in x, that is, whether the variables are positively or negatively related. To make a prediction for x' and P', we apply the kinematic equation (distance equals velocity times time) to obtain x', then we apply the formulas for covariance to obtain P' and add some process noise. This prediction phase is like the brain guessing the rough location of a bird that you are tracking.
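If you prefer code to prose, here is what the prediction phase can look like with Eigen. F below is the state transition matrix that encodes "distance equals velocity times time", Q is the process noise, and the function name and 4-element state are illustrative:

```cpp
#include <Eigen/Dense>

// Prediction sketch: x' = F * x, P' = F * P * F^T + Q.
// dt is the time elapsed since the last measurement.
void Predict(Eigen::VectorXd& x, Eigen::MatrixXd& P,
             const Eigen::MatrixXd& Q, double dt) {
  Eigen::MatrixXd F = Eigen::MatrixXd::Identity(4, 4);
  F(0, 2) = dt;  // px' = px + vx * dt
  F(1, 3) = dt;  // py' = py + vy * dt
  x = F * x;
  P = F * P * F.transpose() + Q;  // propagate covariance, add process noise
}
```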

Coming to the measurement update phase, imagine that you see the branch where the bird has landed. Your brain must use this information to adjust the inner workings of its estimation process accordingly. How much to adjust? The Kalman Gain (K) is how much.

K = Error in the estimates / (Error in the estimates + Error in the measurements)

The Kalman Gain can be thought of as the ratio of the error in the estimates over the sum of the error in the estimates and the error in the measurements. When the error in the estimates is small relative to the error in the measurements, the Kalman Gain is small, and only a small portion of the difference between the measurements (z) and the previous estimates (H・x') will contribute to the updated state vector x = x' + K・(z - H・x'). Here, H is a measurement matrix that maps the state vector onto the measurement space, discarding elements of the state vector that we don't have measurements for. For instance, we may be able to measure positions but not velocities, in which case we need a measurement matrix that maps the state vector containing four elements (px, py, vx, vy) onto the measurement space containing just two elements (px, py).
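For that position-only example, H is simply a 2x4 matrix of zeros and ones. A small illustrative sketch (the helper name is made up):

```cpp
#include <Eigen/Dense>

// A measurement matrix H that keeps only the positions (px, py) from the
// state vector (px, py, vx, vy). With this H, H * x drops the velocity
// terms, so the residual z - H * x' compares like with like.
Eigen::MatrixXd MakePositionOnlyH() {
  Eigen::MatrixXd H(2, 4);
  H << 1, 0, 0, 0,
       0, 1, 0, 0;
  return H;
}
```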

In contrast, when the error in the estimates is large relative to the measurement error, the Kalman Gain is close to 1 / H (loosely speaking, the inverse of H), as given by K = P'・Hᵀ / (H・P'・Hᵀ + R), where Hᵀ is H transposed. This means that we can put more trust in the measurements (z), because K will almost filter x' out in x ≈ x' + 1 / H・(z - H・x').

Like our eyes, measurement instruments in the real world aren't perfect. However, we can predetermine the error in the measurements (R) by putting some reasonable bounds around it. To update the covariance matrix P, we use P = (I - K・H)・P', where I is an identity matrix.
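Putting the Kalman Gain and both update equations together, a minimal Eigen sketch of the measurement update might look like the following. Again, this is illustrative rather than my exact homework code:

```cpp
#include <Eigen/Dense>

// Measurement update sketch:
//   K = P' * H^T * (H * P' * H^T + R)^-1
//   x = x' + K * (z - H * x')
//   P = (I - K * H) * P'
void Update(Eigen::VectorXd& x, Eigen::MatrixXd& P,
            const Eigen::VectorXd& z, const Eigen::MatrixXd& H,
            const Eigen::MatrixXd& R) {
  Eigen::MatrixXd S = H * P * H.transpose() + R;
  Eigen::MatrixXd K = P * H.transpose() * S.inverse();  // Kalman Gain
  x = x + K * (z - H * x);                               // blend in residual
  Eigen::MatrixXd I = Eigen::MatrixXd::Identity(x.size(), x.size());
  P = (I - K * H) * P;                                   // shrink uncertainty
}
```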

With each round of new sensor input, we predict and we update. At the end of the prediction phase, we obtain new estimates for the state vector, x’ and its covariance matrix, P’. At the end of the measurement update phase, we obtain new x and P. The cycle continues with robust self-correction as long as the input data feed is alive.

Pub/Sub with Kalman Filters

This brings me to Pub/Sub. In my homework from 2017, I implemented an extended Kalman Filter to predict the positions and velocities of a moving vehicle. I read fake laser and radar data from a text file, used a while loop to process the input file line by line, and made a prediction from each line of input. I wrote the estimates to a local file and also stored them in an array to calculate the Root Mean Squared Error (RMSE) at the end. This kind of setup is very common in a classroom setting and is great for learning, but instead of reading sensor data from a file, can the same thing be done with Pub/Sub to emulate streaming data in the real world?
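For context, that RMSE calculation is just an element-wise average of squared errors between the estimates and the ground truth. Here is a hedged sketch; the starter code's helper may differ in its details:

```cpp
#include <Eigen/Dense>
#include <vector>

// Root Mean Squared Error between accumulated estimates and ground-truth
// vectors. Illustrative only.
Eigen::VectorXd CalculateRMSE(const std::vector<Eigen::VectorXd>& estimates,
                              const std::vector<Eigen::VectorXd>& ground_truth) {
  Eigen::VectorXd rmse = Eigen::VectorXd::Zero(4);
  if (estimates.empty() || estimates.size() != ground_truth.size()) return rmse;
  for (std::size_t i = 0; i < estimates.size(); ++i) {
    Eigen::VectorXd diff = estimates[i] - ground_truth[i];
    rmse += diff.cwiseProduct(diff);               // accumulate squared error
  }
  rmse /= static_cast<double>(estimates.size());   // mean
  return rmse.cwiseSqrt();                         // root
}
```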

If you haven’t heard of Pub/Sub, it is a scalable managed messaging service on Google Cloud Platform (GCP). It offers low latency and high throughput publishing and subscribing at scale (see quota limits). It guarantees at least once delivery of messages. It also supports one-to-many, many-to-one, and many-to-many relationships between publisher and subscriber clients.

If I could explain all of this to my 2017 self, who knew nothing about Pub/Sub back then, I would tell her something like this:

Imagine the sample data that Udacity uses to grade my homework is not a file that I download from GitHub but a real streaming source in a different part of the world. My instructors have decided to publish the data 24/7 to a Pub/Sub topic. To test the code for an extended Kalman Filter, every student, no matter where we are in the world, must create a Pub/Sub subscription to this topic to receive the same copy of the data that the instructors publish. As before, my homework will still be graded on how close my estimates are to the ground truth, using RMSE. Bonus points for me, too, if my program can keep up with the rate at which the data is published to the topic.

This made-up homework would have been impossible to do in C++ in 2017, because Pub/Sub did not have a public-facing C++ client library then. In addition, the lack of an ordering guarantee would have required a non-trivial amount of engineering work to deal with out-of-order data. Luckily, neither constraint holds today. I followed the C++ development environment setup instructions to install and import the Pub/Sub C++ client library, and I refactored my old code to read from a Pub/Sub subscription instead of an input file.

Here is a breakdown of my subscriber code inside a lambda named session in main.cpp; a stripped-down sketch follows the list.

  1. session receives a Pub/Sub message with fake laser and radar data.
  2. It calls PreprocessPackages() to preprocess the message.
  3. It calls ProcessMeasurement() to predict and update.
  4. It calls CalculateRMSEContinuous() to evaluate the EKF.
  5. It acknowledges the message.
  6. It expires after 10K messages have arrived or 30 seconds have passed, whichever comes first.
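For reference, here is what that subscriber shape looks like with the Pub/Sub C++ client library. The EKF-specific steps are collapsed into a placeholder, the project and subscription IDs are made up, and you should check the client library docs for the exact API in the version you install:

```cpp
#include "google/cloud/pubsub/subscriber.h"
#include <chrono>
#include <future>
#include <iostream>

// Stripped-down sketch of the subscriber flow described in the list above.
int main() {
  namespace pubsub = ::google::cloud::pubsub;
  auto subscriber = pubsub::Subscriber(pubsub::MakeSubscriberConnection(
      pubsub::Subscription("my-project", "my-subscription")));

  auto session = subscriber.Subscribe(
      [](pubsub::Message const& m, pubsub::AckHandler h) {
        // 1-4. preprocess the message, predict, update, and evaluate
        //      (placeholder for PreprocessPackages(), ProcessMeasurement(),
        //       and CalculateRMSEContinuous())
        std::cout << "received: " << m.data() << "\n";
        // 5. acknowledge the message
        std::move(h).ack();
      });

  // 6. the real code stops after 10K messages or 30 seconds, whichever
  //    comes first; this sketch simply stops after 30 seconds.
  if (session.wait_for(std::chrono::seconds(30)) == std::future_status::timeout) {
    session.cancel();
  }
}
```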

I also included a Python script and a bash script to show how to publish mocked LiDAR and RADAR data with an ordering key. The use of an ordering key ensures that any data published with that key are delivered in the order in which Pub/Sub received them. In case data are published to Pub/Sub out of order, which could happen if a sensor has malfunctioned or multiple publisher clients are used, I make sure to dismiss the late data by comparing timestamps in FusionEKF.cpp. When I run my program, I see it spit out new sensor data, new estimates, and new RMSE values in real time.
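The publisher in my repo is a Python script, but for completeness, here is a hedged sketch of the same idea with the C++ client library. The topic name and the data line are made up, and the way message ordering is enabled on the publisher may differ between client library versions:

```cpp
#include "google/cloud/pubsub/publisher.h"
#include "google/cloud/pubsub/options.h"
#include "google/cloud/options.h"
#include <iostream>

// Illustrative sketch: publish a mocked sensor reading with an ordering key.
int main() {
  namespace pubsub = ::google::cloud::pubsub;
  auto publisher = pubsub::Publisher(pubsub::MakePublisherConnection(
      pubsub::Topic("my-project", "my-topic"),
      google::cloud::Options{}.set<pubsub::MessageOrderingOption>(true)));

  // All messages that share an ordering key are delivered in the order
  // Pub/Sub received them. The payload below is a made-up line roughly in
  // the homework's laser format.
  auto id = publisher
                .Publish(pubsub::MessageBuilder{}
                             .SetData("L\t3.122e-01\t5.803e-01\t1477010443000000")
                             .SetOrderingKey("sensor-stream")
                             .Build())
                .get();
  if (!id) std::cerr << "publish failed: " << id.status() << "\n";
}
```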

In this article, I went over how Kalman Filters work and how to use Pub/Sub with them. The code example shows how to use an extended Kalman Filter with a Pub/Sub subscriber client written in C++. The output you see above is entirely reproducible by following the instructions in this GitHub repo. I hope that you will have a chance to check it out, and that, like me, the next time you think of Kalman Filters and Pub/Sub, you will think not only of real-time streaming and prediction, but also of a little fun birdwatching. Thanks for reading!

Published in Google Cloud - Community

A collection of technical articles and blogs published or curated by Google Cloud Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.

Written by Tianzi Cai

Developer Relations Engineer, Google Cloud Pub/Sub