Kalman Filter — for estimation and tracking.

  • What? To be written.
  • When and Where?

Wherever there is uncertainty. Legit.

Since I’m primarily interested in perception: a very important task in robotic perception is localization, which maps to tracking (pun unintended, maps people).

Which means: figure out the location of an object at any given point in time ‘t’. When you have the location at a bunch of different time points ‘t1, t2, …, tn’, you have a track.

  • Why is it called a filter?
    Because it recursively takes noisy measurements into account and corrects its estimates, thereby effectively filtering out the noise.
  • In essence, two steps — 
    > Predict (Time update)
    > Update (State update)
    If the object to be tracked is, say, a ground-moving bot, then
    Predict — A control vector (an action) is applied to the bot, which causes it to move (a change in the state space). So if the state space comprises position and velocity at time t, then some action (e.g. movement) causes a change to the position and velocity at time t+1.
    Update — The predict step, as previously mentioned, gives only an estimate of where our robot could be, which means it is not exact. Our other sensors give measurements that say, “Hey, we know you predicted that this is where the bot should be at this point in time, but from what we’re seeing, that’s not completely true, so use what we are seeing to make corrections (updates) to your estimates (predicted values).”
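The predict/update cycle above can be sketched in a few lines of numpy. This is a minimal illustration for a 1-D constant-velocity bot (state = [position, velocity]); the transition/measurement matrices and noise values are made-up illustrative numbers, not from any real robot.

```python
import numpy as np

dt = 1.0                                   # time step
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (constant velocity)
B = np.array([[0.5 * dt**2], [dt]])        # control model (acceleration input)
H = np.array([[1.0, 0.0]])                 # we only measure position
Q = np.eye(2) * 0.01                       # process noise covariance (assumed)
R = np.array([[0.5]])                      # measurement noise covariance (assumed)

x = np.array([[0.0], [1.0]])               # initial state: position 0, velocity 1
P = np.eye(2)                              # initial state covariance

def predict(x, P, u):
    """Time update: apply the control vector u (acceleration) to the state."""
    x = F @ x + B @ u
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """State update: correct the prediction using the measurement z."""
    y = z - H @ x                          # innovation (measurement residual)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One predict/update cycle with a noisy position reading.
x, P = predict(x, P, u=np.array([[0.0]]))  # predicted position is 1.0
x, P = update(x, P, z=np.array([[1.2]]))   # sensor says 1.2; estimate moves toward it
print(x.ravel())                           # corrected [position, velocity]
```

Note how the corrected position lands between the prediction (1.0) and the measurement (1.2), weighted by the Kalman gain, and the position uncertainty P[0,0] shrinks after the update — that is the “filtering” in action.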

References —

  1. As of Aug 2015, a neat explanation — http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/ 
    I’m going to have to paraphrase this to see if I get it exactly right.
  2. Something that succinctly puts across Kalman filter’s usage in tracking objects in an image feed. http://www.cs.cornell.edu/Courses/cs4758/2011sp/final_projects/spring_2011/Xu_Chang.pdf