# Day 94 (DL) — Kalman Filter (contd.)

Apr 13 · 3 min read

This blog continues the previous article, which introduced the Kalman Filter. We'll start with the known external influence, using a self-driving car as a running example for clarity.

External influence (known): While driving, the car's movement is controlled based on sensor information. A nice example is Tesla's Autopilot: it automatically issues a sudden brake when a pedestrian unexpectedly crosses. Based on the external activity, certain commands are issued (braking, in our case). This auxiliary knowledge can be treated as a correction to the initial prediction we made earlier.

Now adding the external influence by applying the kinematics rule, we have

new_position: Pk = Pk-1 + Δt * Vk-1 + (1/2) * a * Δt²

new_velocity: Vk = Vk-1 + a * Δt

where Pk-1 and Vk-1 are the old position and velocity, Δt is the time elapsed, and a is the acceleration.

The acceleration terms denote the external factor. Rewriting in terms of matrices, with Fk the state-transition matrix, Bk the control matrix, and uk the control vector (here, the acceleration a):

Xk = Fk * Xk-1 + Bk * uk
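The prediction step above can be sketched in NumPy. This is a minimal illustration, not a full filter: the time step, the braking deceleration, and the starting state are made-up values chosen for the example.

```python
import numpy as np

dt = 0.1           # time step in seconds (assumed for illustration)
a = -4.0           # known control input: braking deceleration in m/s^2 (assumed)

# State-transition matrix Fk for the state [position, velocity]
F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Control matrix Bk mapping the acceleration into the state
B = np.array([[0.5 * dt**2],
              [dt]])

x_prev = np.array([[10.0],    # old position Pk-1 (m), assumed
                   [5.0]])    # old velocity Vk-1 (m/s), assumed

# New estimate: Xk = Fk * Xk-1 + Bk * uk
x_pred = F @ x_prev + B * a
print(x_pred.ravel())
```

Note how the first row of the product reproduces the kinematics equation for position (Δt * V and ½ * a * Δt² terms) and the second row reproduces the one for velocity.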

External influence (unknown): Not everything is predictable; uncertainty can also be introduced by external forces such as a hailstorm. Such disturbances are not known in advance, yet they still need to be taken into account.

We have already accommodated uncertainty in the form of a covariance; we now add this additional influence to it as well:

Ck = Fk * Ck-1 * Trans(Fk) + Qk, where Qk is the covariance of the unknown noise

New estimate = prediction from the previous state + known external influence

Xk = Fk * Xk-1 + Bk * uk

New uncertainty = propagated old uncertainty + unknown external influence

Ck = Fk * Ck-1 * Trans(Fk) + Qk
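The uncertainty update can be sketched the same way. The previous covariance and the process-noise covariance Qk below are assumed values for illustration only.

```python
import numpy as np

dt = 0.1
# State-transition matrix Fk, as in the prediction step
F = np.array([[1.0, dt],
              [0.0, 1.0]])

C_prev = np.eye(2)        # old uncertainty Ck-1 (assumed identity)
Q = 0.05 * np.eye(2)      # unknown-noise covariance Qk (assumed value)

# New uncertainty: Ck = Fk * Ck-1 * Trans(Fk) + Qk
C_pred = F @ C_prev @ F.T + Q
print(C_pred)
```

Propagating through F couples position and velocity uncertainty (the off-diagonal terms), and adding Qk inflates the whole covariance to account for the unmodelled disturbance.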

Further distillation: Sensor readings may come from various sources and be expressed in different units or scales than our variables of interest (position & velocity). The mapping from the state space into the sensor (measurement) space can be expressed as another matrix transformation Hk, applied to both the new estimate and the new uncertainty.

X(expected) = Hk * Xk, C(expected) = Hk * Ck * Trans(Hk)
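A small sketch of this mapping, assuming (purely for illustration) a sensor that reads position only, so Hk simply drops the velocity component:

```python
import numpy as np

# Hk: sensor reads position only, discarding velocity (assumption for this example)
H = np.array([[1.0, 0.0]])

x_pred = np.array([[10.48],          # predicted state Xk (from the earlier sketch)
                   [4.6]])
C_pred = np.array([[1.06, 0.1],      # predicted uncertainty Ck
                   [0.1, 1.05]])

x_expected = H @ x_pred              # X(expected) = Hk * Xk
C_expected = H @ C_pred @ H.T        # C(expected) = Hk * Ck * Trans(Hk)
print(x_expected, C_expected)
```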

Since sensors are now in the picture, we cannot ignore the noise associated with them. Noise is the gap between the reading we receive and the actual location. The sensor noise is modelled as a Gaussian whose mean is the reading z and whose covariance is denoted Rk.

Two Gaussian distributions: We now have two Gaussians at hand: (1) mean = X(expected), covariance = C(expected); (2) mean = z, covariance = Rk. By merging (multiplying) these two Gaussians, we can locate the most likely state. The new state is another Gaussian, with mean Xnew and covariance Cnew.
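In one dimension the merge has a closed form, which makes the idea easy to see before moving to matrices. This sketch uses the illustrative numbers from earlier; the scalar k below plays the role of the Kalman gain.

```python
import numpy as np

def merge_gaussians(mu1, var1, mu2, var2):
    """Renormalised product of two 1-D Gaussians: returns the fused mean/variance."""
    k = var1 / (var1 + var2)          # scalar Kalman gain: how much to trust the sensor
    mu_new = mu1 + k * (mu2 - mu1)    # pulled toward the measurement
    var_new = (1.0 - k) * var1        # fused variance shrinks below both inputs
    return mu_new, var_new

# Prediction (mean 10.48, variance 1.06) vs. sensor reading (mean 10.2, variance 0.25)
mu, var = merge_gaussians(10.48, 1.06, 10.2, 0.25)
print(mu, var)
```

The fused mean lands between the two inputs, closer to the less uncertain one, and the fused variance is smaller than either: combining two noisy estimates always reduces uncertainty.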

In order to find this new state, we need to multiply the two Gaussians. Recall the formula for a single (univariate) Gaussian distribution:

N(x; μ, σ²) = (1 / √(2πσ²)) * exp(−(x − μ)² / (2σ²))

Please refer to the link below for the multiplication of the two Gaussians and the substitution with respect to our variables.

https://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
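Carrying that Gaussian multiplication through in matrix form gives the standard update equations. The sketch below is one common formulation under the notation of this post; the Hk, Rk, z, and predicted-state values are the same illustrative assumptions as before.

```python
import numpy as np

def kalman_update(x_pred, C_pred, z, H, R):
    """One measurement update: fuse the predicted Gaussian with the sensor Gaussian."""
    S = H @ C_pred @ H.T + R                # innovation covariance: C(expected) + Rk
    K = C_pred @ H.T @ np.linalg.inv(S)     # Kalman gain (matrix analogue of k above)
    x_new = x_pred + K @ (z - H @ x_pred)   # Xnew: mean pulled toward the reading
    C_new = C_pred - K @ H @ C_pred         # Cnew: fused covariance
    return x_new, C_new

H = np.array([[1.0, 0.0]])                  # position-only sensor (assumption)
R = np.array([[0.25]])                      # sensor-noise covariance Rk (assumed)
z = np.array([[10.2]])                      # sensor reading mean (assumed)
x_pred = np.array([[10.48], [4.6]])
C_pred = np.array([[1.06, 0.1], [0.1, 1.05]])

x_new, C_new = kalman_update(x_pred, C_pred, z, H, R)
print(x_new.ravel())
```

With a one-dimensional measurement, the position component of this result matches the 1-D Gaussian merge exactly; the gain also nudges the velocity estimate, because the covariance couples the two state variables.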

## Nerd For Tech

NFT is an Educational Media House. Our mission is to bring the invaluable knowledge and experiences of experts from all over the world to the novice. To stay up to date on other topics, follow us on LinkedIn. https://www.linkedin.com/company/nerdfortech

Written by

AI Enthusiast

