This blog is a continuation of the previous article, which introduced the Kalman Filter. We’ll start with the known external influence, using a self-driving car as an example for clarity.
External influence (known): While driving, the car’s movement is controlled based on sensor information. A good example is Tesla’s Autopilot: it automatically applies a sudden brake when a pedestrian crosses unexpectedly. Based on the external activity, certain commands are issued (a brake, in our case). This auxiliary knowledge can be treated as a correction to the initial prediction we made before.
Now, adding the external influence by applying the kinematics rules, we have

new_position (Pk) = old_position (Pk-1) + Δt * old_velocity (Vk-1) + (1/2) * a * Δt²
new_velocity (Vk) = old_velocity (Vk-1) + a * Δt
The acceleration terms denote the external factor. Rewriting in terms of matrices,

Xk = Fk * Xk-1 + Bk * uk

where Bk = [Δt²/2, Δt]ᵀ is the control matrix and uk = a is the control input.
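The matrix form of the prediction can be sketched in a few lines of numpy. This is a minimal illustration; the time step, acceleration, and starting state below are made-up values, not from the article:

```python
import numpy as np

dt = 0.1   # time step in seconds (illustrative)
a = -2.0   # commanded acceleration, e.g. braking, in m/s^2 (illustrative)

# State-transition matrix Fk: position gains dt * velocity per step
F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Control matrix Bk: maps the acceleration into position and velocity changes
B = np.array([[0.5 * dt**2],
              [dt]])

# Previous state Xk-1 = [position; velocity] (illustrative values)
x_prev = np.array([[10.0],
                   [5.0]])

# Xk = Fk * Xk-1 + Bk * uk
x_pred = F @ x_prev + B * a
```

The result matches applying the two kinematics equations component by component.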
External influence (unknown): Sometimes not everything is predictable; uncertainty can also be introduced by external forces such as a hailstorm. Such disturbances may not be known in advance, yet they need to be taken into account.
We have already accommodated uncertainty in terms of covariance; we now include this additional influence there as well:

Ck = Fk * Ck-1 * Trans(Fk) + Qk, where Qk is the covariance of the unknown noise.
New estimate = prediction from the previous estimate + known external influence
Xk = Fk * Xk-1 + Bk * uk
New uncertainty = prediction from the old uncertainty + unknown external influence
Ck = Fk * Ck-1 * Trans(Fk) + Qk
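The two prediction equations above can be sketched together. The previous covariance and the process-noise magnitude below are illustrative assumptions, not values from the article:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Previous uncertainty Ck-1 (illustrative: unit variance, no correlation)
C_prev = np.eye(2)

# Qk: covariance of the unknown external influence (assumed small here)
Q = 0.01 * np.eye(2)

# Ck = Fk * Ck-1 * Trans(Fk) + Qk
C_pred = F @ C_prev @ F.T + Q
```

Note that even with Qk = 0, the uncertainty grows: multiplying by Fk spreads the velocity uncertainty into the position estimate.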
Further distillation: The external information can come from various sources for the different variables of interest (position & velocity). Combining all those details, we can treat them as another matrix transformation, Hk, that gets applied to the new estimate and the new uncertainty.
X(expected) = Hk * Xk, C(expected) = Hk * Ck * Trans(Hk)
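As a small sketch of the Hk transformation: if the sensor reports only position (an assumption for illustration, the article does not fix Hk), then Hk simply picks out the position component:

```python
import numpy as np

# Hk: measurement matrix, here assumed to observe position only
H = np.array([[1.0, 0.0]])

# Predicted state and covariance (illustrative values)
x_pred = np.array([[10.49],
                   [4.8]])
C_pred = np.array([[1.02, 0.1],
                   [0.1, 1.01]])

# X(expected) = Hk * Xk,  C(expected) = Hk * Ck * Trans(Hk)
x_exp = H @ x_pred
C_exp = H @ C_pred @ H.T
```

The result lives in the sensor’s space: a scalar expected reading and a scalar expected variance, directly comparable to what the sensor returns.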
Since sensors have come into the picture, we cannot exclude the noise associated with them. Noise is the gap between the reading we receive and the actual intended location. The covariance of the sensor noise is denoted by Rk, and the mean of its distribution is the reading z.
Two Gaussian distributions: Now we have two Gaussian distributions at hand: (1) mean = X(expected), covariance = C(expected); (2) mean = z, covariance = Rk. By merging (multiplying) these two Gaussians, we can locate the most likely point. The new state will be another Gaussian with mean Xnew and covariance Cnew.
In order to find this new state, we need to multiply the two Gaussians. Recall the formula for a single (one-dimensional) Gaussian distribution:

N(x; μ, σ²) = (1 / (σ * sqrt(2π))) * exp(−(x − μ)² / (2σ²))
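In one dimension, the product of the two Gaussians can be sketched directly; the factor K below is what later becomes the Kalman gain. The numeric inputs are illustrative, not from the article:

```python
# Fuse two 1-D Gaussians N(mu0, var0) and N(mu1, var1).
# Here (mu0, var0) plays the role of (X(expected), C(expected))
# and (mu1, var1) the role of the sensor reading (z, Rk).
def fuse(mu0, var0, mu1, var1):
    K = var0 / (var0 + var1)        # weight of the measurement (Kalman gain)
    mu_new = mu0 + K * (mu1 - mu0)  # Xnew: shifted toward the measurement
    var_new = (1.0 - K) * var0      # Cnew: always smaller than var0
    return mu_new, var_new

# Illustrative values: predicted position 10.49 (var 1.02), sensor says 10.0 (var 0.5)
mu_new, var_new = fuse(10.49, 1.02, 10.0, 0.5)
```

Two properties worth noticing: the fused mean lies between the two input means, and the fused variance is smaller than either input variance, so merging a measurement always tightens the estimate.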
Please refer to the link below for the full multiplication of the Gaussians and the substitution with respect to our variables.