Video length is 8:36

An Optimal State Estimator Algorithm | Understanding Kalman Filters, Part 4

From the series: Understanding Kalman Filters

Melda Ulusoy, MathWorks

Discover the set of equations you need to implement a Kalman filter algorithm. You’ll learn how to perform the prediction and update steps of the Kalman filter algorithm, and you’ll see how a Kalman gain incorporates both the predicted state estimate (a priori state estimate) and the measurement in order to calculate the new state estimate (a posteriori state estimate).

Download this virtual lab to study linear and extended Kalman filter design with interactive exercises.

Published: 1 May 2017

In this video, we'll discuss the set of equations that you need to implement the Kalman filter algorithm. Let's revisit the example that we introduced in the previous video. You join a competition to win the big prize. You're asked to design a self-driving car that needs to drive one kilometer on 100 different terrains.

In each trial, the car must stop as close as possible to the finish line. At the end of the competition, the average final position is computed for each team, and the owner of the car with the smallest error variance and an average final position closest to one kilometer gets the big prize.

In that example, we also showed the car dynamics and the car model for our single-state system, and we discussed process and measurement noises along with the covariances. Finally, we said that you could win the competition by using a Kalman filter, which computes an optimal, unbiased estimate of the car's position with minimal variance.

This optimal estimate is found by multiplying the prediction and measurement probability density functions together, scaling the result, and computing the mean of the resulting probability density function. Computationally, the multiplication of these two probability density functions relates to the discrete Kalman filter equation shown here.

Does this ring a bell? Doesn't it look similar to the state observer equation that we discussed in previous videos? Actually, a Kalman filter is a type of state observer, but it is designed for stochastic systems. Here is how the Kalman filter equation relates to what we've discussed with the probability density functions.
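For readers following along without the video, the discrete Kalman filter equation referred to here has the same structure as the state observer equation from the earlier videos. In standard textbook notation (which may differ cosmetically from the on-screen symbols), for a linear system with state matrix A, input matrix B, and output matrix C, it reads:

```latex
\hat{x}_k = A\hat{x}_{k-1} + Bu_k + K_k\left(y_k - C\left(A\hat{x}_{k-1} + Bu_k\right)\right)
```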

The first part predicts the current state by using the state estimate from the previous timestep and the current input. Note that these two state estimates are different from each other. We'll show the predicted state estimate with this notation. This is also called the a priori estimate, since it is calculated before the current measurement is taken.

We can now rewrite the equation like this. The second part of the equation uses the measurement and incorporates it into the prediction to update the a priori estimate. And we'll call the result the a posteriori estimate. You want to win the big prize, right? Then these are the equations you need to run on your car's ECU.
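In the usual hat notation, with a superscript minus marking the a priori (predicted) estimate, this split can be written as:

```latex
\begin{aligned}
\hat{x}_k^- &= A\hat{x}_{k-1} + Bu_k && \text{(a priori estimate: prediction)}\\
\hat{x}_k   &= \hat{x}_k^- + K_k\left(y_k - C\hat{x}_k^-\right) && \text{(a posteriori estimate: update)}
\end{aligned}
```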

Looks a little scary? What if we turn everything upside down? Doesn't change much, does it? Okay, we'll go over the algorithm equations step by step. The Kalman filter is a two-step process. Let's first start with the prediction part. Here, the system model is used to calculate the a priori state estimate and the error covariance P.

For our single-state system, P is the variance of the a priori estimate, and it can be thought of as a measure of uncertainty in the estimated state. This variance comes from the process noise and from propagating the uncertainty in the previous estimate, x hat k minus 1.
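Writing Q for the process noise covariance, the prediction step described above takes the standard form (the video's equations match this up to notation):

```latex
\begin{aligned}
\hat{x}_k^- &= A\hat{x}_{k-1} + Bu_k\\
P_k^-       &= A P_{k-1} A^{T} + Q
\end{aligned}
```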

At the very start of the algorithm, the k minus 1 values for x hat and P come from their initial estimates. The second step of the algorithm uses the a priori estimates calculated in the prediction step and updates them to find the a posteriori estimates of the state and error covariance.

The Kalman gain is calculated such that it minimizes the a posteriori error covariance. Let this bar represent the calculation of x hat k. By weighting the correction term, the Kalman gain determines how heavily the measurement and the a priori estimate contribute to the calculation of x hat k.
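With R denoting the measurement noise covariance, the update step in its standard textbook form is:

```latex
\begin{aligned}
K_k       &= P_k^- C^{T}\left(C P_k^- C^{T} + R\right)^{-1}\\
\hat{x}_k &= \hat{x}_k^- + K_k\left(y_k - C\hat{x}_k^-\right)\\
P_k       &= \left(I - K_k C\right) P_k^-
\end{aligned}
```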

If the measurement noise is small, the measurement is trusted more and contributes to the calculation of x hat k more than the a priori state estimate does. In the opposite case, where the error in the a priori estimate is small, the a priori estimate is trusted more, and the computation of x hat k comes mostly from this estimate.

We can also show this mathematically by looking at two extreme cases. Assume that in the first case, the measurement covariance R is 0. To calculate the Kalman gain, we take its limit as R goes to 0. We plug in 0 for R and see that these two terms cancel each other out.

As R goes to 0, the Kalman gain approaches the inverse of C, which is equal to 1 in our system. Plugging K, the inverse of C, into the a posteriori state estimate shows that x hat k is equal to y k. So the calculation comes from the measurement only, as expected.
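Worked out explicitly for our scalar system (where the matrix quantities reduce to scalars), the limit is:

```latex
\lim_{R \to 0} K_k
  = \lim_{R \to 0} \frac{P_k^- C^{T}}{C P_k^- C^{T} + R}
  = \frac{P_k^- C^{T}}{C P_k^- C^{T}}
  = C^{-1},
\qquad
\hat{x}_k = \hat{x}_k^- + C^{-1}\left(y_k - C\hat{x}_k^-\right) = C^{-1} y_k = y_k \ \text{(since } C = 1\text{)}
```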

Now, if we update our plot, we can show the measurement with an impulse function, which is shown with this orange vertical line. Note that the variance in the measurement is 0, since R goes to 0. We found that the a posteriori estimate is equal to the measurement, so we can show it with the same impulse function.

On the other hand, if the a priori error covariance is close to 0, then the Kalman gain is found to be 0. Therefore, the contribution of this term to x hat k is ignored, and the computation of x hat k comes from the a priori state estimate. On the plot, we'll show the a priori state estimate with an impulse function, which has zero variance.
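Written out the same way, the second extreme case gives:

```latex
\lim_{P_k^- \to 0} K_k
  = \lim_{P_k^- \to 0} \frac{P_k^- C^{T}}{C P_k^- C^{T} + R}
  = 0,
\qquad
\hat{x}_k = \hat{x}_k^- + 0 \cdot \left(y_k - C\hat{x}_k^-\right) = \hat{x}_k^-
```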

And since the a posteriori estimate is equal to the a priori estimate, we'll show it with the same impulse function. Once we've calculated the update equations, in the next timestep the a posteriori estimates are used to predict the a priori estimates, and the algorithm repeats itself.

Notice that to estimate the current state, the algorithm doesn't need all the past information. It only needs the estimated state and error covariance matrix from the previous timestep and the current measurement. This is what makes the Kalman filter recursive. Now that you know the set of equations needed to implement the Kalman filter algorithm, what are you going to do with the big prize when you win the competition?
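As a rough illustration of how little the algorithm needs to carry between timesteps, here is a minimal sketch in Python for the scalar car example. This is not MathWorks code; the model parameters a, b, c and the noise covariances q, r are placeholders standing in for the car model from the earlier videos.

```python
def kalman_step(x_prev, p_prev, u, y, a, b, c, q, r):
    """One predict-update cycle of a scalar Kalman filter.

    Only the previous a posteriori estimate (x_prev, p_prev), the current
    input u, and the current measurement y are needed -- this is what
    makes the filter recursive.
    """
    # Prediction: a priori state estimate and error covariance
    x_pred = a * x_prev + b * u
    p_pred = a * p_prev * a + q

    # Update: Kalman gain, a posteriori state estimate and error covariance
    k_gain = p_pred * c / (c * p_pred * c + r)
    x_new = x_pred + k_gain * (y - c * x_pred)
    p_new = (1 - k_gain * c) * p_pred

    return x_new, p_new
```

Starting from initial guesses for the state and error covariance, you would call this once per timestep with the latest input and measurement.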

If you can't decide, here is a suggestion. Note that the Kalman filter is also referred to as a sensor fusion algorithm. So you could buy an additional sensor, such as an IMU, and experiment to see whether using multiple sensors improves your self-driving car's estimated position. If you have two measurements, the dimensions of the y, C, and K matrices will change as shown here.
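For the two-measurement case with our single state, the quantities the video calls out change dimension roughly as follows (standard sizes for one state and two outputs):

```latex
y_k \in \mathbb{R}^{2\times 1},\qquad
C \in \mathbb{R}^{2\times 1},\qquad
K_k \in \mathbb{R}^{1\times 2}
```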

But basically, you'll still follow the same logic to compute the optimal state estimate. On the plot, we'll have one more probability density function for the measurement from the IMU. And this time, we'll be multiplying three probability density functions together to find the optimal estimate of the car's position.

So far, we've had a linear system, but what if you have a nonlinear system and want to use a Kalman filter? In the next video, we'll discuss nonlinear state estimators.