Kalman filter equations and extended Kalman filter
Kalman filter
The Kalman filter estimates the state of a system by combining two kinds of information: a dynamic model that describes how we expect the world to behave, and measurements that provide noisy observations of the state. The principle is to iterate the following steps:
- Take measurement $z[n]$.
- Estimate the current state $x[n,n]$ based on the measurement and the values predicted by the dynamic model.
- Predict the next state $x[n+1,n]$ based on the dynamic model equations.
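To make the cycle concrete, the following is a minimal sketch (Python with NumPy assumed, all numbers made up) that estimates a constant scalar from noisy measurements; the dynamic model is simply "the value does not change". The update and gain formulas used here are derived in the sections below.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 10.0
r = 0.5 ** 2                                    # measurement noise variance (assumed known)
z = true_value + rng.normal(0.0, 0.5, size=50)  # simulated noisy measurements

x = 0.0     # initial state estimate
p = 100.0   # initial estimate variance (large: little confidence in the initial guess)

for n in range(len(z)):
    # Take measurement z[n] and update the estimate using the Kalman gain.
    k = p / (p + r)
    x = x + k * (z[n] - x)
    p = (1.0 - k) * p
    # Predict the next state: a constant does not change, so x and p carry over as-is.

print(x)    # close to 10.0
```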
Measurements
Each measurement $z[n]$ has some uncertainty. The Kalman filter assumes that the measurement error is normally distributed (Gaussian). We need to know the error variance $r$, either from the specifications of the sensor or through calibration. Usually we measure multiple quantities, and it’s convenient to store them in a vector; the measurement error is then specified as a covariance matrix $R$.
In some applications the quantities we measure are not the state variables themselves; for example, we may want to combine measurements from multiple sensors that observe the same physical quantity. In the standard Kalman filter, each measurement is a linear combination of the state variables, so the relation between the state and the measurements can be represented as a matrix $H$ that we call the observation matrix.
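As an illustration, suppose the state contains position and velocity, and two independent sensors both measure the position. A sketch of the corresponding $H$ and measurement covariance $R$ (with made-up variances) might look like this:

```python
import numpy as np

# State vector: [position, velocity].
# Two sensors, both observing the position only.
H = np.array([[1.0, 0.0],
              [1.0, 0.0]])

# Measurement noise covariance: independent sensors with (hypothetical)
# variances 0.25 and 0.64 taken from the sensor specifications.
R = np.diag([0.25, 0.64])

x = np.array([2.0, 0.5])   # current state estimate
z_predicted = H @ x        # expected measurements: [2.0, 2.0]
```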
State update equation
State update equations describe how to combine the measurements with the estimate obtained from our dynamic model to produce a better state estimate.
The current state could be estimated as the average of all the previous measurements, but we don’t want to store all the previous measurements. It’s possible to derive the following recursive equation:

$x[n,n] = x[n,n-1] + \frac{1}{n} \left( z[n] - x[n,n-1] \right)$
This is a weighted sum of the measurement and $x[n,n-1]$, which is our prediction of what the measurement would be. Here the measurement is weighted by $1/n$. We make this weight a time-dependent variable and call it the Kalman gain $K[n]$, which gives the state update equation

$x[n,n] = x[n,n-1] + K[n] \left( z[n] - x[n,n-1] \right)$

The state update equation interpolates between the predicted value and the measurement.
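Before moving on, a quick numerical check (a sketch, assuming NumPy) confirms that the $1/n$ recursion above reproduces the ordinary average without storing past measurements:

```python
import numpy as np

z = np.array([3.1, 2.9, 3.4, 3.0, 2.8])   # some measurements

x = 0.0
for n, zn in enumerate(z, start=1):
    x = x + (1.0 / n) * (zn - x)          # recursive update with weight 1/n

print(np.isclose(x, z.mean()))            # True: identical to the batch average
```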
The Kalman gain is computed from the uncertainty of our measurements and of our prediction. A low value gives more weight to the prediction and produces a smoother estimate; a high value gives more weight to the measurement.
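As a small sketch of this interpolation, with made-up numbers:

```python
x_pred = 4.0   # prediction x[n, n-1] from the dynamic model
z_meas = 5.0   # measurement z[n]

for K in (0.1, 0.5, 0.9):
    x_est = x_pred + K * (z_meas - x_pred)
    print(K, x_est)   # 0.1 -> 4.1 (trusts the model), 0.9 -> 4.9 (trusts the measurement)
```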
State extrapolation equation
The state extrapolation equation predicts the next state according to our dynamic model. For example, a one-dimensional accelerating movement can be modeled using three variables: position $p$, velocity $v$, and acceleration $a$. Assuming the acceleration stays constant over the time step $\Delta t$, the predictions are

$p[n+1,n] = p[n,n] + v[n,n]\,\Delta t + \frac{1}{2} a[n,n]\,\Delta t^2$, $\quad v[n+1,n] = v[n,n] + a[n,n]\,\Delta t$, $\quad a[n+1,n] = a[n,n]$
It’s convenient to place the variables in a state vector. In the standard Kalman filter, the dynamic model has to be linear, meaning that the state extrapolation equation can be represented as a matrix multiplication. The one-dimensional accelerating movement above is described by the state transition matrix

$F = \begin{bmatrix} 1 & \Delta t & \frac{1}{2}\Delta t^2 \\ 0 & 1 & \Delta t \\ 0 & 0 & 1 \end{bmatrix}$
We’ll make the state extrapolation equation more generic by including a control input $u[n]$ and a control matrix $G$ that maps it to the state:

$x[n+1,n] = F\,x[n,n] + G\,u[n]$

The control input may contain information that we don’t predict using the dynamic model, for example the steering of the vehicle.
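A sketch of the extrapolation step for the constant-acceleration model above (NumPy assumed, hypothetical time step of 0.1 s):

```python
import numpy as np

dt = 0.1   # hypothetical time step in seconds

# State: [position, velocity, acceleration].
F = np.array([[1.0, dt,  0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])

x = np.array([0.0, 2.0, 1.0])   # current estimate x[n, n]
x_next = F @ x                  # prediction x[n+1, n]

# With a control input u[n] and control matrix G the prediction would be
# x_next = F @ x + G @ u; the control term is omitted here for brevity.
```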
Kalman gain
Our predictions have uncertainty too, since our dynamic model is just an approximation. The Kalman filter provides an estimate of the prediction error, $p$ (or generally, the covariance matrix $P$), which depends on the time instant and is updated using the covariance equations below. It can be initialized to the variance of the initial estimate of $x$.
In the scalar case the Kalman gain is

$K[n] = \frac{p[n,n-1]}{p[n,n-1] + r}$

The gain is large (more weight on the measurement) when the prediction error is large or the measurement error is small.
In matrix form, when the observation matrix is included, the equation becomes

$K[n] = P[n,n-1]\,H^T \left( H\,P[n,n-1]\,H^T + R \right)^{-1}$
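A sketch of the gain computation in matrix form (NumPy assumed, made-up covariances, observing only the position of a position-velocity state):

```python
import numpy as np

# Prediction covariance P[n, n-1] for a [position, velocity] state.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])

H = np.array([[1.0, 0.0]])        # only the position is observed
R = np.array([[0.25]])            # measurement noise covariance

S = H @ P @ H.T + R               # innovation covariance
K = P @ H.T @ np.linalg.inv(S)    # Kalman gain, shape (2, 1)
```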
Covariance extrapolation equation
In the same way that we extrapolate the state using the dynamic model, we extrapolate the prediction error using our uncertainty about the dynamic model. This uncertainty can be described, for example, using the process noise variance $q$; for a model whose prediction simply carries the estimate over, the scalar extrapolation is

$p[n+1,n] = p[n,n] + q$
Set the process noise variance to a low value, e.g. 0.0001, if the model is accurate. If the model is not accurate (for example, modeling an accelerating motion with a constant-velocity model), too low a value will cause a lag error. With multiple variables, we describe the process noise by its covariance matrix $Q$, and the covariance extrapolation equation becomes

$P[n+1,n] = F\,P[n,n]\,F^T + Q$
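A sketch of the covariance extrapolation, first in scalar form for a model whose prediction simply carries the estimate over, then in matrix form (NumPy assumed, made-up values):

```python
import numpy as np

# Scalar form: p[n+1, n] = p[n, n] + q.
p = 1.0
q = 0.0001
p_next = p + q

# Matrix form: P[n+1, n] = F P[n, n] F^T + Q.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Q = 0.0001 * np.eye(2)
P_next = F @ P @ F.T + Q
```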
Covariance update equation
The estimate uncertainty is updated based on the Kalman gain:

$p[n,n] = \left( 1 - K[n] \right) p[n,n-1]$

The larger the Kalman gain, the more the uncertainty of the next estimate shrinks.
The matrix form takes the observation matrix into account:

$P[n,n] = \left( I - K[n]\,H \right) P[n,n-1]$
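A sketch of the covariance update in both forms (NumPy assumed, continuing with made-up values):

```python
import numpy as np

# Scalar form: p[n, n] = (1 - K) * p[n, n-1].
p_pred, k = 2.0, 0.8
p_post = (1.0 - k) * p_pred

# Matrix form: P[n, n] = (I - K H) P[n, n-1].
P_pred = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
P_post = (np.eye(2) - K @ H) @ P_pred
```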
Extended Kalman filter
The extended Kalman filter extends this concept to nonlinear dynamic models and output functions:
- Instead of transforming the state using the matrix $F$, use a possibly non-linear state transition function: $x[n+1,n] = f(x[n,n], u[n])$
- Instead of relating the state to the measurements with the matrix $H$, use a possibly non-linear output function: $z = h(x)$
The idea is to linearize the dynamic model and output function around each estimate. This means numerically finding the partial derivatives of the functions at the estimate, and using the Kalman filter equations as if the functions were linear.
- Replace $H$ in the covariance update and Kalman gain equations with the first derivative, or the Jacobian, of $h$.
- Replace $F$ in the covariance extrapolation equation with the first derivative, or the Jacobian, of $f$.
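Putting the pieces together, here is a minimal sketch of one extended Kalman filter cycle with the Jacobians approximated by finite differences (NumPy assumed; the functions $f$ and $h$ and all numbers are hypothetical):

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Approximate the Jacobian of func at x by central differences."""
    y = func(x)
    J = np.zeros((y.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - func(x - dx)) / (2 * eps)
    return J

# Hypothetical nonlinear dynamic model and output function.
dt = 0.1
def f(x):                      # state transition: unit-speed turning motion
    px, py, heading = x
    return np.array([px + dt * np.cos(heading),
                     py + dt * np.sin(heading),
                     heading + 0.05])

def h(x):                      # output: range from the origin
    return np.array([np.hypot(x[0], x[1])])

x = np.array([1.0, 0.0, 0.0])  # state estimate
P = np.eye(3)                  # estimate covariance
Q = 0.01 * np.eye(3)           # process noise covariance
R = np.array([[0.1]])          # measurement noise covariance
z = np.array([1.2])            # incoming measurement

# Update: linearize h around the current estimate.
Hj = numerical_jacobian(h, x)
K = P @ Hj.T @ np.linalg.inv(Hj @ P @ Hj.T + R)
x = x + K @ (z - h(x))
P = (np.eye(3) - K @ Hj) @ P

# Predict: linearize f around the updated estimate.
Fj = numerical_jacobian(f, x)
x = f(x)
P = Fj @ P @ Fj.T + Q
```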
Comments