Hankyu Kim · Filter · 3 min read
Kalman Filter (Part 2)
This post explains the physical meaning of the error covariance P and the Kalman gain K, showing how the Kalman filter adaptively balances model prediction and sensor measurements.
Introduction
In the previous post, we covered the overall structure of the Kalman filter and clarified the roles of A, H, Q, and R.
With those variables understood, only two components remain:
- Error covariance P
- Kalman gain K
Once these are clear, the conceptual understanding of the Kalman filter is complete.
What Is P?
P is the error covariance.
Despite its intimidating name, its meaning can be summarized in a single sentence:
P represents the variance of the estimation error, assuming a Gaussian distribution.
In other words, P quantifies how uncertain the current estimate is.
Why Covariance Appears
In the previous post, we assumed:
- Process noise w follows a Gaussian distribution: w ~ N(0, Q)
- Measurement noise v follows a Gaussian distribution: v ~ N(0, R)
If both the system and sensors contain Gaussian noise,
then the estimation error must also follow a Gaussian distribution.
Computing P is simply the process of tracking the spread of that error distribution.
Covariance Update
The covariance update equation is:

P = P⁻ - K H P⁻

Here:
- P⁻ is the predicted covariance
- K is the Kalman gain
- H is the measurement matrix

Loosely speaking, this can be interpreted as:

updated uncertainty = predicted uncertainty - (the portion removed by the measurement)

This is nothing more than arithmetic.
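To show just how plain that arithmetic is, here is the update in the scalar case. All numbers below are illustrative choices of mine, not values from the post:

```python
# Scalar version of the covariance update P = P⁻ - K H P⁻.
# P_pred, K, and H are made-up illustrative values.
P_pred = 4.0   # predicted uncertainty P⁻
K = 0.8        # Kalman gain
H = 1.0        # scalar measurement model

P = P_pred - K * H * P_pred
print(P)  # roughly 0.8: incorporating the measurement shrinks the uncertainty
```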
What Is K?
K is the Kalman gain, the core of the Kalman filter.
Once A, H, Q, R, and the previous covariance P are known, the Kalman gain is computed as:

K = P⁻ Hᵀ (H P⁻ Hᵀ + R)⁻¹, where P⁻ = A P Aᵀ + Q is the predicted covariance.
Every term in this equation is already defined.
There is no optimization loop here — just matrix operations.
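To make that concrete, here is a minimal sketch in NumPy. The model (a position/velocity state observed by a position-only sensor) and all numbers are my own illustrative choices, not from the post:

```python
import numpy as np

# 2-state example: state = (position, velocity), sensor observes position only.
P_pred = np.array([[2.0, 0.5],
                   [0.5, 1.0]])   # predicted error covariance P⁻ (illustrative)
H = np.array([[1.0, 0.0]])        # measurement matrix: observe position
R = np.array([[1.0]])             # measurement noise covariance

# K = P⁻ Hᵀ (H P⁻ Hᵀ + R)⁻¹ : plain matrix operations, no iteration.
S = H @ P_pred @ H.T + R          # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)

print(K)                          # one gain entry per state component
```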
Why Kalman Gain Matters
The Kalman gain determines how much we trust:
- The model prediction
- The sensor measurement
This trade-off happens automatically through K.
Simplified View of K
If we treat the denominator as a constant, the equation becomes:

K ∝ P⁻ Hᵀ

Now the physical meaning becomes clear: the larger the uncertainty P⁻ in the prediction, the larger the gain, and the more weight the update gives to the measurement.
Connection to the State Update
Recall the state update equation:

x̂ = x̂⁻ + K (z - H x̂⁻)

In the scalar case with H = 1, this can be rearranged as:

x̂ = (1 - K) x̂⁻ + K z

This equation should look very familiar.
It is mathematically identical to a low pass filter: a weighted average of the previous estimate and the new measurement.
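A quick numeric sketch of that equivalence, with a gain and values chosen arbitrarily for illustration:

```python
# Scalar case, H = 1: the Kalman update and the low-pass-filter form
# are the same expression rearranged.
K = 0.3        # a fixed gain, for illustration only
x_hat = 10.0   # predicted estimate x̂⁻
z = 14.0       # new measurement

kalman_form = x_hat + K * (z - x_hat)   # x̂⁻ + K (z - x̂⁻)
lpf_form = (1 - K) * x_hat + K * z      # (1 - K) x̂⁻ + K z

print(kalman_form, lpf_form)            # identical results
```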
Interpretation of R (Measurement Noise)
R represents sensor noise: the measurement noise covariance.
From the Kalman gain equation K = P⁻ Hᵀ (H P⁻ Hᵀ + R)⁻¹:
- If R increases → K decreases
This means:
When sensor noise is large, the filter trusts the sensor less.
Summary: large R → small K → lean on the model prediction.
This matches physical intuition perfectly.
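A scalar sketch (H = 1, with illustrative numbers of my own) makes the R dependence visible:

```python
# With H = 1, the Kalman gain reduces to K = P⁻ / (P⁻ + R).
def gain(P_pred, R):
    return P_pred / (P_pred + R)

print(gain(2.0, 0.1))   # small sensor noise -> K near 1: trust the sensor
print(gain(2.0, 10.0))  # large sensor noise -> K near 0: trust the model
```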
Interpretation of Q (Process Noise)
Q represents system uncertainty: the process noise covariance.
From the covariance prediction P⁻ = A P Aᵀ + Q:
- If Q increases → P⁻ increases
- If P⁻ increases → K increases
Summary: large Q → large P⁻ → large K.
When the model is unreliable, the filter relies more on sensor data.
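The same scalar sketch (now with A = 1 as well, and illustrative numbers) shows Q pushing the gain up:

```python
# With A = 1 the prediction is P⁻ = P + Q; with H = 1 the gain is
# K = P⁻ / (P⁻ + R). Values are illustrative only.
def gain_from_q(P, Q, R):
    P_pred = P + Q               # covariance prediction step
    return P_pred / (P_pred + R)

R = 1.0
for Q in (0.01, 1.0, 100.0):
    print(Q, gain_from_q(0.5, Q, R))  # larger Q -> larger K
```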
Why This Is Intuitive
- High sensor noise → trust the model
- High system uncertainty → trust the sensor
The Kalman filter encodes this logic directly into its equations.
What appears complex is simply common sense written in matrix form.
Summary
Across two posts, we have covered the Kalman filter completely:
Part 1
- Overall flow of the Kalman filter
- Meaning of A, H, Q, and R
Part 2
- Meaning of the error covariance P
- Physical interpretation of the Kalman gain K
- Direct connection to low pass filtering
If you understand these points, you understand nearly 100% of the Kalman filter conceptually.
The next step is implementation.
In the next post, we will move directly to coding the Kalman filter.