· Hankyu Kim · Filter · 3 min read

Kalman Filter (Part 2)

This post explains the physical meaning of the error covariance P and the Kalman gain K, showing how the Kalman filter adaptively balances model prediction and sensor measurements.

Introduction

In the previous post, we covered the overall structure of the Kalman filter and clarified the roles of A, H, Q, and R.

With those variables understood, only two components remain:

  • Error covariance P
  • Kalman gain K

Once these are clear, the conceptual understanding of the Kalman filter is complete.


What Is P?

P is the error covariance.

Despite its intimidating name, its meaning can be summarized in a single sentence:

P represents the variance of the estimation error, assuming a Gaussian distribution.

In other words, P quantifies how uncertain the current estimate is.


Why Covariance Appears

In the previous post, we assumed:

  • Process noise Q follows a Gaussian distribution
  • Measurement noise R follows a Gaussian distribution

If both the system and sensors contain Gaussian noise,
then the estimation error must also follow a Gaussian distribution.

Computing P is simply the process of tracking the spread of that error distribution.


Covariance Update

The covariance update equation is:

P_k = P_k^- - K H P_k^-

Here:

  • P_k^- is the predicted covariance
  • K is the Kalman gain
  • H is the measurement matrix

Loosely speaking, this can be interpreted as:

P_{\text{new}} = (1 - \text{constant}) \times P_{\text{predicted}}

This is nothing more than arithmetic.
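As a sanity check, the update can be carried out with plain numbers. The values below are made up for illustration; only the formula itself comes from the post:

```python
# Hypothetical scalar example of the covariance update P_k = P_k^- - K * H * P_k^-.
# The numbers are illustrative, not from any real system.
P_pred = 4.0   # predicted error covariance P_k^-
H = 1.0        # measurement matrix (scalar case)
K = 0.8        # Kalman gain, assumed already computed

P_new = P_pred - K * H * P_pred
print(round(P_new, 6))  # 0.8 -- the update shrinks the uncertainty
```

One multiplication and one subtraction per step: the uncertainty drops by exactly the fraction K H of itself.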


What Is K?

K is the Kalman gain, the core of the Kalman filter.

Once A, H, Q, R, and P are known, the Kalman gain is computed as:

K = P_k^- H^T (H P_k^- H^T + R)^{-1}

Every term in this equation is already defined.

There is no optimization loop here — just matrix operations.
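As a sketch, here is that computation with NumPy for a small invented system (the matrices are illustrative, not from the post); in the scalar case the transposes simply drop out:

```python
import numpy as np

# Illustrative 2-state system with a 1-D measurement (numbers are made up).
P = np.array([[2.0, 0.0],
              [0.0, 1.0]])    # predicted error covariance P_k^-
H = np.array([[1.0, 0.0]])    # measurement matrix: observe the first state only
R = np.array([[0.5]])         # measurement noise covariance

# K = P H^T (H P H^T + R)^{-1} -- nothing but matrix products and one inverse
S = H @ P @ H.T + R           # innovation covariance
K = P @ H.T @ np.linalg.inv(S)
print(K)   # column vector: how strongly each state reacts to the measurement
```

No iteration, no optimization: a few products and a single (here 1×1) matrix inverse per time step.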


Why Kalman Gain Matters

The Kalman gain determines how much we trust:

  • The model prediction
  • The sensor measurement

This trade-off happens automatically through K.


Simplified View of K

If we treat all quantities as scalars, so the matrix inverse becomes ordinary division, the equation becomes:

K = \frac{P H}{H P H + R}

Now the physical meaning becomes clear.


Connection to the State Update

Recall the state update equation:

\hat{x}_k = \hat{x}_k^- + K (z_k - H \hat{x}_k^-)

Taking H = 1 (the scalar case), this can be rearranged as:

\hat{x}_k = (1 - K)\hat{x}_k^- + K z_k

This equation should look very familiar.

It is mathematically identical to a first-order low pass filter.
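A quick sketch makes the equivalence concrete. With K held constant, the update is exactly exponential smoothing; the values below are invented for illustration:

```python
# With a constant K, x_k = (1 - K) * x_k^- + K * z_k is an exponential LPF.
K = 0.3
x_hat = 0.0               # initial estimate
measurements = [1.0] * 5  # a constant signal of 1.0

for z in measurements:
    x_hat = (1 - K) * x_hat + K * z

# The estimate converges toward 1.0 at a rate set by K: after n steps,
# x_n = 1 - (1 - K)^n for this constant input.
print(round(x_hat, 4))
```

The only difference from a fixed LPF is that the Kalman filter recomputes K at every step from P and R, so the smoothing factor adapts to the current uncertainty.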


Interpretation of R (Measurement Noise)

R represents sensor noise.

From the Kalman gain equation:

  • If R increases → K decreases

This means:

When sensor noise is large, the filter trusts the sensor less.

Summary:

R \uparrow \;\Rightarrow\; K \downarrow \;\Rightarrow\; \text{less weight on measurements}

This matches physical intuition perfectly.
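A scalar sketch (with invented values) shows the effect directly:

```python
# Scalar Kalman gain: K = P*H / (H*P*H + R). All numbers are illustrative.
def kalman_gain(P, H, R):
    return P * H / (H * P * H + R)

P, H = 1.0, 1.0
K_quiet_sensor = kalman_gain(P, H, 0.1)    # small R: sensor is trustworthy
K_noisy_sensor = kalman_gain(P, H, 10.0)   # large R: sensor is noisy

print(K_quiet_sensor > K_noisy_sensor)  # True: R up -> K down
```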


Interpretation of Q (Process Noise)

Q represents system uncertainty.

From the covariance prediction:

P_k^- = A P_{k-1} A^T + Q

  • If Q increases → P increases
  • If P increases → K increases

Summary:

Q \uparrow \;\Rightarrow\; K \uparrow \;\Rightarrow\; \text{more weight on measurements}

When the model is unreliable, the filter relies more on sensor data.
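The same scalar sketch extends to Q (again with made-up numbers): a larger Q inflates the predicted covariance, which in turn raises the gain:

```python
# Scalar covariance prediction and gain; all values are illustrative.
def predict_P(A, P_prev, Q):
    return A * P_prev * A + Q      # P_k^- = A P_{k-1} A^T + Q (scalar form)

def kalman_gain(P, H, R):
    return P * H / (H * P * H + R)

A, H, R, P_prev = 1.0, 1.0, 1.0, 1.0
K_trusted_model = kalman_gain(predict_P(A, P_prev, 0.01), H, R)
K_shaky_model = kalman_gain(predict_P(A, P_prev, 5.0), H, R)

print(K_trusted_model < K_shaky_model)  # True: Q up -> P up -> K up
```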


Why This Is Intuitive

  • High sensor noise → trust the model
  • High system uncertainty → trust the sensor

The Kalman filter encodes this logic directly into its equations.

What appears complex is simply common sense written in matrix form.


Summary

Across two posts, we have covered the Kalman filter completely:

Part 1

  • Overall flow of the Kalman filter
  • Meaning of A, H, Q, and R

Part 2

  • Meaning of error covariance P
  • Physical interpretation of Kalman gain K
  • Direct connection to low pass filtering

If you understand these points, you understand nearly 100% of the Kalman filter conceptually.

The next step is implementation.

In the next post, we will move directly to coding the Kalman filter.


Related Posts

Kalman Filter (Part 1)

The Kalman filter is an optimal recursive estimator for linear systems, combining system models and noisy measurements through simple matrix operations.

Average Filter

The average filter is the simplest form of recursive estimation. Despite its simplicity, it provides strong noise reduction and forms the foundation of more advanced filters such as the Kalman filter.

Extended Kalman Filter (EKF)

The Extended Kalman Filter applies the Kalman filter to nonlinear systems by locally linearizing system and measurement models using Jacobians.

Low Pass Filter (LPF)

A low pass filter reduces noise while emphasizing recent measurements, overcoming the limitations of uniform averaging in moving average filters.