· Hankyu Kim · Filter · 4 min read

Kalman Filter (Part 1)

The Kalman filter is an optimal recursive estimator for linear systems, combining system models and noisy measurements through simple matrix operations.


Introduction

This post introduces the Kalman Filter, one of the most influential tools in control and estimation theory, originally developed by Rudolf Kalman.

For linear systems, the Kalman filter is mathematically proven to be an optimal estimator.
In practice, it is so powerful that for such systems it alone is often sufficient.

Despite its reputation, the Kalman filter is not conceptually difficult.
Most learners give up not because of the idea, but because of the intimidating equations.


A Common Misconception

When people first see the Kalman filter equations, the typical reaction is:

“This looks impossible to understand.”

In reality:

  • The equations are long, but
  • Each symbol has a clear physical meaning
  • The computations are simple matrix arithmetic

Understanding the roles of the variables matters far more than memorizing derivations.


What You Do NOT Need to Do

You do not need to derive the Kalman filter from first principles every time.

Just as we do not re-derive integrals from Riemann sums whenever we integrate,
we use the Kalman filter by understanding:

  • What each term represents
  • How it fits into our system

The focus should be on how to apply it, not on proving it again.


Kalman Filter Structure

If a variable has a hat symbol, it represents an estimate.

  • k : current time step
  • k-1 : previous time step

The Kalman filter consists of five simple steps:


Step 1: Initialization

The estimate and error covariance from the previous step become the initial values for the next step.

This highlights a key property:

The Kalman filter is recursive.
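In symbols, the recursion starts from initial guesses and thereafter feeds its own output back in as the starting point of the next pass:

$$\hat{x}_0,\ P_0 \;\rightarrow\; \hat{x}_1,\ P_1 \;\rightarrow\; \cdots \;\rightarrow\; \hat{x}_{k-1},\ P_{k-1} \;\rightarrow\; \hat{x}_k,\ P_k$$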


Step 2: Prediction of Estimate and Covariance

The prediction equations may look complex, but in practice:

  • Only the covariance P is being updated
  • A and Q are already known system parameters

This step is simply:

  • Multiply by A
  • Add Q

Nothing more than arithmetic.
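For reference, these are the standard prediction equations (the superscript minus marks a predicted, pre-measurement quantity):

$$\hat{x}_k^- = A\,\hat{x}_{k-1}, \qquad P_k^- = A\,P_{k-1}\,A^T + Q$$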


Step 3: Kalman Gain Computation

The Kalman gain determines how much we trust:

  • The prediction
  • The measurement

Again:

  • H and R are fixed
  • Only P comes from the previous step

This step is also just algebra.
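Written out in the standard form, the gain is:

$$K_k = P_k^-\,H^T \left( H\,P_k^-\,H^T + R \right)^{-1}$$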


Step 4: State Update

The current state estimate is computed using:

  • The predicted state
  • The measured value z_k
  • The Kalman gain

This is where the filter blends model-based prediction and sensor data.
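In the standard form, the update weights the measurement residual by the gain:

$$\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H\,\hat{x}_k^- \right)$$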


Step 5: Covariance Update

The error covariance P_k is updated and passed to the next iteration.

This closes the loop and returns to Step 1.
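In the standard form:

$$P_k = \left( I - K_k\,H \right) P_k^-$$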


Why the Kalman Filter Is Easier Than It Looks

Among all the variables in the equations:

  • A, H, Q, R are given
  • Only P and K are computed

Once these matrices are defined, everything else is simple computation.

The perceived difficulty comes from seeing all equations at once.
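As a concrete illustration, here is a minimal NumPy sketch of one full iteration (Steps 2 through 5). It assumes A, H, Q, R have already been chosen for your system and that x_est and P carry over from the previous step; it is a sketch of the standard equations, not a drop-in library.

```python
import numpy as np

def kalman_step(x_est, P, z, A, H, Q, R):
    # Step 2: predict the state and covariance forward one step
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q

    # Step 3: compute the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)

    # Step 4: blend the prediction with the measurement z
    x_est = x_pred + K @ (z - H @ x_pred)

    # Step 5: update the error covariance for the next iteration
    P = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_est, P
```

Calling this function once per measurement, in a loop, is the entire filter.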


What Are A and H?

A and H come from the state-space representation.

State-space equations:

$$\dot{x} = A x + B u, \qquad y = C x + D u$$

These equations describe:

  • How the system evolves
  • How measurements relate to the state

Example: Constant Velocity Motion

Suppose we want to estimate position under constant velocity, using the discrete-time model:

$$x_{k+1} = A x_k + w_k, \qquad z_k = H x_k + v_k$$

where w_k and v_k are the process and measurement noise, with covariances Q and R.

Define the state:

$$x_k = \begin{bmatrix} \text{position} \\ \text{velocity} \end{bmatrix}$$

Then:

$$A = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}$$

This represents:

  • Current position = previous position + velocity × time
  • Velocity remains constant

This matrix encodes the physics of the system.
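As a quick sanity check of what A does (the numbers are illustrative, not from this post):

```python
import numpy as np

dt = 0.1                    # sample time, assumed 0.1 s for illustration
A = np.array([[1.0, dt],
              [0.0, 1.0]])  # constant-velocity state transition

x = np.array([0.0, 2.0])    # position 0 m, velocity 2 m/s
print(A @ x)                # [0.2 2. ] -> position advanced, velocity unchanged
```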


Why A Is the Hard Part

The Kalman filter itself is not difficult.

The real challenge is constructing A, which requires:

  • Physical insight
  • Mathematical modeling
  • Understanding the system dynamics

Once A is correct, the Kalman filter becomes straightforward.


What Is H?

H defines what the sensor measures.

If:

$$x_k = \begin{bmatrix} \text{position} \\ \text{velocity} \end{bmatrix}$$

Then:

  • Measuring position → H = [1  0]
  • Measuring velocity → H = [0  1]

The Kalman filter uses the measured variable as a reference to estimate the others.
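For example, with a position-only sensor (and ignoring measurement noise) the measurement model reduces to:

$$z_k = H\,x_k = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \text{position} \\ \text{velocity} \end{bmatrix} = \text{position}$$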


What Are Q and R?

Q and R represent noise.

  • R: measurement noise
  • Q: model uncertainty

No sensor is perfect, and no model fully captures reality.

These uncertainties are explicitly included in the equations.


Gaussian Noise Assumption

Kalman filters assume that noise follows a Gaussian distribution.

This is not arbitrary:

  • Many real-world noise sources approximate Gaussian behavior
  • Gaussian noise enables optimal estimation in linear systems

This assumption underlies the optimality of the Kalman filter.


Practical Meaning of Q and R

  • Q is usually tuned based on experience
  • R is often provided by sensor manufacturers

They describe uncertainty, not errors to be eliminated.
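As an illustrative starting point for the constant-velocity example above (the specific numbers are assumptions to be tuned, not values from this post):

```python
import numpy as np

# Position sensor with roughly 0.5 m standard deviation (check the datasheet)
R = np.array([[0.5**2]])

# Modest process noise: trust the constant-velocity model, but let the
# velocity drift a little between steps
Q = np.diag([0.01, 0.1])
```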


Summary

  • The Kalman filter is an optimal estimator for linear systems
  • It is recursive and computationally simple
  • A and H describe system physics and sensing
  • Q and R describe unavoidable noise
  • The complexity is visual, not conceptual

If you understand these points, you already understand about 80% of the Kalman filter.

The remaining 20% lies in understanding the covariance P and the Kalman gain K, which will be covered next.

