
I am studying the Kalman filter algorithm but I can't understand one point. The $k$ factor has to be chosen to minimize the variance of the combined signal. This leads to the following equation:

$k=\frac{\sigma^2_f}{\sigma^2_f+\sigma^2_o}$

where $\sigma^2_f$ is the variance of the forecast signal and $\sigma^2_o$ is the variance of the observed signal. I don't understand why, in general, they have to be different. Why is $\sigma^2_f \neq \sigma^2_o$? Why are the distributions of the two variables (forecast and observed) different?
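For reference, the way I understand this expression to arise (assuming the forecast and observation errors are independent) is by writing the combined estimate as a blend of forecast and observation and minimizing its variance:

$$\hat{x} = x_f + k\,(x_o - x_f), \qquad \operatorname{Var}(\hat{x}) = (1-k)^2\sigma^2_f + k^2\sigma^2_o,$$

and setting the derivative with respect to $k$ to zero:

$$-2(1-k)\sigma^2_f + 2k\,\sigma^2_o = 0 \;\Longrightarrow\; k = \frac{\sigma^2_f}{\sigma^2_f + \sigma^2_o}.$$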

  • Just a note on wording: "The $k$ factor" is more commonly known as the "Filter Gain". (2012-04-15)

1 Answer


I assume you mean a one-dimensional Kalman filter, since otherwise the relation you stated is more complex (a covariance matrix instead of the scalar variance $\sigma^2$).

In general, you may have more than one measurement, and the variance of each variable will of course be different (e.g. machines with different measurement inaccuracies). The variance of the predicted state is expected to be smaller than the measurement variances, since at that point we have more information: we know the result of the previous prediction as well as a number of measurements, which increases our confidence in (reduces the variance of) the next prediction.
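To make this concrete, here is a minimal one-dimensional sketch in Python. The noise variances `sigma_o2` and `q`, and the static true state, are made-up values for illustration only; the point is that repeated updates drive the forecast variance well below the fixed observation variance, so $\sigma^2_f \neq \sigma^2_o$ is the typical situation rather than the exception.

```python
import numpy as np

sigma_o2 = 4.0   # observation (measurement) noise variance, assumed known
q = 0.1          # process noise variance added at each forecast step (assumed)

x = 0.0          # initial state estimate
sigma_f2 = 10.0  # initial forecast variance (large: we know little at first)

rng = np.random.default_rng(0)
true_state = 5.0  # static true state, for simulation only

for step in range(10):
    # update step: blend forecast and observation using the gain k
    z = true_state + rng.normal(0.0, np.sqrt(sigma_o2))  # simulated noisy measurement
    k = sigma_f2 / (sigma_f2 + sigma_o2)                  # the gain from the question
    x = x + k * (z - x)                                   # corrected estimate
    sigma_a2 = (1.0 - k) * sigma_f2                       # posterior (analysis) variance

    # forecast step: the state is static here, so only process noise is added
    sigma_f2 = sigma_a2 + q

    print(f"step {step}: k={k:.3f}, estimate={x:.3f}, forecast var={sigma_f2:.3f}")
```

Running it, the forecast variance settles near 0.6 while the observation variance stays at 4.0, which is exactly the asymmetry the gain formula exploits.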