
In my estimation theory textbook, the following is stated as a reminder without any further explanation:

Consider the Gaussian random vector $\mathbf{z} = \begin{pmatrix}\mathbf{x} \\ \mathbf{y}\end{pmatrix}$ with mean $\hat{\mathbf{z}} = \begin{pmatrix}\hat{\mathbf{x}} \\ \hat{\mathbf{y}}\end{pmatrix}$ and covariance matrix $\mathbf{C}_z = \begin{pmatrix} \mathbf{C}_{xx} & \mathbf{C}_{xy} \\ \mathbf{C}_{yx} & \mathbf{C}_{yy} \end{pmatrix}$.

Given a measurement $y^*$, the conditional density $f\left(x \mid y^*\right)$ of $x$ is Gaussian with mean

$\hat{x} + \mathbf{C}_{xy}\mathbf{C}_{yy}^{-1} \left( y^* - \hat{y} \right)$

and covariance matrix

$\mathbf{C}_{xx} - \mathbf{C}_{xy} \mathbf{C}_{yy}^{-1} \mathbf{C}_{yx}$.

(Pseudo-inverses replace inverses where necessary.)

Having never worked with conditional densities before, I don't see how to derive these formulas, or what the intuition behind them is.
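
For concreteness, here is a minimal numerical sanity check of the stated formulas in the scalar case (NumPy only; all numbers are made up for illustration): it evaluates the joint density along the slice $y = y^*$, normalizes it over $x$, and compares the resulting mean and variance with the two expressions above.

```python
import numpy as np

# Illustrative numbers (not from the textbook) for a 2-D joint Gaussian
# z = (x, y) with mean (x_hat, y_hat) and covariance [[Cxx, Cxy], [Cxy, Cyy]].
x_hat, y_hat = 1.0, -0.5
Cxx, Cxy, Cyy = 2.0, 0.8, 1.5
C_z = np.array([[Cxx, Cxy],
                [Cxy, Cyy]])
y_star = 0.7

# Conditional mean and variance from the stated formulas (scalar case).
cond_mean = x_hat + Cxy / Cyy * (y_star - y_hat)
cond_var = Cxx - Cxy / Cyy * Cxy

# Numerical check: evaluate the joint density along the slice y = y_star
# (up to a constant), normalize over x, and read off mean and variance.
x_grid = np.linspace(-15.0, 15.0, 200_001)
dx = x_grid[1] - x_grid[0]
dz = np.column_stack([x_grid - x_hat, np.full_like(x_grid, y_star - y_hat)])
quad = np.einsum('ni,ij,nj->n', dz, np.linalg.inv(C_z), dz)
f_cond = np.exp(-0.5 * quad)
f_cond /= f_cond.sum() * dx                      # normalized f(x | y*)

mean_num = (x_grid * f_cond).sum() * dx
var_num = ((x_grid - mean_num) ** 2 * f_cond).sum() * dx

print(f"mean: formula = {cond_mean:.6f}, numerical = {mean_num:.6f}")
print(f"var : formula = {cond_var:.6f}, numerical = {var_num:.6f}")
```

The two pairs of numbers should agree to within the grid resolution, which confirms the formulas are consistent but not where they come from.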

1 Answer


A detailed proof with carefully presented computations is given here; see Part b of Theorem 4.
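
In brief, the argument is the standard completing-the-square one: write $f(x \mid y^*) \propto f(x, y^*)$ as a function of $x$, plug in the block (Schur-complement) form of $\mathbf{C}_z^{-1}$, and collect the terms in $x$. A sketch (not in the linked proof's notation):

$$
f(x \mid y^*) \;\propto\; f(x, y^*) \;\propto\; \exp\!\left(-\tfrac12
\begin{pmatrix} x - \hat{x} \\ y^* - \hat{y} \end{pmatrix}^{\top}
\mathbf{C}_z^{-1}
\begin{pmatrix} x - \hat{x} \\ y^* - \hat{y} \end{pmatrix}\right)
\;\propto\; \exp\!\left(-\tfrac12\,(x - \mu)^{\top}\,\mathbf{S}^{-1}\,(x - \mu)\right),
$$

where $\mu = \hat{x} + \mathbf{C}_{xy}\mathbf{C}_{yy}^{-1}\left(y^* - \hat{y}\right)$ and $\mathbf{S} = \mathbf{C}_{xx} - \mathbf{C}_{xy}\mathbf{C}_{yy}^{-1}\mathbf{C}_{yx}$.

The last step uses the block-inverse identity that the $(x,x)$ block of $\mathbf{C}_z^{-1}$ is $\mathbf{S}^{-1}$ and the $(x,y)$ block is $-\mathbf{S}^{-1}\mathbf{C}_{xy}\mathbf{C}_{yy}^{-1}$; completing the square in $x$ and absorbing everything that depends only on $y^*$ into the normalizing constant leaves a Gaussian in $x$ with mean $\mu$ and covariance $\mathbf{S}$, which are exactly the stated expressions.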