
For instance, given:

$p(y) = G(y|0, I)$

$p(x|y) = G(x|By, \Sigma)$

$p(x)=G(x|0, BB^T+\Sigma)$

where $x$ and $y$ are $m$-dimensional vectors with $m>1$.

Now I want to compute $p(y|x)$. One way is to substitute the densities above into $p(y|x)=\frac{p(y)\,p(x|y)}{p(x)}$, but I wonder whether there is a faster way to do this. Thanks!

1 Answer


One quick way to do this is to use the fact that the posterior is also Gaussian, and therefore has the following closed-form solution.

(To make the expression more general, let us assume that the prior has mean $M$ and covariance $S$.) $$p(y|x) = G\big(y\,\big|\,(S^{-1} + B^T \Sigma^{-1} B)^{-1}(S^{-1}M + B^T \Sigma^{-1} X),\; (S^{-1} + B^T \Sigma^{-1} B)^{-1}\big)$$ where $X = [x_1, x_2, \ldots, x_n]$ is the matrix of observations; for a single observation, $X = x$.

The general solution can be adapted to your case by setting $M=0$ and $S=I$, so the closed-form solution becomes: $$p(y|x) = G(y|(I + B^T \Sigma^{-1} B)^{-1}(B^T \Sigma^{-1}X), (I + B^T \Sigma^{-1} B)^{-1}) $$
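As a quick numerical sanity check, here is a minimal NumPy sketch of this formula for a single observation (all values below are hypothetical examples, not from the question). It also cross-checks the result against the mean you would get by conditioning the joint Gaussian directly, $B^T(BB^T+\Sigma)^{-1}x$, which agrees with the formula above by the push-through (Woodbury-type) identity:

```python
import numpy as np

# Hypothetical example values: a random square B, diagonal noise covariance.
rng = np.random.default_rng(0)
m = 3
B = rng.normal(size=(m, m))
Sigma = 0.5 * np.eye(m)              # observation noise covariance
x = rng.normal(size=m)               # a single observation

# Posterior precision: I + B^T Sigma^{-1} B  (M = 0, S = I)
precision = np.eye(m) + B.T @ np.linalg.solve(Sigma, B)
cov_post = np.linalg.inv(precision)  # posterior covariance
mean_post = cov_post @ (B.T @ np.linalg.solve(Sigma, x))  # posterior mean

# Cross-check: conditioning the joint Gaussian gives B^T (B B^T + Sigma)^{-1} x
mean_direct = B.T @ np.linalg.solve(B @ B.T + Sigma, x)
```

Using `np.linalg.solve` rather than forming `Sigma^{-1}` explicitly is both faster and numerically safer.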

If you want to express the posterior in terms of the maximum-likelihood (or least-squares) estimate and the prior mean, substitute $By_{ML}$ for $X$, where $y_{ML}$ is the maximum-likelihood estimate. The closed-form solution is then: $$p(y|x) = G(y|(I + B^T \Sigma^{-1} B)^{-1}(B^T \Sigma^{-1}By_{ML}), (I + B^T \Sigma^{-1} B)^{-1}) $$ where $y_{ML}=(B^TB)^{-1}B^TX$.
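A small sketch verifying this substitution numerically, under the assumption that $B$ is square and invertible (as in the question, where $x$ and $y$ are both $m$-dimensional): in that case $By_{ML} = x$ exactly, so the posterior mean is unchanged. The specific matrices below are hypothetical examples:

```python
import numpy as np

# Check that substituting B y_ML for x leaves the posterior mean unchanged
# when B is square and invertible. All values are hypothetical examples.
rng = np.random.default_rng(1)
m = 4
B = rng.normal(size=(m, m)) + m * np.eye(m)  # well-conditioned square B
Sigma = np.eye(m)
x = rng.normal(size=m)

y_ml = np.linalg.solve(B.T @ B, B.T @ x)     # (B^T B)^{-1} B^T x
precision = np.eye(m) + B.T @ np.linalg.solve(Sigma, B)
mean_from_x = np.linalg.solve(precision, B.T @ np.linalg.solve(Sigma, x))
mean_from_ml = np.linalg.solve(precision, B.T @ np.linalg.solve(Sigma, B @ y_ml))
```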

As a side note, when dealing with matrices the normal distribution is often parameterized in terms of the precision matrix (the inverse of the covariance matrix), in which case the expression for the posterior can be written without inverting any covariance matrix.
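A minimal sketch of that precision parameterization, assuming the prior precision $S^{-1}$ and noise precision $\Sigma^{-1}$ are stored directly (the specific numbers are hypothetical examples): the posterior precision is formed by matrix products only, and the posterior mean comes from one linear solve.

```python
import numpy as np

# Precision parameterization: no covariance matrix is ever inverted.
prior_prec = np.eye(3)                 # S^{-1} (here S = I)
noise_prec = 2.0 * np.eye(3)           # Sigma^{-1}
B = np.diag([1.0, 2.0, 3.0])
x = np.array([1.0, 0.5, -1.0])
M = np.zeros(3)                        # prior mean

post_prec = prior_prec + B.T @ noise_prec @ B
# Posterior mean solves: post_prec @ mean = S^{-1} M + B^T Sigma^{-1} x
mean_post = np.linalg.solve(post_prec, prior_prec @ M + B.T @ noise_prec @ x)
```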