
My question is about Bayesian inference for a partitioned multivariate Gaussian. To keep things simple, suppose there is a 2-dimensional Gaussian,

$$ X_1 \sim N(\mu_1, \sigma^2_1) \\ X_2 \sim N(\mu_2, \sigma^2_2) $$

with covariance $\sigma_{1,2}$.

Suppose we know $\sigma_{1,2}$, $\sigma^2_1$, and $\sigma^2_2$; we don't know $\mu_1$ and $\mu_2$, but we have priors for them:

$$ \mu_1 \sim N(\theta_1, \delta^2_1) \\ \mu_2 \sim N(\theta_2, \delta^2_2) $$

Now we have an observation $x_1$ of $X_1$. By Bayesian inference (the standard conjugate normal update) we get

$$ \theta'_1 \mid x_1 = \frac{\delta^2_1 x_1 + \sigma^2_1 \theta_1}{\delta^2_1 + \sigma^2_1}, \qquad \delta'^2_1 \mid x_1 = \frac{\delta^2_1 \sigma^2_1}{\delta^2_1 + \sigma^2_1}, $$

so that $\mu_1 \mid x_1 \sim N(\theta'_1, \delta'^2_1)$.
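
For concreteness, here is a quick numerical check of this update, with made-up values for $\theta_1$, $\delta^2_1$, $\sigma^2_1$ and $x_1$:

```python
# Made-up values, for illustration only
theta1, delta1_sq = 0.0, 4.0   # prior: mu_1 ~ N(theta1, delta1_sq)
sigma1_sq = 1.0                # known variance of X_1
x1 = 2.5                       # single observation of X_1

# Conjugate normal-normal update for mu_1 given x_1
theta1_post = (delta1_sq * x1 + sigma1_sq * theta1) / (delta1_sq + sigma1_sq)
delta1_sq_post = (delta1_sq * sigma1_sq) / (delta1_sq + sigma1_sq)

print(theta1_post, delta1_sq_post)  # 2.0 0.8
```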

Meanwhile, by the conditioning formula for a partitioned Gaussian, we have

$$ X_2 | X_1 \sim N \left(\mu_2 + \frac{\sigma_{1,2}}{\sigma^2_1}(x_1 - \mu_1), \sigma^2_2 - \frac{\sigma^2_{1, 2}}{\sigma^2_1} \right) $$
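
As a sanity check, here is a small Monte Carlo simulation of this conditional (parameter values are made up; `numpy` assumed available):

```python
import numpy as np

# Made-up parameter values, for illustration only
mu1, mu2 = 0.0, 1.0
sigma1_sq, sigma2_sq, sigma12 = 1.0, 2.0, 0.6
x1 = 2.5

# Conditional mean and variance of X_2 given X_1 = x_1
cond_mean = mu2 + (sigma12 / sigma1_sq) * (x1 - mu1)
cond_var = sigma2_sq - sigma12**2 / sigma1_sq

# Sample the joint Gaussian and keep draws whose X_1 falls close to x_1
rng = np.random.default_rng(0)
cov = np.array([[sigma1_sq, sigma12], [sigma12, sigma2_sq]])
draws = rng.multivariate_normal([mu1, mu2], cov, size=500_000)
near = draws[np.abs(draws[:, 0] - x1) < 0.05, 1]

print(cond_mean, near.mean())  # both close to 2.5
print(cond_var, near.var())    # both close to 1.64
```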

Finally, my question: how do I update the mean of the correlated random variable by Bayesian inference, $$ p(\mu_2 \mid x_1) = \frac{p(x_1 \mid \mu_2)\, p(\mu_2)}{p(x_1)}, $$ given that I don't know how to deal with $p(x_1 \mid \mu_2)$? Or is there another way around to get it? I hope you get the idea of what I'm trying to do.
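
To make the target concrete: since marginally $X_1 \mid (\mu_1, \mu_2) \sim N(\mu_1, \sigma^2_1)$ in the model above, $p(\mu_2 \mid x_1)$ can at least be estimated by importance sampling from the priors, which I could use to sanity-check any closed-form answer. A sketch (all numeric values made up):

```python
import numpy as np

# Made-up prior/parameter values, for illustration only
theta1, delta1_sq = 0.0, 4.0   # prior: mu_1 ~ N(theta1, delta1_sq)
theta2, delta2_sq = 1.0, 3.0   # prior: mu_2 ~ N(theta2, delta2_sq)
sigma1_sq = 1.0                # known variance of X_1
x1 = 2.5                       # the observation

rng = np.random.default_rng(1)
n = 200_000
mu1 = rng.normal(theta1, np.sqrt(delta1_sq), n)  # prior draws of mu_1
mu2 = rng.normal(theta2, np.sqrt(delta2_sq), n)  # prior draws of mu_2

# Importance weights proportional to p(x_1 | mu_1, mu_2); marginally
# X_1 ~ N(mu_1, sigma_1^2), so the weight depends only on mu_1
w = np.exp(-(x1 - mu1) ** 2 / (2 * sigma1_sq))
w /= w.sum()

# Weighted estimates of the posterior mean and variance of mu_2 given x_1
post_mean = np.sum(w * mu2)
post_var = np.sum(w * mu2 ** 2) - post_mean ** 2
print(post_mean, post_var)
```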

Thanks!

  • Or, from $X_2 \mid X_1$ we have $$\mu_2 \mid x_1 = \mu_2 + \frac{\sigma_{1,2}}{\sigma^2_1}(x_1-\mu_1).$$ I'm not sure how to carry on from this to get the p.d.f. of $\mu_2 \mid x_1$ either. Thanks! (2012-03-16)
  • For your $2$-dimensional Gaussian, you neglected to specify the correlation between $X_1$ and $X_2$. (2012-03-16)
  • @MichaelHardy I said "with covariance $\sigma_{1,2}$". (2012-03-16)
  • But you did not say "we know" the covariance. (2012-03-16)
  • @Henry Sure. Thanks. Edited. (2012-03-16)
  • Also asked on stats.SE? (2012-03-17)
  • You've given the conditional distribution of $(X_1,X_2)$ given $(\mu_1,\mu_2)$ and the prior (or marginal) distribution of $(\mu_1,\mu_2)$. It would make sense to ask about the posterior distribution of $(\mu_1,\mu_2)$ given $(X_1,X_2)$, but not about the posterior distribution of $(\theta_1,\delta_1)$. The way you've phrased it, $\theta_1$ and $\delta_1$ are known. (2012-03-17)
  • @DilipSarwate Yes, after finding that it has the "normal-distribution" and "bayesian" tags, which suggest a more relevant topic there. (2012-03-18)
  • @MichaelHardy Why do you think inspecting $p(\mu_2 \mid x_1)$ doesn't make sense? (2012-03-18)
  • @shuaiyuancn: It does make sense, and I said so. It's something else that doesn't make sense. (2012-03-18)
  • @MichaelHardy I think you got it wrong. I didn't ask about the posterior distribution of $(\theta_1, \delta_1)$; instead I was asking about the posterior distribution of $(\mu_1, \mu_2)$, which should be normal, taking $(\theta_1, \delta_1, \theta_2, \delta_2)$ as parameters. (2012-03-18)
  • Then I think you got it wrong. You wrote $\theta'_1 \mid x_1 = \frac{\delta^2_1 x_1 + \sigma^2_1 \theta_1}{\delta^2_1 + \sigma^2_1}$ and $\delta'^2_1 \mid x_1 = \frac{\delta^2_1 \sigma^2_1}{\delta^2_1 + \sigma^2_1}$. That is a statement about the posterior distribution of $\theta'_1$ and $\delta'^2_1$. (2012-03-18)
  • @MichaelHardy No. Writing $\theta'_1 \mid x_1$ simply means the updated $\theta_1$ given the observation $x_1$. $\theta_1$ is known (which you're also aware of), so it has no distribution at all. Look at the r.h.s. of these two equations: are there any random variables involved? (2012-03-19)
