
There is a 1-dimensional Gaussian random variable $x$ with $P(x) = \mathcal{N}(x\ |\ 0,1)$, where $0$ is the mean and $1$ is the variance.

There is a binary random variable $Y$ with $P(Y = 1|x) = \begin{cases} 0.9 \text{ if } x > 0\\ 0.1 \text{ otherwise} \end{cases}$

With rejection sampling I should compute a sample set representing the posterior $P(x|Y=1)$. From this I should compute an estimate of the posterior mean $\int x\, P(x|Y=1)\, dx$.


I am stuck on the first part:

For sampling I do the following; since $Y$ depends on $x$ and is observed, I compute $P(x)$ first:

  1. I generate a random number between $-1$ and $1$ and evaluate my Gaussian function at it
  2. I use this resulting value of $P(x)$ as input for $P(Y=1|x)$ in order to get the probability
  3. I generate another random number $r$ between $-1$ and $1$ and record the boolean value $r < P(Y=1|x)$
  4. If the sample returned $false$ in step 3, I reject it

I am not sure if this is the right approach. I have done sampling of binary random variables before, but not mixed with a Gaussian. After these steps, I don't know how to compute the posterior: normally I count the samples where the variable I am looking for takes the value I am looking for, and divide that count by the total number of samples.

But this time I am confused, since the variable I am looking for is not discrete like a binary random variable but continuous, so which values should I look for?

Can someone give me ideas on that?

1 Answer


Note that $\int x\,P(x|Y = 1)\, dx$ is nothing but $\mathbb{E}_{P(X|Y = 1)}[X]$, which can be approximated as $\frac{1}{m} \sum_{i = 1}^{m} x^i$, where $x^i$ is the $i^{\text{th}}$ sample from the $P(x|Y = 1)$ distribution.

  1. Generate a Gaussian-distributed random number $x \sim \mathcal{N}(0, 1)$
  2. If $x > 0$, generate $Y \sim \text{Bernoulli}(0.9)$; else generate $Y \sim \text{Bernoulli}(0.1)$
  3. If $Y = 1$, retain the sample; else reject it

Take all the retained samples, add the corresponding x's and average them.
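The steps above can be sketched in a few lines of Python (the function name `sample_posterior` and the sample count are my own choices, not from the question):

```python
import random

def sample_posterior(num_samples, seed=None):
    """Rejection-sample x from P(x | Y = 1)."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < num_samples:
        x = rng.gauss(0.0, 1.0)       # step 1: x ~ N(0, 1)
        p = 0.9 if x > 0 else 0.1     # P(Y = 1 | x)
        if rng.random() < p:          # steps 2-3: simulate Y and keep the sample iff Y = 1
            accepted.append(x)
    return accepted

samples = sample_posterior(100_000)
posterior_mean = sum(samples) / len(samples)
print(posterior_mean)  # roughly 0.64 (exact value: 0.8 * sqrt(2/pi))
```

For this particular model the posterior mean can be checked in closed form: $\mathbb{E}[x \mid Y=1] = \frac{0.9 - 0.1}{P(Y=1)}\cdot\frac{1}{\sqrt{2\pi}} = 0.8\sqrt{2/\pi} \approx 0.638$, which the Monte Carlo estimate should approach as the sample count grows.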

  • 1. It's just a regular expectation; the approximation is a regular Monte Carlo approximation. 2. The way you described generating Gaussian random numbers doesn't look right. Have you come across the [box-muller transform](http://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform)? 2012-05-28