
We are given that $Y$ is a Gamma$(a,b)$ random variable and $X|Y=y$ is Poisson$(y)$. We are asked to show that $Y|X=x$ is Gamma$(a+x,b+1)$.

So far I have tried to find the joint cdf of $X,Y$, but I failed to do so, and I also doubt that this is the method I ought to use.

I have also tried to treat this problem as a manifestation of the Gamma/Poisson Bayesian model, but that didn't seem to elucidate the problem much for me.

Pointers would be appreciated.

2 Answers


This is a simple hierarchical model. To get the conditional distribution $Y \mid X = x$, we note $$f_{Y \mid X}(y) = \frac{\Pr[X = x \mid Y = y]f_Y(y)}{\Pr[X = x]}.$$ The denominator, the unconditional (marginal) probability mass function of $X$, is not a function of $y$, thus we do not need to compute it to recognize the form of the likelihood in the numerator. Indeed, we can remove all constants of proportionality with respect to $y$: we simply have $$\begin{align*} f_{Y \mid X}(y) &\propto e^{-y} \frac{y^x}{x!} \cdot \frac{b^a y^{a-1} e^{-by}}{\Gamma(a)} \\ &\propto y^{a+x-1} e^{-(b+1)y}. \end{align*}$$ The other factors, $x!$, $b^a$, and $\Gamma(a)$, are not functions of $y$. What remains is very clearly proportional to a gamma distribution with shape $a+x$ and rate $b+1$; i.e., the conditional density is $$f_{Y \mid X}(y) = \frac{(b+1)^{a+x} y^{a+x-1} e^{-(b+1)y}}{\Gamma(a+x)}.$$
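A quick Monte Carlo sanity check of this conjugacy (a sketch; the parameter values $a=3$, $b=2$ and the observed count $x=4$ are arbitrary illustrations, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 3.0, 2.0        # illustrative Gamma shape and rate
n = 200_000

# simulate the hierarchy: Y ~ Gamma(a, rate=b), then X | Y=y ~ Poisson(y)
y = rng.gamma(shape=a, scale=1.0 / b, size=n)
x = rng.poisson(y)

# condition on an observed count and compare empirical posterior moments
# with the claimed Gamma(a + x, rate = b + 1) posterior
x_obs = 4
y_post = y[x == x_obs]
post_mean = (a + x_obs) / (b + 1)         # theoretical posterior mean
post_var = (a + x_obs) / (b + 1) ** 2     # theoretical posterior variance

print("empirical mean:", round(y_post.mean(), 3), "theory:", round(post_mean, 3))
print("empirical var: ", round(y_post.var(), 3), "theory:", round(post_var, 3))
```

With $2 \times 10^5$ draws the empirical moments of the retained $Y$ values should agree with the Gamma$(a+x, b+1)$ moments to within Monte Carlo error.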

  • I don't understand how you arrive at your first statement. (2017-02-22)
  • @Ceph It is simply Bayes' theorem. (2017-02-22)
  • I have seen Bayes' theorem only as describing the conditional probability of events. Is it appropriate to interpret $f_Y(y)$ as $Pr(Y=y)$? (If not, how else can one arrive at your statement of Bayes' theorem from the version that has $Pr(A|B)=Pr(B|A)Pr(A)/Pr(B)$?) I thought this was not appropriate, which is why I couldn't understand how to approach the problem. (2017-02-22)
  • If you don't like thinking about that formula as Bayes' theorem, then surely you would agree that the **joint** distribution and conditional distributions are related via the formula $$f_{Y \mid X}(y) \Pr[X = x] = f_{X,Y}(x,y) = \Pr[X = x \mid Y = y]f_Y(y).$$ The fact that $X$ is discrete and $Y$ is continuous only changes my choice of notation, not the underlying concept. (2017-02-22)
  • It is precisely because the final equality in your statement was not obvious to me that I wrote up this question. I know that similar statements are true when $X,Y$ are both discrete, or both continuous. I do not see how to show that this generalizes to a combination of discrete and continuous random variables. (2017-02-22)
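One way to justify the mixed discrete/continuous equality raised in the comments (a sketch, assuming $Y$ has a density $f_Y$ and that the conditional density $f_{Y \mid X}$ exists): compute $\Pr[X = x,\, Y \in A]$ two ways for an arbitrary Borel set $A$. By the tower property,

$$\Pr[X = x,\, Y \in A] = \mathbb{E}\bigl[\mathbf{1}\{Y \in A\}\,\Pr[X = x \mid Y]\bigr] = \int_A \Pr[X = x \mid Y = y]\, f_Y(y)\, dy,$$

while by the definition of the conditional density of $Y$ given the event $\{X = x\}$,

$$\Pr[X = x,\, Y \in A] = \Pr[X = x] \int_A f_{Y \mid X}(y)\, dy.$$

Since the two integrals agree for every $A$, the integrands agree almost everywhere, which gives $f_{Y \mid X}(y)\Pr[X = x] = \Pr[X = x \mid Y = y] f_Y(y)$.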

If we let $p$ denote density/mass functions, then we have $p_{Y|X}(y|x)=\frac{p_Y(y)p_{X|Y}(x|y)}{p_X(x)}$. You know all of the RHS terms except for $p_X(x)$, which you can calculate if you're familiar with compound distributions: a Gamma mixture of Poissons is negative binomial (Wiki link). Eyeballing the mean and variance suggests $r=a$, but the convention for $p$ matters: in the parameterization with pmf $\binom{k+r-1}{k}(1-p)^r p^k$, matching the marginal requires $p=\frac{1}{1+b}$ (taking $p=\frac{b}{1+b}$ leaves a stray $\frac{b^a}{b^x}$ factor that should be 1). Putting this all together, and remembering that the binomial coefficient satisfies $\binom{k+r-1}{k}=\frac{\Gamma(k+r)}{\Gamma(k+1)\Gamma(r)}$, everything slots into place.

  • The computation of $p_X$ is not useful and should be skipped. This is a general theme of the subject. (2017-02-22)