
Bob wants to test a possibly biased coin's probability $p$ of getting heads. His prior prediction is that $p \sim \mathrm{Uniform}(0, 1)$.

After getting all heads on 2 flips of the coin, why should the probability distribution change to $\mathrm{Beta}(3, 1)$? How is the Beta distribution relevant?

  • To me the question seems well formed. As I understand it, it is equivalent to: generate a random number $p$ uniformly between 0 and 1, then flip a coin that lands heads with probability $p$ two times. What is the distribution of $p$ over the trials that result in two heads? (If the initial $p$ was close to 0, only a few trials end with $HH$, so such values are under-represented in the final distribution.) 2012-11-15
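The comment's rejection-sampling picture can be checked with a short simulation (a minimal sketch; the sample size and seed are arbitrary choices): draw $p$ uniformly, flip a $p$-coin twice, and keep only the values of $p$ that produced $HH$. The surviving values should look like $\mathrm{Beta}(3,1)$, whose mean is $3/4$.

```python
import random

random.seed(0)

# Draw p uniformly, flip a p-coin twice, and keep p only if both flips
# are heads -- the rejection-sampling reading of the comment above.
kept = []
for _ in range(200_000):
    p = random.random()
    if random.random() < p and random.random() < p:  # two heads
        kept.append(p)

# Beta(3, 1) has mean 3/4; the surviving p's should match.
mean_p = sum(kept) / len(kept)
print(round(mean_p, 2))  # should be near 0.75
```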

2 Answers


Fixing $p$, what is the probability of obtaining $HH$? $P(HH | p) = p^2$

What is the density of $p$ under the initial distribution? $f(p) = 1$ (the $\mathrm{Uniform}(0,1)$ probability density function)

What is the total probability of getting $HH$? $P(HH) = \int_0^1 P(HH \mid p)\, f(p)\, dp = \int_0^1 p^2\, dp = \left[\frac{p^3}{3}\right]_0^1 = \frac{1}{3}$

And then, using Bayes' theorem: $f(p \mid HH) = \frac{P(HH \mid p) \cdot f(p)}{P(HH)} = \frac{p^2}{1/3} = 3p^2$

You can check that this is exactly the probability density function of $\mathrm{Beta}(3,1)$. More generally, if you observe $h$ heads and $t$ tails starting from the uniform prior, the posterior distribution becomes $\mathrm{Beta}(h+1, t+1)$.
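The derivation above can be verified numerically (a minimal sketch; the function name is my own, and it simply evaluates the $\mathrm{Beta}(h+1, t+1)$ density, whose normalizer can be written with Gamma functions):

```python
import math

def posterior_pdf(p, h, t):
    """Posterior density after h heads and t tails from a Uniform(0,1) prior:
    Beta(h+1, t+1), i.e. p^h (1-p)^t divided by the Beta-function normalizer."""
    B = math.gamma(h + 1) * math.gamma(t + 1) / math.gamma(h + t + 2)
    return p ** h * (1 - p) ** t / B

# Two heads, zero tails: the density is 3 p^2, matching Beta(3, 1).
assert abs(posterior_pdf(0.5, 2, 0) - 3 * 0.5 ** 2) < 1e-12

# Sanity check: the density integrates to 1 (midpoint rule).
n = 100_000
total = sum(posterior_pdf((i + 0.5) / n, 2, 0) for i in range(n)) / n
print(round(total, 6))  # ≈ 1.0
```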


Check http://www.kris-nimark.net/pdf/BayesianIntroSlides.pdf, pages 28 and up.

You're trying to estimate $p$ in $[0,1]$. The Beta is a convenient prior distribution because it is conjugate to the binomial likelihood: the posterior is again a Beta.
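The conjugacy behind this convenience can be sketched in two lines (a hypothetical helper of my own, not taken from the slides): with a $\mathrm{Beta}(a, b)$ prior on $p$, observing $h$ heads and $t$ tails yields a $\mathrm{Beta}(a + h, b + t)$ posterior, so updating is just adding counts.

```python
def update(a, b, h, t):
    """Conjugate update: Beta(a, b) prior plus h heads and t tails
    gives a Beta(a + h, b + t) posterior."""
    return a + h, b + t

# Uniform(0,1) is Beta(1,1); after HH the posterior is Beta(3, 1).
print(update(1, 1, 2, 0))  # (3, 1)
```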