
One thing I don't understand about Bayes' theorem is that I sometimes end up going in circles when I try to apply it. I want to use $P(B\mid C) = \frac{P(C\mid B) \times P(B)}{P(C)}$, but I don't know $P(C\mid B)$!

If I try to use Bayes' theorem to calculate it, then I end up right back where I started.

I know $P(B)$ and $P(C)$, but both depend on a variable $A$ whose probability I also know. How do I get out of the circle if I don't have $P(C\mid B)$ to start from?

  • So you know P(B | A) and P(C | A) and you want to compute P(B | C, A)? If so, you'll need access to the joint distribution of B and C (with or without A). From the joint you can compute the conditional, since P(A | B) = P(A, B) / P(B); see the sketch below. (2011-11-21)
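For concreteness, here is a minimal sketch of the computation that comment describes, using a hypothetical joint table for $B$ and $C$ (the numbers are made up purely for illustration):

```python
from fractions import Fraction

# Hypothetical joint distribution of B and C (values chosen only for illustration).
# Keys are (B, C) pairs of truth values.
joint = {
    (True, True):  Fraction(3, 10),
    (True, False): Fraction(2, 10),
    (False, True): Fraction(1, 10),
    (False, False): Fraction(4, 10),
}

# Marginal P(C = True): sum the joint over B.
p_c = sum(p for (b, c), p in joint.items() if c)

# Conditional P(B = True | C = True) = P(B = True, C = True) / P(C = True).
p_b_given_c = joint[(True, True)] / p_c

print(p_c)          # 2/5
print(p_b_given_c)  # 3/4
```

With $A$ in the picture, the same recipe applies to the joint conditioned on $A$: $P(B\mid C,A) = P(B,C\mid A)/P(C\mid A)$.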

2 Answers

4

To use this formula, you need to know $P(C|B)$.

0

If you think of Bayes's theorem with the denominator cleared, i.e., $P(B|C)P(C)=P(C|B)P(B)$, you see that it's just two expressions for $P(B\cap C)$. "Applying" it in the customary fraction form (as in the question) amounts to replacing one of these two expressions by the other. So it's not surprising that, if you do it again, replacing the "other" expression by the "one", you just undo what you did the first time.
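Written out, the "undoing" is just cancellation: substituting Bayes's theorem into itself gives

$$P(B\mid C)=\frac{P(C\mid B)\,P(B)}{P(C)}=\frac{\dfrac{P(B\mid C)\,P(C)}{P(B)}\cdot P(B)}{P(C)}=P(B\mid C),$$

a true but useless identity.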

Furthermore, if you know only $P(B)$ and $P(C)$, you won't be able to "get out of the circle" and find $P(B|C)$ and/or $P(C|B)$, because these are not determined by the information you have. Imagine, for example, flipping a fair coin and letting both $B$ and $C$ be "heads"; so $P(B)=P(C)=\frac12$ and $P(B|C)=P(C|B)=1$. Now consider the same $B$ with a different $C$, namely "tails". You still have $P(B)=P(C)=\frac12$ but now $P(B|C)=P(C|B)=0$.
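If it helps to see the coin example mechanically, here is a minimal sketch that just enumerates the two equally likely outcomes; it only spells out the arithmetic in the paragraph above:

```python
from fractions import Fraction

# Sample space of one fair coin flip: each outcome has probability 1/2.
outcomes = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

def prob(event):
    """P(event), where an event is a set of outcomes."""
    return sum(outcomes[w] for w in event)

def conditional(b, c):
    """P(B | C) = P(B and C) / P(C)."""
    return prob(b & c) / prob(c)

B  = {"H"}   # B = "heads"
C1 = {"H"}   # first choice of C: also "heads"
C2 = {"T"}   # second choice of C: "tails"

# Same marginals in both scenarios ...
print(prob(B), prob(C1), prob(C2))   # 1/2 1/2 1/2
# ... but completely different conditionals.
print(conditional(B, C1))            # 1
print(conditional(B, C2))            # 0
```

The marginals alone don't pin down the joint, which is exactly the point: no amount of rearranging Bayes's theorem can conjure $P(C\mid B)$ out of $P(B)$ and $P(C)$.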