Given $P(A|B \cap C)$, where $B$ is independent of $A$ and $C$ is dependent on $A$, how should we calculate this probability?
Conditional probability given the intersection of two events, one independent and one dependent
2
probability
probability-theory
-
2It is a fact of life that we cannot. Even the (different) hypothesis that B is independent of A and that C is independent of A would not suffice. – 2012-09-15
-
0@did I couldn't understand your comment :( – 2012-09-15
-
0@SeyhmusGüngören Which part? – 2012-09-15
-
0@did Why can one not calculate a probability given some dependence/independence conditions? I hope my questions don't bore you. – 2012-09-15
-
0@SeyhmusGüngören Because the dependence structure is not fully specified at all! The fact that C and A are dependent is no information at all, hence we are left with the hypothesis that A and B are independent, which is by and large insufficient to deal with (A,B,C). – 2012-09-15
-
0@SeyhmusGüngören I don't see what did is addressing with his comment. Certainly you can write equalities that would enable you to calculate the probability on the left side using the probabilities on the right if they are known. Is he simply saying that in addition to the assumptions you have to know what P[A∩C|B] and P[C] equal? That would be sort of obvious. I think the formula I gave in my answer addresses the OP's question. – 2012-09-15
-
0@MichaelChernick I think he means that without knowing the full dependence/independence structure, we cannot express the given probability in terms of elementary events; I don't think he is talking about simplifying the equation. For example, in the OP we have $P(A|B\cap C)$ and in your simplified formulation $P(A\cap C|B)$; perhaps both are equally unknown. – 2012-09-15
-
1I really don't know what did was thinking. Maybe he will eventually tell us. But he has a penchant for just making snide remarks. – 2012-09-15
-
0It is sometimes useful to construct identities like this. It could happen that you have experimental estimates of P(A∩C|B) and P(C), but can only get an estimate of P(A|B∩C) through the identity. – 2012-09-15
-
1@MichaelChernick Stop your bickering. If you fail to understand what I explain (which happens to you a lot, lately), just ask (politely, but I know this is difficult for you). At the moment, I am rather surprised that somebody claiming to be a bio**statistician** can fall into such a simple trap: contrary to what you assert in your answer, nothing guarantees that B and C are independent. – 2012-09-15
-
0@did Could we say it is not computable unless we specify more of the dependency structure? – 2012-09-15
-
1@did The reason I don't understand some things you say is that you are often not clear. The OP wanted to know how the conditions A independent of B and A and C dependent affect the computation of the conditional probability. The answer would be that you would need to know P(A∩B∩C) and P(B∩C), since the conditions do not allow further simplification. Saying you can't calculate it does not really answer the question. – 2012-09-15
-
0@XingdongZuo Exactly. For example, if $B$ is independent of $(A,C)$, then, for any dependence or independence between $A$ and $C$, $P(A\mid B\cap C)=P(A\mid C)$. After that, it all depends on the details of the dependence between $A$ and $C$. – 2012-09-15
-
1@MichaelChernick Suuure... If only you could stop polluting the discussion. – 2012-09-15
-
0@XingdongZuo (Sorry for the interruption.) ...And, as you might know, $B$ independent of $(A,C)$ is a strictly stronger condition than $B$ independent of $A$ and $B$ independent of $C$. – 2012-09-15
-
0@SeyhmusGüngören The sample space must have _at least_ $4$ elements because if we assume that independent events $A$ and $B$ have probabilities such that $0 < P(A), P(B) < 1$ to avoid trivial special cases, then all $4$ disjoint events $AB, AB^c, A^cB, A^cB^c$ _must_ be non-empty, since all four events have nonzero probability: $$P(AB) = P(A)P(B),\quad P(AB^c) = P(A)P(B^c),\quad P(A^cB) = P(A^c)P(B),\quad P(A^cB^c) = P(A^c)P(B^c).$$ – 2012-09-16
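For reference, here is a short derivation of the claim in the comments above that $B$ being independent of the pair $(A,C)$ forces $P(A\mid B\cap C)=P(A\mid C)$: joint independence gives $P(A\cap B\cap C)=P(B)\,P(A\cap C)$ and $P(B\cap C)=P(B)\,P(C)$, hence
$$P(A\mid B\cap C)=\frac{P(A\cap B\cap C)}{P(B\cap C)}=\frac{P(B)\,P(A\cap C)}{P(B)\,P(C)}=\frac{P(A\cap C)}{P(C)}=P(A\mid C).$$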
3 Answers
3
Regardless of whether or not $B$ is independent of $C$, one way to calculate this probability should be: $$P(A|B\cap C)=P(A\cap B \cap C)/P(B\cap C)$$and, $$P(A\cap B \cap C)=P(A \cap B)+P(C)-P((A\cap B) \cup C)\\=P(A)P(B)+P(C)-P((A\cap B) \cup C)$$ It appears to me that no further simplification can be done given the information in the question.
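As a sanity check of this identity, here is a minimal sketch on a hypothetical four-point uniform space (the specific events $A$, $B$, $C$ are my own choice, not from the question), verifying the inclusion-exclusion step with exact fractions:

```python
from fractions import Fraction

# Hypothetical four-point uniform space; A, B chosen to be independent and
# A, C dependent, mirroring the assumptions in the question.
omega = {1, 2, 3, 4}
p = Fraction(1, 4)                      # probability of each single outcome
A, B, C = {1, 2}, {1, 3}, {2, 3, 4}

def P(event):
    return p * len(event)

assert P(A & B) == P(A) * P(B)          # A independent of B
assert P(A & C) != P(A) * P(C)          # A dependent on C

# Identity from the answer: P(AnBnC) = P(A)P(B) + P(C) - P((AnB) u C)
assert P(A & B & C) == P(A) * P(B) + P(C) - P((A & B) | C)

print(P(A & B & C) / P(B & C))          # P(A | B n C) on this space: 0
```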
0
$$P(A|B\cap C)=P(A\cap B\cap C)/P(B\cap C).$$ If $B$ is independent of $C$, this can be further broken down to $$P(A\cap B\cap C)/[P(B)P(C)]=P(A\cap C|B)/P(C).$$ Also, it is NOT equal to $P(A)$. There does not appear to be any further simplification without further assumptions.
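This answer's extra assumption that $B$ is independent of $C$ is not guaranteed by the question (see the comments below), but when it does hold, the chain of equalities checks out. Here is a minimal sketch on a hypothetical eight-point product space where that extra assumption holds (my own construction, not from the answer):

```python
from fractions import Fraction

# Hypothetical product space {1,2,3,4} x {0,1}, uniform.  A and C depend only
# on the first coordinate, B only on the second, so B is independent of C
# (and of A), while A and C are dependent.
omega = [(i, j) for i in (1, 2, 3, 4) for j in (0, 1)]
p = Fraction(1, len(omega))

A = {w for w in omega if w[0] in (1, 2)}
C = {w for w in omega if w[0] in (1, 2, 3)}
B = {w for w in omega if w[1] == 0}

def P(event):
    return p * len(event)

assert P(A & B) == P(A) * P(B)           # question's assumption: A independent of B
assert P(A & C) != P(A) * P(C)           # question's assumption: A, C dependent
assert P(B & C) == P(B) * P(C)           # this answer's extra assumption: B independent of C

lhs = P(A & B & C) / P(B & C)            # P(A | B n C)
mid = P(A & B & C) / (P(B) * P(C))       # uses P(B n C) = P(B)P(C)
rhs = (P(A & C & B) / P(B)) / P(C)       # P(A n C | B) / P(C)
assert lhs == mid == rhs
print(lhs)                               # 2/3 on this space
```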
-
0If B were dependent on C, and C is dependent on A (as given), then wouldn't this imply that A is dependent on B? And since B is independent of A, doesn't it follow that B is independent of C? – 2012-09-15
-
0@chris He didn't say that B is dependent on C. – 2012-09-15
-
0Agreed. My above comment is a proof by Reductio ad absurdum that B is independent of C. – 2012-09-15
-
0Yes, I see B is independent of C and the above reduction does work. Thanks. I will modify my answer. – 2012-09-15
-
0I think did is right here. I will revise again, sorry. – 2012-09-15
-
1*I see B is independent of C and the above reduction does work*... Absolutely not. @Chris Sorry but your comment does not prove that B is independent of C (the implication does not hold). Exercise: Find events A, B and C such that A is independent of B, A is not independent of C and B is not independent of C. Hint: This can be done on a probability space of size 4. – 2012-09-15
-
0This should have been obvious: for example, take Y and Z independent rvs U[0,1]. Let X be equal to Y$^2$. Y is independent of Z, and since X=Y$^2$, Z is independent of X. But clearly X and Y are dependent. – 2012-09-15
-
0@MichaelChernick I am not sure if did and you are talking about the same thing. He is talking about a sample space of size $4$; this means you will have $4$ events, say $A$, $B$, $C$ and $D$. – 2012-09-15
-
0@did how about sample space of size $3$? – 2012-09-15
-
0@SeyhmusGüngören I think he was suggesting one example. I was not trying to give his example but rather some example that is easy to explain. – 2012-09-15
-
0@SeyhmusGüngören A sample space of size 4 means there exist $2^4=16$ events. // *how about sample space of size 3?* Well, how about it? – 2012-09-15
-
0@did It is a nice space :) Sorry, I checked the internet for the size of a probability space and mistakenly read a size of $4$ as meaning $4$ events. Then I wondered what would happen if we had not $4$ but $3$ events. I am sorry I couldn't give such an example, but I wondered if there was something special about the number $4$. – 2012-09-15
-
1@SeyhmusGüngören This is just the smallest size for which I saw a straightforward example. Still smaller sizes might be possible; I did not check (but I think that determining the simplest possible (counter-)example is often a good way to **really** understand what is going on). – 2012-09-15
-
0@did Yes, I am neither able to see such examples as straightforwardly as you do, nor to easily understand your solutions. I only understand that they are so good. We just memorized mathematics in the bachelor's, and in the master's we only used it. In the PhD I understood that I have zero math. I am able to use my brain, but with abstract thinking I am zero. I must learn more and more... :( – 2012-09-15
-
0@SeyhmusGüngören Thanks for the kind words (but you might relax about your abilities, to me you seem to have a brain and to be willing to use it--and this is all one needs, ain't it...). – 2012-09-15
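One concrete solution to the exercise posed in the comments above (the specific events are my own choice, not from the thread): a minimal sketch of a four-point uniform space on which $A$ is independent of $B$ while $C$ is dependent on both.

```python
from fractions import Fraction

# Uniform four-point space answering the exercise: A independent of B,
# A dependent on C, and B dependent on C.
omega = {1, 2, 3, 4}
p = Fraction(1, 4)
A, B, C = {1, 2}, {1, 3}, {1}

def P(event):
    return p * len(event)

assert P(A & B) == P(A) * P(B)    # A independent of B
assert P(A & C) != P(A) * P(C)    # A dependent on C
assert P(B & C) != P(B) * P(C)    # B dependent on C
```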
0
When $A$ and $C$ are dependent on each other but $B$ is independent of the pair $(A,C)$ (a strictly stronger requirement than $B$ being independent of $A$ alone, as noted in the comments above), it simply turns out that
$$P(A\mid B\cap C)=P(A\mid C).$$
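As the comments on the question point out, the pairwise condition ($B$ independent of $A$ alone) is not enough for this simplification. A minimal counterexample sketch (events chosen by me, not from the thread) where $A$ is independent of $B$ and dependent on $C$, yet $P(A\mid B\cap C)\neq P(A\mid C)$:

```python
from fractions import Fraction

# Uniform four-point space: A independent of B, A dependent on C,
# but P(A | B n C) differs from P(A | C).
omega = {1, 2, 3, 4}
p = Fraction(1, 4)
A, B, C = {1, 2}, {1, 3}, {1, 2, 3}

def P(event):
    return p * len(event)

assert P(A & B) == P(A) * P(B)      # A independent of B
assert P(A & C) != P(A) * P(C)      # A dependent on C

print(P(A & B & C) / P(B & C))      # P(A | B n C) = 1/2
print(P(A & C) / P(C))              # P(A | C)     = 2/3
```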