
I have the following question and could not find an answer in any other thread:

Let $A$, $B$ and $C$ be random vectors, and suppose that:

  1. $A$ is independent of $B$
  2. $(A, B)$ is independent of $C$

How can I show that this implies $A$ is independent of $C$?

It seems intuitive, but I am looking for a formal proof.

I know that from (1.) and (2.) it follows that:

  3. $f_{ABC}(a,b,c) = f_{(A,B),C}((a,b),c) = f_{AB}(a,b)\cdot f_{C}(c) = f_{A}(a)\cdot f_{B}(b)\cdot f_{C}(c)$

with $a, b, c$ being observations of $A, B, C$ respectively. Does this help at all?

I guess I want to arrive at an equation like this: $f_{AC}(a,c) = f_A(a) \cdot f_C(c)$

Any help/resource reference is greatly appreciated!

2 Answers


> I know that from (1.) and (2.) it follows that:
>
>   3. $f_{ABC}(a,b,c) = f_{(A,B),C}((a,b),c) = f_{AB}(a,b)\cdot f_{C}(c) = f_{A}(a)\cdot f_{B}(b)\cdot f_{C}(c)$
>
> with $a, b, c$ being observations of $A, B, C$ respectively. Does this help at all?

Certainly. Use the Law of Total Probability: $$\begin{align}f_{A,C}(a,c) ~&=~ \int_{B(\Omega)} f_{A,B,C}(a,b,c)\operatorname d b \\ f_A(a) ~&=~ \int_{B(\Omega)} f_{A,B}(a,b)\operatorname d b \end{align}$$

Now can you show that $f_{A,C}(a,c) = f_A(a)\cdot f_C(c)$ (for any $a, c$ in the supports of $A, C$ respectively)?

  • Ah! Thank you! That is a nice way of proving it. Using the formulas you derived from the Law of Total Probability, I can state that: $f_{A,C}(a,c) = \int_{B(\Omega)} f_{A,B,C}(a,b,c)\,db = f_C(c)\cdot\int_{B(\Omega)} f_{A,B}(a,b)\,db = f_C(c)\cdot f_A(a)$ (2017-01-21)
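As a quick numerical sanity check (a hypothetical sketch, not part of the original thread), one can estimate the probabilities empirically: when $A$, $B$, $C$ are jointly independent discrete variables, the empirical joint probability of $A$ and $C$ should factor into the product of the marginals, mirroring the marginalization step above.

```python
import random

# Hypothetical sanity check (not from the original thread): A, B, C are
# independent fair coin flips, so A is independent of B and (A, B) is
# independent of C.  We estimate P(A=1, C=1) and compare it with
# P(A=1) * P(C=1), mirroring the marginalization argument.
random.seed(0)
n = 200_000
samples = [(random.randint(0, 1), random.randint(0, 1), random.randint(0, 1))
           for _ in range(n)]

p_ac = sum(1 for a, b, c in samples if a == 1 and c == 1) / n
p_a = sum(1 for a, b, c in samples if a == 1) / n
p_c = sum(1 for a, b, c in samples if c == 1) / n

print(abs(p_ac - p_a * p_c) < 0.01)
```

The tolerance of 0.01 is loose relative to the sampling error at $n = 200{,}000$, so the factorization check passes comfortably.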

\begin{align*}P(A \in I_1,B \in I_2,C \in I_3) &= P( (A,B) \in I_1 \times I_2, C \in I_3)\\ & = P((A,B) \in I_1\times I_2)\times P(C \in I_3)\\ & = P(A\in I_1,B \in I_2) \times P(C \in I_3) \\ & = P(A \in I_1) \times P(B \in I_2) \times P(C \in I_3). \end{align*}
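The specialization discussed in the comments can be made explicit: since the chain above holds for arbitrary $I_1, I_2, I_3$, choosing $I_2 = B(\Omega)$ (the entire range of $B$, so that $P(B \in I_2) = 1$) collapses it to pairwise independence of $A$ and $C$:

```latex
\begin{align*}
P(A \in I_1,\, C \in I_3)
  &= P(A \in I_1,\, B \in B(\Omega),\, C \in I_3) \\
  &= P(A \in I_1)\, P(B \in B(\Omega))\, P(C \in I_3) \\
  &= P(A \in I_1)\, P(C \in I_3).
\end{align*}
```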

  • Thank you for the response! However, I don't quite see how this proves the statement mentioned above. Isn't this just rephrasing what I claimed to be true in (3.)? (2017-01-20)
  • The equality of the expressions on the RHS and LHS is what DEFINES independence of $A, B, C$. (2017-01-20)
  • Yes, I see that, but three-wise independence does not imply pairwise independence, or am I missing something here? (2017-01-20)
  • Perhaps you meant that pairwise independence does not imply three-wise independence. That is a correct statement; what you wrote is wrong. I recommend carefully writing down the definition of independence. Hope this helps. (2017-01-20)
  • Can you recommend any resources for further reading that show that three-wise independence does indeed imply pairwise independence? I was assuming that my statement is correct because of the following (very simple) counterexample using random variables: [http://www.engr.mun.ca/~ggeorge/MathGaz04.pdf](http://www.engr.mun.ca/~ggeorge/MathGaz04.pdf) (2017-01-20)
  • Your source is fine. In equation (1), take either $A$, $B$ or $C$ to be the entire sample space $S$. What do you get? I don't have a book at hand, but any standard textbook in probability (e.g. Ross) should be fine. (2017-01-20)
  • Trying to clarify: since we have independence for any choice of $I_1, I_2, I_3$, you can always take one of them to be the entire sample space. This is equivalent to taking either $A$, $B$ or $C$ in equation (1) in your source to be the entire sample space $S$. As for references, any textbook (e.g. Ross) is fine. Your source is okay too, but note that the events there are replaced by the events $A \in I_1$, etc., here. (2017-01-20)
  • Oh, OK, I think I got it now. So if we have three-wise independence for arbitrary sets $I_1, I_2, I_3$, we can conclude pairwise independence, since any one of the sets can be taken to be the entire sample space $S$. And in my source, three-wise independence does not imply pairwise independence because the independence there holds only for specific events $A$, $B$ and $C$. If I replace $I_2$ with $S$ on the RHS and LHS of your contribution, I get $P(A \in I_1, C \in I_3) = P(A \in I_1) \cdot P(C \in I_3)$. Great, thanks a lot for your explanation and patience! (2017-01-21)