
Suppose events $A_1, \dots, A_n$ are fully independent, i.e., $P(A_1 \cap \dots \cap A_k) = P(A_1)\cdots P(A_k)$ for all $k$ between $2$ and $n$. Does this mean that the complementary events are also fully independent, i.e., $P(A_1^c \cap \dots \cap A_k^c) = P(A_1^c)\cdots P(A_k^c)$ for all $k$?

I know this holds if $k = 2$, but I want to know in general.

I've tried to prove it by induction but it looks like hard work...

  • @blackcat: As promised, here is a proof. May edit to add induction argument. Or else you can post an induction argument. (2011-10-17)

2 Answers


We show that under the non-standard definition of fully independent events given in the post, the desired result is not true. We then give a standard definition of fully independent events, and show that under this definition the desired result is true.

A counterexample: We toss a fair coin. Consider the events $A_1$: the coin rolls around forever (probability $0$); $A_2$: we get a head (probability $1/2$); and $A_3$: we get a tail (probability $1/2$). It is easy to verify that under the definition of fully independent given in the post, the sequence $A_1, A_2, A_3$ is fully independent. But $A_2^c$ and $A_3^c$ are not independent, for $P(A_2^c\cap A_3^c)=0$, but $P(A_2^c)P(A_3^c)=1/4$. It is also easy to verify that the sequence $A_1^c, A_2^c, A_3^c$ is not fully independent.
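
For the skeptical reader, the counterexample can be checked mechanically. Here is a quick sketch (not part of the original answer) that models the events as subsets of the two-point sample space $\{H, T\}$, with $A_1=\varnothing$; the helper `prob` is ours.

```python
# Model the fair-coin counterexample: events are subsets of {"H", "T"}.
omega = {"H", "T"}

def prob(event):
    # Uniform measure on the two-point sample space of a fair coin.
    return len(event) / len(omega)

A1 = set()        # "the coin rolls around forever": probability 0
A2 = {"H"}        # heads
A3 = {"T"}        # tails
events = [A1, A2, A3]

# The post's definition only constrains the "prefixes" A1, ..., Ak.
for k in (2, 3):
    prefix = events[:k]
    lhs = prob(set.intersection(omega, *prefix))
    rhs = 1.0
    for e in prefix:
        rhs *= prob(e)
    print(k, lhs, rhs, lhs == rhs)          # both checks pass

# Yet the complements A2^c and A3^c are not independent:
c2, c3 = omega - A2, omega - A3
print(prob(c2 & c3), prob(c2) * prob(c3))   # 0.0 vs 0.25
```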

A proof: We first give a standard definition of full independence. The events $A_1,A_2,\dots, A_n$ are fully independent if, whenever $B_1, B_2, \dots, B_k$ are distinct $A_i$, $P(B_1\cap B_2 \cap \cdots \cap B_k)=P(B_1)P(B_2)\cdots P(B_k).$ We show that if $A_1, A_2, \dots, A_n$ are fully independent, then so are $A_1^c,A_2^c,\dots, A_n^c$.
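
Stated as a predicate, the standard definition quantifies over every sub-family of distinct events, not just the prefixes $A_1,\dots,A_k$. Here is a small sketch of such a check (the helper names are ours, and a uniform measure on a finite sample space is assumed for simplicity):

```python
# Check the standard definition: every sub-family must factorize.
from itertools import combinations

def is_fully_independent(events, omega):
    p = lambda e: len(e) / len(omega)   # uniform measure, for simplicity
    for k in range(2, len(events) + 1):
        for sub in combinations(events, k):
            inter = set.intersection(*map(set, sub))
            rhs = 1.0
            for e in sub:
                rhs *= p(e)
            if abs(p(inter) - rhs) > 1e-12:
                return False
    return True
```

Applied to the three events of the counterexample above, this returns `False`: the pair $\{A_2, A_3\}$ already fails to factorize.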

There is a straightforward proof by induction. However, we prefer to avoid formal induction, in order to get a proof that has more symmetry. We need to prove that if $B_1, B_2, \dots, B_k$ are distinct $A_i$, then $P(B_1^c\cap B_2^c \cap \cdots \cap B_k^c)=P(B_1^c)P(B_2^c)\cdots P(B_k^c).$

To save space, let $b_i=P(B_i)$. So we want to prove that $P(B_1^c\cap B_2^c \cap \cdots \cap B_k^c)=(1-b_1)(1-b_2)\cdots (1-b_k).$

Let $p$ be the probability on the left. Then $1-p=P(B_1\cup B_2 \cup \cdots \cup B_k)$. Thus, by the Principle of Inclusion/Exclusion together with full independence (which gives $P(B_{i_1}\cap\cdots\cap B_{i_\ell})=b_{i_1}\cdots b_{i_\ell}$),
$$1-p=\sum_{i=1}^k b_i -\sum_{1 \le i<j\le k} b_ib_j +\sum_{1 \le i<j<l\le k} b_ib_jb_l -\cdots +(-1)^{k-1}b_1b_2\cdots b_k,$$
and therefore
$$p=1 -\sum_{i=1}^k b_i +\sum_{1 \le i<j\le k} b_ib_j -\cdots +(-1)^{k}b_1b_2\cdots b_k.$$
The right-hand side is just the expansion of $(1-b_1)(1-b_2)\cdots (1-b_k)$: each factor contributes either a $1$ or a $-b_i$, so choosing $-b_i$ exactly for the indices $i$ in a subset $I$ produces the term $(-1)^{|I|}\prod_{i\in I}b_i$. This completes the proof.
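
The last step, that the alternating inclusion/exclusion sum collapses to the product, can also be confirmed numerically. Here is a sketch summing $(-1)^{|I|}\prod_{i\in I} b_i$ over all subsets $I$ (the values of $b_i$ are arbitrary illustrations):

```python
# Check that the signed inclusion/exclusion sum equals prod(1 - b_i).
from itertools import combinations
from math import prod

b = [0.2, 0.5, 0.7, 0.9]            # arbitrary P(B_i)
k = len(b)

incl_excl = sum((-1) ** len(I) * prod(b[i] for i in I)
                for r in range(k + 1)
                for I in combinations(range(k), r))
direct = prod(1 - bi for bi in b)
print(incl_excl, direct)            # both print 0.012
```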

  • @André Nicolas: How can you say, in the last sentence, that the RHS is just that product? How does one prove it? (2014-06-24)

As noted by others, what you write is NOT the definition of the independence of $(A_1,A_2,\ldots,A_n)$. You should require that $\mathrm P(A_{k_1}\cap A_{k_2}\cap\cdots \cap A_{k_i})=\mathrm P(A_{k_1})\mathrm P(A_{k_2})\cdots \mathrm P(A_{k_i})$ for every choice of pairwise distinct indices $k_j$ in $\{1,2,\ldots,n\}$.

This point set aside, let me mention that a strategy which might help avoid some tedium in such a context is to translate everything into the language of random variables. I know, random variables are supposed to be always more complicated than events, but in fact the opposite holds quite often (did somebody just say linearity?), and the present question is a good example of the phenomenon.

We first make the, seemingly odd, general remark that, for every event $A$, $ \mathrm P(A)=\int_\Omega\mathbf 1_A\mathrm dP=\mathrm E(\mathbf 1_A), $ where $\mathbf 1_A$ denotes the indicator function of $A$ defined by $\mathbf 1_A(\omega)=1$ if $\omega\in A$ and $\mathbf 1_A(\omega)=0$ if $\omega\in\Omega\setminus A$.
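
The identity $P(A)=\mathrm E(\mathbf 1_A)$ is easy to see empirically: the sample mean of an indicator converges to the probability. A tiny illustration (the event here, a uniform draw falling below $0.3$, is our example, not from the answer):

```python
# Monte Carlo illustration of P(A) = E(1_A) for A = {U < 0.3}.
import random

random.seed(0)
n = 100_000
indicator = [1 if random.random() < 0.3 else 0 for _ in range(n)]
print(sum(indicator) / n)   # approximately 0.3
```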

Turning to the question, let us choose $k$ events from the $n$ events $A_1$, $A_2$, ..., $A_n$, all different, rename them as $B_1$, $B_2$, ..., $B_k$, and introduce $B=\bigcap\limits_{i=1}^k(B_i)^c$. One knows that the indicator function of a complement is $1$ minus the original indicator function and that the indicator function of an intersection is the product of the indicator functions, hence $ \mathbf 1_B=\prod\limits_{i=1}^k(1-\mathbf 1_{B_i})=Q_k(\mathbf 1_{B_1},\mathbf 1_{B_2},\ldots,\mathbf 1_{B_k}), $ where $Q_k$ denotes the polynomial $ Q_k(x_1,x_2,\ldots,x_k)=\prod\limits_{i=1}^k(1-x_i). $

Like every polynomial, $Q_k(x_1,x_2,\ldots,x_k)$ may be expanded into a sum of monomials in the unknowns $x_1$, $x_2$, ..., $x_k$. Since the (partial) degree of $Q_k$ in each $x_i$ is $1$, the expansion of $Q_k$ involves only monomials of the form $x_{i_1}x_{i_2}\cdots x_{i_\ell}$ for some distinct indices $i_j$. In other words, $ Q_k(x_1,x_2,\ldots,x_k)=\sum_Iq_I\prod_{i\in I}x_i, $ where the sum runs over the $2^k$ subsets $I$ of $\{1,2,\ldots,k\}$, for some coefficients $q_I$ whose values will not be relevant. (The interested reader might note however that $q_\varnothing=1$ and $q_{\{1,2,\ldots,k\}}=(-1)^k$, and the motivated one might show that $q_I=(-1)^{|I|}$ for every $I\subseteq\{1,2,\ldots,k\}$.)
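
For concreteness, the expansion of $Q_k$ and the claim $q_I=(-1)^{|I|}$ can be verified symbolically, for instance with sympy. This is a sketch for $k=4$, not part of the original argument:

```python
# Expand Q_k symbolically and check every coefficient q_I = (-1)^|I|.
from itertools import combinations
import sympy as sp

k = 4
x = sp.symbols(f"x0:{k}")                       # x0, x1, x2, x3
Qk = sp.expand(sp.prod(sp.Integer(1) - xi for xi in x))
P = sp.Poly(Qk, *x)

for r in range(k + 1):
    for I in combinations(range(k), r):
        mono = sp.prod([x[i] for i in I]) if I else sp.Integer(1)
        assert P.coeff_monomial(mono) == (-1) ** len(I)
print(Qk)   # 1 - x0 - x1 - ... + x0*x1 + ... + x0*x1*x2*x3
```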

We stress that this relation holds between polynomials, hence every choice of the variables $x_i$ yields an equality, whether these are numbers or functions. In particular, evaluating both sides at the functions $\mathbf 1_{B_i}$ yields $ \mathbf 1_B=\sum\limits_Iq_I\prod\limits_{i\in I}\mathbf 1_{B_i}. $ For every $I$, note that $ \prod\limits_{i\in I}\mathbf 1_{B_i}=\mathbf 1_{B_I},\quad \mbox{where}\ B_I=\bigcap\limits_{i\in I}B_i, $ and that the independence of the events $(B_i)_{i\in I}$ yields $ \mathrm E(\mathbf 1_{B_I})=\mathrm P(B_I)=\prod\limits_{i\in I}\mathrm P(B_i). $

Multiplying by $q_I$ and summing over every $I$ yields $ \mathrm P(B)=\mathrm E(\mathbf 1_B)=\sum\limits_Iq_I\mathrm P(B_I)=\sum\limits_Iq_I\prod\limits_{i\in I}\mathrm P(B_i)=Q_k(\mathrm P(B_1),\mathrm P(B_2),\ldots,\mathrm P(B_k)), $ where the last equality stems from the very definition of $Q_k$ evaluated at the real numbers $\mathrm P(B_i)$. But one knows the value of $Q_k$ at every point, in particular at $(\mathrm P(B_1),\mathrm P(B_2),\ldots,\mathrm P(B_k))$, which gives $ \mathrm P(B)=\prod\limits_{i=1}^k(1-\mathrm P(B_i))=\prod\limits_{i=1}^k\mathrm P((B_i)^c), $ and the proof is over. To conclude, note once again that we used the polynomial $Q_k$ twice, once for functions and once for real numbers.
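
Finally, the whole chain of equalities can be sanity-checked by simulation: draw independent indicator columns and compare the empirical mean of $\prod_i(1-\mathbf 1_{B_i})$ with $\prod_i(1-\mathrm P(B_i))$. A Monte Carlo sketch, with the probabilities chosen arbitrarily:

```python
# Simulate independent events B_i as Bernoulli indicator columns and
# check E[prod (1 - 1_{B_i})] = prod (1 - P(B_i)).
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.2, 0.5, 0.7])             # P(B_i)
n = 200_000

ind = rng.random((n, p.size)) < p          # rows of indicators 1_{B_i}
lhs = np.prod(1 - ind, axis=1).mean()      # E[1_B], B = intersection of B_i^c
rhs = np.prod(1 - p)                       # prod (1 - P(B_i))
print(lhs, rhs)                            # both approximately 0.12
```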

  • Thanks for this proof. It's interesting and quite different from what I was thinking. (2011-10-17)