
I was trying to convince myself of this law and managed to do so on an intuitive level (thinking of the events as a probability tree): there are always $2^k$ possible outcomes, because each of the $k$ atomic events in the sample space can either happen or not happen.

However, when I tried to prove the law formally, I wasn't sure how to generalize it. What I have so far only covers the case of two events $A$ and $B$:

Summing the probabilities of all possible outcomes (assuming $A$ and $B$ are independent, so that, e.g., $P(A \cap B) = P(A) \cdot P(B)$):

$$P(A) \cdot P(B) + P(A) \cdot (1-P(B)) + (1-P(A)) \cdot P(B) + (1-P(A)) \cdot (1-P(B))$$ $$= P(A) \cdot [P(B) + (1-P(B))] + (1-P(A)) \cdot [P(B) + (1-P(B))]$$ $$= P(A) \cdot 1 + (1-P(A)) \cdot 1 $$$$= P(A) + 1 - P(A)$$ $$= 1$$
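As a quick numerical sanity check (a sketch with arbitrary example probabilities, not part of the proof), the four-term expansion above sums to $1$ for any values of $P(A)$ and $P(B)$:

```python
# Numerical check: the four-term expansion sums to 1
# for arbitrary values of P(A) and P(B).
def two_event_total(p_a, p_b):
    return (p_a * p_b                      # P(A) * P(B)
            + p_a * (1 - p_b)              # P(A) * (1 - P(B))
            + (1 - p_a) * p_b              # (1 - P(A)) * P(B)
            + (1 - p_a) * (1 - p_b))       # (1 - P(A)) * (1 - P(B))

for p_a, p_b in [(0.3, 0.7), (0.5, 0.5), (0.01, 0.99)]:
    assert abs(two_event_total(p_a, p_b) - 1.0) < 1e-12
```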

How do you generalize this so that the proof works for any number of events?

1 Answer


You're conflating two different results, both of which refer to a sample space $S$:

Law of total probability

If events $B_1,B_2,B_3,...$ form a countable partition of $S$, then for any event $A\subseteq S$, $$A = \bigcup_{i} (A\cap B_{i}) $$ hence $$P(A) = \sum_{i} P(A\cap B_{i}). $$
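To make this concrete, here is a toy numerical illustration (a sketch with a made-up sample space, a fair die) checking that $P(A) = \sum_i P(A \cap B_i)$ for a partition:

```python
from fractions import Fraction

# Toy sample space: the six faces of a fair die, each with probability 1/6.
prob = {s: Fraction(1, 6) for s in range(1, 7)}

def P(event):
    """Probability of an event (a set of outcomes)."""
    return sum(prob[s] for s in event)

# A partition of S into three blocks.
B = [{1, 2}, {3, 4}, {5, 6}]
A = {2, 3, 5}  # an arbitrary event

# Law of total probability: P(A) = sum_i P(A ∩ B_i).
assert P(A) == sum(P(A & Bi) for Bi in B)
```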

Minterm expansion theorem

For any events $B_1,B_2,\dots,B_n \subseteq S$ (no partition assumption is needed here), $$ S=\bigcup_{i_1i_2\dots i_n\in \{0,1\}^n} (B_1^{i_1}\cap B_2^{i_2}\cap \dots\cap B_n^{i_n}), $$ where $B^1$ denotes $B$, $B^0$ denotes the complement $\overline{B}$, and the $2^n$ "minterms" in the union are pairwise disjoint.

Hence $$1= P(S) = \sum_{i_1i_2...i_n\in \{0,1\}^n} P(B_1^{i_1}\cap B_2^{i_2}\cap ...\cap B_n^{i_n}).$$
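The identity can be checked numerically. Below is a sketch (again using a fair die as a made-up sample space) that sums the probabilities of all $2^n$ minterms of arbitrary, non-disjoint events and verifies the total is $1$:

```python
from fractions import Fraction
from itertools import product

# Toy sample space: the six faces of a fair die, each with probability 1/6.
S = set(range(1, 7))
prob = {s: Fraction(1, 6) for s in S}

def P(event):
    return sum(prob[s] for s in event)

# Arbitrary events B_1, ..., B_n -- deliberately NOT a partition.
B = [{1, 2, 3}, {2, 4, 6}, {3, 4, 5}]

def minterm(bits):
    """Intersection of B_j (bit = 1) or its complement (bit = 0)."""
    result = set(S)
    for Bj, bit in zip(B, bits):
        result &= Bj if bit else (S - Bj)
    return result

# The 2^n minterms are disjoint and cover S, so their probabilities sum to 1.
total = sum(P(minterm(bits)) for bits in product([0, 1], repeat=len(B)))
assert total == 1
```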

NB:

Combining the above: since the $2^n$ minterms of any events $B_1,B_2,\dots,B_n \subseteq S$ are pairwise disjoint and cover $S$ (i.e., the nonempty ones form a finite partition of $S$), for any event $A\subseteq S$ we have $$A = \bigcup_{i_1i_2\dots i_n\in \{0,1\}^n} (A\cap B_1^{i_1}\cap B_2^{i_2}\cap \dots\cap B_n^{i_n}), $$ hence $$P(A) = \sum_{i_1i_2\dots i_n\in \{0,1\}^n} P(A\cap B_1^{i_1}\cap B_2^{i_2}\cap \dots\cap B_n^{i_n}). $$
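This combined decomposition can also be spot-checked numerically (a sketch with the same made-up fair-die sample space; $A$ and the $B_j$ are arbitrary):

```python
from fractions import Fraction
from itertools import product

# Toy sample space: the six faces of a fair die, each with probability 1/6.
S = set(range(1, 7))
prob = {s: Fraction(1, 6) for s in S}

def P(event):
    return sum(prob[s] for s in event)

B = [{1, 2, 3}, {2, 4, 6}]  # arbitrary events, not a partition
A = {1, 4, 5}               # arbitrary event

def minterm(bits):
    """Intersection of B_j (bit = 1) or its complement (bit = 0)."""
    result = set(S)
    for Bj, bit in zip(B, bits):
        result &= Bj if bit else (S - Bj)
    return result

# P(A) decomposes over the 2^n minterms of B_1, ..., B_n.
assert P(A) == sum(P(A & minterm(bits))
                   for bits in product([0, 1], repeat=len(B)))
```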