I was trying to convince myself of that law, and I managed to do so on an intuitive level (thinking of the events as a probability tree): there are always $2^k$ possible outcomes, because each of the $k$ atomic events in the sample space can either happen or not happen.
However, when I tried to prove the law formally, I wasn't sure how to generalize it. What I have so far covers only the case of two independent events $A$ and $B$:
The probabilities of the four possible outcomes sum to $1$:
$$\begin{aligned}
& P(A) \cdot P(B) + P(A) \cdot (1-P(B)) + (1-P(A)) \cdot P(B) + (1-P(A)) \cdot (1-P(B)) \\
&= P(A) \cdot [P(B) + (1-P(B))] + (1-P(A)) \cdot [P(B) + (1-P(B))] \\
&= P(A) \cdot 1 + (1-P(A)) \cdot 1 \\
&= P(A) + 1 - P(A) \\
&= 1
\end{aligned}$$
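This is not a proof, but as a sanity check on the intuition, here is a minimal Python sketch (the function name `total_probability` and the sample probabilities are made up for illustration) that sums the product over all $2^k$ outcomes for $k$ independent events:

```python
from itertools import product

def total_probability(ps):
    """Sum P(outcome) over all 2^k outcomes: each event either
    happens (contributing a factor p) or doesn't (a factor 1 - p)."""
    total = 0.0
    for outcome in product(*[(p, 1 - p) for p in ps]):
        prob = 1.0
        for factor in outcome:
            prob *= factor
        total += prob
    return total

# Arbitrary probabilities for k = 3 independent events;
# the result is 1 up to floating-point rounding.
print(total_probability([0.2, 0.5, 0.9]))
```

Numerically this comes out to $1$ (up to rounding) for any choice of probabilities, which matches the intuition, but it doesn't tell me how to write the general proof.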
How do you generalize this so that the proof works for any number of events?