
On a probability space, from Billingsley's Probability and Measure,

a family of classes of events is said to be independent if, whenever we choose one event from each class in the family, the chosen events are independent.

According to the above definition, I deduce the following definition:

An event $A$ is said to be independent of events $B_1, B_2,\dots , B_m$ if $A$ and $B_i$ are independent, $\forall i =1, \dots, m$.

However, I saw from someone's private note that

An event $A$ is said to be independent of events $B_1, B_2, \dots, B_m$ if for every subset $S$ of $\{1, 2, \dots, m\}$, $P[A \mid \cap_{i \in S} B_i] = P[A]$.

  1. Are the two different definitions equivalent?

    In other words, is the following true:

    $A$ and $B_i$ are independent, $\forall i = 1, \dots, m$, if and only if for every subset $S$ of $\{1, 2, \dots, m\}$, $P[A \mid \cap_{i \in S} B_i] = P[A]$.

  2. How should independence between an event and a set of events be defined correctly?
  3. Same questions when there are infinitely many events $B_i, i \in I$?
  4. Added: What is the definition of independence of an event and a set of events in the Lovász local lemma?

Thanks!

1 Answer


The two are not equivalent. (I suspect the first definition may have been meant for a more specific context.)

An easy counterexample: take variables $B_1, B_2$ that independently take the values $-1$ and $1$ with probability $\frac12$ each, and set $A = B_1 B_2$. Then $A$ is independent of $B_1$ and of $B_2$ separately, but is fully determined by the two of them together.
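This counterexample is small enough to check by brute force. Here is a sketch (not from the answer itself) that enumerates the four equally likely outcomes, interprets $A$ as the event $\{B_1 B_2 = 1\}$, and verifies that $A$ is independent of each $B_i$ separately while $P[A \mid B_1 \cap B_2] \ne P[A]$:

```python
from itertools import product
from fractions import Fraction

# The four equally likely outcomes (b1, b2) with b1, b2 in {-1, +1}.
outcomes = list(product([-1, 1], repeat=2))
p = Fraction(1, 4)  # probability of each outcome

def prob(event):
    """Probability of an event given as a predicate on outcomes."""
    return sum(p for o in outcomes if event(o))

A  = lambda o: o[0] * o[1] == 1   # event {B1 * B2 = 1}
B1 = lambda o: o[0] == 1          # event {B1 = 1}
B2 = lambda o: o[1] == 1          # event {B2 = 1}

# Pairwise independence: P(A ∩ Bi) = P(A) P(Bi) for each i.
assert prob(lambda o: A(o) and B1(o)) == prob(A) * prob(B1)
assert prob(lambda o: A(o) and B2(o)) == prob(A) * prob(B2)

# But conditioning on both: P(A | B1 ∩ B2) = 1, while P(A) = 1/2.
p_cond = prob(lambda o: A(o) and B1(o) and B2(o)) / prob(lambda o: B1(o) and B2(o))
assert p_cond == 1 and prob(A) == Fraction(1, 2)
```

The same enumeration works for the other choices of $\pm 1$ values for $B_1, B_2$, since the situation is symmetric.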

  • http://www.cse.buffalo.edu/~hungngo/classes/2011/Spring-694/lectures/lm.pdf is fairly explicit with respect to independence. – 2012-10-11