
The events $\{A_i, i \in \mathbb{N} \}$ are defined to be independent if $P(\cap_{k=1}^{n} A_{i_k}) = \prod_{k=1}^{n} P(A_{i_k})$ for every finite subset $\{A_{i_1}, \dots, A_{i_n}\}$ of $\{A_i, i \in \mathbb{N} \}$.
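On a finite sample space the definition can be checked mechanically. The following sketch (my own illustration, not from the thread; the two-coin-flip space and the helper `independent` are assumptions) verifies the product condition over every finite subcollection of a family of events:

```python
from itertools import combinations
from fractions import Fraction

# Hypothetical finite sample space: two fair coin flips.
omega = [(a, b) for a in "HT" for b in "HT"]
prob = {w: Fraction(1, 4) for w in omega}

def P(event):
    return sum(prob[w] for w in event)

# Events: "first flip is H" and "second flip is H".
A1 = {w for w in omega if w[0] == "H"}
A2 = {w for w in omega if w[1] == "H"}
events = [A1, A2]

def independent(events):
    # Check the product condition for every subcollection of size >= 2.
    for r in range(2, len(events) + 1):
        for subset in combinations(events, r):
            inter = set.intersection(*subset)
            prod = Fraction(1)
            for e in subset:
                prod *= P(e)
            if P(inter) != prod:
                return False
    return True

print(independent(events))  # True
```

Here $P(A_1 \cap A_2) = 1/4 = P(A_1)P(A_2)$, so the family passes every required check.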

  1. We know that $P(\cup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$ iff $\{A_i , i \in \mathbb{N}\}$ are disjoint, a condition that is independent of the probability measure and depends purely on the relation between the sets. I was wondering whether independence of $\{A_i , i \in \mathbb{N}\}$ can similarly be characterized or interpreted purely in terms of the relation between the sets, making it as independent of the probability measure as possible, if complete independence from the measure is impossible.
  2. Is the definition of $\{A_i, i \in \mathbb{N} \}$ being independent equivalent to the single condition $P(\cap_{i=1}^{\infty} A_{i}) = \prod_{i=1}^{\infty} P(A_{i})$? What is the purpose of requiring the condition for every finite subset instead?
  3. Is a generalization of independence from probability spaces to general measure spaces meaningful?

    The only interpretations of independence I know are: the measure can be exchanged with products/intersections on independent sets, and, intuitively, independent events occur independently of each other. Are there other interpretations, especially in the general measure-space setting?

Thanks and regards!


2 Answers


Not sure your point 2 was addressed, so let me state that defining independence as suggested would lead to a trivial notion, quite different from independence as one wants it.

To wit, any collection of sets $(A_i)_{i\ge1}$, finite or infinite, could be made part of a larger collection $(A_i)_{i\ge0}$ such that the condition stated in 2 holds: simply add $A_0=\emptyset$. One would be led to say that a sequence is independent while one of its subsequences is not.
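This trick is easy to see numerically. The sketch below (a concrete space of my own choosing, not from the answer) takes two events that are *not* independent and adjoins $A_0=\emptyset$; the single full-intersection condition then holds trivially because both sides are $0$:

```python
from fractions import Fraction

# Hypothetical 4-point uniform space, chosen so that A1 and A2 are dependent.
omega = {1, 2, 3, 4}
prob = {w: Fraction(1, 4) for w in omega}

def P(event):
    return sum(prob[w] for w in event)

A1 = {1, 2}
A2 = {1, 3, 4}   # P(A1 & A2) = 1/4, but P(A1) * P(A2) = 3/8: dependent
A0 = set()       # the empty event

print(P(A1 & A2) == P(A1) * P(A2))               # False: A1, A2 dependent
print(P(A0 & A1 & A2) == P(A0) * P(A1) * P(A2))  # True: both sides are 0
```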

So, the problem has nothing to do with infinite sequences: defining the independence of $A$, $B$ and $C$ by the single condition that $P(A\cap B\cap C)=P(A)P(B)P(C)$ (thus omitting the supplementary conditions $P(A\cap B)=P(A)P(B)$, $P(B\cap C)=P(B)P(C)$ and $P(A\cap C)=P(A)P(C)$) already leads to a notion too weak to model any kind of independence, since these supplementary conditions can fail.
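A concrete witness (my own construction, not from the answer): on a uniform 8-point space one can pick $A$, $B$, $C$ so that the triple product condition holds while $A$ and $C$ fail to be independent.

```python
from fractions import Fraction

# Hypothetical uniform space on {1, ..., 8}.
omega = set(range(1, 9))
prob = {w: Fraction(1, 8) for w in omega}

def P(event):
    return sum(prob[w] for w in event)

A = {1, 2, 3, 4}
B = {1, 2, 5, 6}
C = {1, 3, 4, 5}

print(P(A & B & C) == P(A) * P(B) * P(C))  # True : 1/8 == 1/8
print(P(A & C) == P(A) * P(C))             # False: 3/8 != 1/4
```

So the bare full-intersection condition is satisfied even though the pairwise condition for $A$ and $C$ is not.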


For the first question: counterexamples are easy to find. Independence is strongly tied to the probability measure. Any two sets $A,B$ with $A\cap B \neq \emptyset$ can be made independent (or non-independent) by choosing an appropriate measure $P$. If $A\cap B = \emptyset$, then $A,B$ are always non-independent (but see the edit below).
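To illustrate the measure-dependence, here is a sketch with a pair of sets of my own choosing (not from the answer): the same $A$, $B$ with $A\cap B \neq \emptyset$ are independent under the uniform measure and dependent under a skewed one.

```python
from fractions import Fraction

# Hypothetical 4-point space; A and B overlap in the point 2.
omega = [1, 2, 3, 4]
A = {1, 2}
B = {2, 3}

def P(prob, event):
    return sum(prob[w] for w in event)

uniform = {w: Fraction(1, 4) for w in omega}
skewed  = {1: Fraction(2, 5), 2: Fraction(1, 10),
           3: Fraction(2, 5), 4: Fraction(1, 10)}

print(P(uniform, A & B) == P(uniform, A) * P(uniform, B))
# True : 1/4 == 1/2 * 1/2
print(P(skewed, A & B) == P(skewed, A) * P(skewed, B))
# False: 1/10 != 1/2 * 1/2
```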

For the second question: it seems to me that the definitions are the same; the classical definition simply avoids the infinite product, which is messy to handle.

Edited: for non-intersecting $A,B$ one can choose $P$ with $P(A) = P(B) = 0$, but in my opinion such sets are not independent in the usual sense, since their measure is zero. Indeed, if the random variable takes a value in $A$ it can never take a value in $B$; this is non-trivial information, which makes the occurrences of $A$ and $B$ dependent.
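For completeness, a sketch of this degenerate case (the 3-point space is my own assumption): disjoint events of probability zero do satisfy the bare product formula, which is exactly why the formal definition clashes with the intuition above.

```python
from fractions import Fraction

# Hypothetical space where two disjoint events both carry zero mass.
omega = [1, 2, 3]
prob = {1: Fraction(0), 2: Fraction(0), 3: Fraction(1)}

def P(event):
    return sum(prob[w] for w in event)

A, B = {1}, {2}                  # disjoint, both of probability zero
print(A & B == set())            # True : the events are disjoint
print(P(A & B) == P(A) * P(B))   # True : 0 == 0 * 0
```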

About interpretation: I was asking the same question some time ago on another site. I would say that probability theory is measure theory + independence + conditioning (as for me, $P(\Omega)=1$ is just a rescaling, though a very important one). Independence in fact comes from the Cartesian product of sets: the coordinates become independent on the product space.

Imagine, e.g., a rectangle: for any $x$ the interval of admissible $y$ is the same. But on the circle the interval for $y$ changes with $x$, so $y$ "depends" on $x$, and the circle cannot be represented as a Cartesian product in $x,y$; independence comes from these facts. Conditioning comes from experiments and is a less trivial property than independence, in the sense that it appears only in probability theory.
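A discrete sketch of this product-space intuition (the two-dice example is my own, not from the answer): on the full Cartesian product the coordinates are independent, while restricting to a non-product region such as $\{i+j \le 7\}$ makes them dependent.

```python
from fractions import Fraction
from itertools import product

# Full product space: two fair dice, uniform on 36 outcomes.
full = list(product(range(1, 7), repeat=2))
# Non-product region: keep only outcomes with i + j <= 7 (21 outcomes).
restricted = [(i, j) for (i, j) in full if i + j <= 7]

def indep_coords(space):
    # Check P(x=i, y=j) == P(x=i) * P(y=j) for every cell, under the
    # uniform measure on `space`.
    n = Fraction(len(space))
    for i in range(1, 7):
        for j in range(1, 7):
            p_joint = Fraction(sum(1 for w in space if w == (i, j))) / n
            p_i = Fraction(sum(1 for w in space if w[0] == i)) / n
            p_j = Fraction(sum(1 for w in space if w[1] == j)) / n
            if p_joint != p_i * p_j:
                return False
    return True

print(indep_coords(full))        # True : a Cartesian product
print(indep_coords(restricted))  # False: not a Cartesian product
```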

  • @Gortaur Re your answer to point 2, it is false. As I explain in the post I referred you to, the problem is not in *messy infinite products* at all, but in the fact that to ask only that the probability of the full intersection is the product of the probabilities would make *any* pair of events a part of a larger family of independent events. Correct definitions notwithstanding, to say that $A$ and $B$ can be dependent while $A$, $B$ and $C$ are independent seems phony. 2011-04-18