5

$\{A_i, i \in \mathbb{N} \}$ are defined to be independent if $P(\cap_{k=1}^{n} A_{i_k}) = \prod_{k=1}^{n} P(A_{i_k})$ for every finite subcollection $\{A_{i_1}, \dots, A_{i_n}\}$ of $\{A_i, i \in \mathbb{N} \}$.

  1. We know that $P(\cup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$ iff $\{A_i , i \in \mathbb{N}\}$ are disjoint, which is independent of the probability measure and depends purely on the relation between the sets. I was wondering whether it is possible to similarly characterize/interpret the independence of $\{A_i , i \in \mathbb{N}\}$ purely in terms of the relation between the sets, making it independent of the probability measure as far as possible, if doing so completely is impossible?
  2. Is the definition of $\{A_i, i \in \mathbb{N} \}$ being independent equivalent to requiring $P(\cap_{i=1}^{\infty} A_{i}) = \prod_{i=1}^{\infty} P(A_{i})$? What is the purpose of considering every finite subset instead?
  3. Is the generalization of independence from probability spaces to general measure spaces meaningful?

    The only interpretations of independence I know are: the measure can be exchanged with the product/intersection on independent sets, and, intuitively, independent events occur independently of each other. Are there other interpretations, especially in the general measure-space setting?

Thanks and regards!

  • 0
    Part 1. $P(\cup_{i=1}^{\infty} A_i)= \sum_{i=1}^{\infty} P(A_i)$ does not necessarily imply that $\{A_i , i \in \mathbb{N}\}$ are disjoint, not even in the two-event case, because the intersection can be nonempty but have zero probability. (2011-04-13)
  • 0
    Your *iff* statement in (1) is false. For a simple counterexample, consider abutting closed intervals on $[0,1]$ and Lebesgue measure. More "exotic" examples can also be constructed. (2011-04-13)
  • 0
    @cardinal: we are talking about the same thing. One can fix the measure and find "exotic" sets, or one can also fix any non-disjoint sets and simply define a probability measure under which the intersection has zero measure. (2011-04-13)
  • 0
    @GWu, there was just a severe (several-minute) delay in my comment getting posted due to my internet connection. Your first comment wasn't there when I initially submitted mine. :) (2011-04-13)
  • 0
    @GWu: Thanks! Then is it correct that measure and union/summation can be exchanged iff the intersection of any sub-collection of the class of sets has zero measure? (2011-04-13)
  • 0
    @Tim: I think you are right. You can check http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle $$\sum_{i=1}^n \mathbb{P}\left(A_i\right) -\sum_{i,j\,:\,1 \le i < j \le n}\mathbb{P}\left(A_i\cap A_j\right) \le \mathbb{P}\biggl(\bigcup_{i=1}^n A_i\biggr) \le \sum_{i=1}^n\mathbb{P}\left(A_i\right) $$ (2011-04-13)
  • 0
    Tim: Does any of the answers below satisfy your query? (2011-05-01)
  • 0
    @Didier: Yes, thanks! (2011-05-01)

2 Answers

2

Not sure your point 2 was addressed, so let me state that defining independence as suggested would lead to a trivial notion, quite different from independence as one wants it.

To wit, any collection of sets $(A_i)_{i\ge1}$, finite or infinite, could be made part of a larger collection $(A_i)_{i\ge0}$ such that the condition stated in 2 holds: simply add $A_0=\emptyset$, so that both sides of the equality are zero, whatever the other $A_i$ are. One would then be led to say that a sequence is independent while one of its subsequences is not.

So, the problem has nothing to do with infinite sequences: to define the independence of $A$, $B$ and $C$ by the only condition that $P(A\cap B\cap C)=P(A)P(B)P(C)$ (thus forgetting the supplementary conditions that $P(A\cap B)=P(A)P(B)$, $P(B\cap C)=P(B)P(C)$ and $P(A\cap C)=P(A)P(C)$) already leads to a notion too weak to model any kind of independence, since the supplementary conditions I just wrote can fail.
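
For concreteness, here is one minimal illustration of how the supplementary conditions can fail (the sample space is chosen just for convenience). Take $\Omega=\{1,\dots,8\}$ with the uniform measure and
$$A=\{1,2,3,4\},\qquad B=\{1,2,3,5\},\qquad C=\{1,6,7,8\}.$$
Then $P(A)=P(B)=P(C)=\tfrac12$ and $P(A\cap B\cap C)=P(\{1\})=\tfrac18=P(A)P(B)P(C)$, yet $P(A\cap B)=\tfrac38\neq\tfrac14$, so the triple product condition holds while pairwise independence fails.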

0

For the first question: you can easily find counterexamples. Independence is strongly connected with the probability measure. Any two sets $A,B$ with $A\cap B \neq \emptyset$ can be made independent (or non-independent) by choosing an appropriate measure $P$. If $A\cap B = \emptyset$ then $A,B$ are always non-independent.
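
For instance (a small sketch, with a finite space chosen just for illustration): take $\Omega=\{1,2,3,4\}$, $A=\{1,2\}$ and $B=\{2,3\}$. Under the uniform measure, $P(A\cap B)=P(\{2\})=\tfrac14=P(A)P(B)$, so $A$ and $B$ are independent; under the measure with $P(\{1\})=P(\{2\})=P(\{3\})=\tfrac13$ and $P(\{4\})=0$, however, $P(A\cap B)=\tfrac13\neq\tfrac23\cdot\tfrac23$, so they are not.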

For the second question: it seems to me that the definitions are the same; the classical definition simply avoids infinite products, which are messy to deal with.

Edited: for non-intersecting $A,B$ one can define $P$ with $P(A) = P(B) = 0$, but in my opinion these sets are not independent in the usual sense, since their measure is zero. Indeed, if the random variable takes a value in $A$, it can never take a value in $B$: this is non-trivial information, which makes the occurrences of $A$ and $B$ dependent.

About interpretation: I was also asking the same question some time ago on another site. I would say that probability theory is measure theory + independence + conditioning (as for me, $P(\Omega)=1$ is just a rescaling, though a very important one). Independence in fact comes from the Cartesian product of sets, whose factors become independent under the product measure.

Imagine, e.g., a rectangle: for any $x$, the interval of admissible $y$ is the same. But on the circle, for any $x$ the interval of admissible $y$ changes, so $y$ "depends" on $x$, and the circle cannot be presented as a Cartesian product in $x,y$. Independence comes from this fact. Conditioning comes from experiments and is a more non-trivial property than independence (in the sense that it appears only in probability theory).
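
To make the product picture concrete (a sketch, with uniform distributions chosen just for illustration): if $(X,Y)$ is uniform on the rectangle $[0,a]\times[0,b]$, the density $\frac{1}{ab}\mathbf 1_{[0,a]}(x)\,\mathbf 1_{[0,b]}(y)$ factorizes, so $P(X\in A,\,Y\in B)=P(X\in A)\,P(Y\in B)$ for all Borel sets $A,B$, and the coordinates are independent. If instead $(X,Y)$ is uniform on the unit disc $\{x^2+y^2\le 1\}$ (the region bounded by the circle), the density $\frac1\pi\mathbf 1_{\{x^2+y^2\le1\}}$ does not factor: for example, the events $\{X>\tfrac{\sqrt3}{2}\}$ and $\{Y>\tfrac{\sqrt3}{2}\}$ each have positive probability but are disjoint on the disc (since then $x^2+y^2>\tfrac32>1$), so their intersection has probability $0\neq P(X>\tfrac{\sqrt3}{2})\,P(Y>\tfrac{\sqrt3}{2})$.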

  • 0
    Thanks! (1) Is the generalization of independence from probability spaces to general measure spaces meaningful? (2) The only interpretations of independence I know are: the measure can be exchanged with the product/intersection on independent sets, and, intuitively, independent events occur independently of each other. Are there other interpretations, especially in the general measure-space setting? (2011-04-13)
  • 0
    @Gortaur, counterexample to your statement on non-independence: suppose $A \neq \emptyset$ but $\mathbb{P}(A) = 0$. (2011-04-13)
  • 0
    @cardinal. Then what? (2011-04-13)
  • 0
    @Gortaur: Did you get it? (2011-04-13)
  • 0
    @cardinal: no, but I provided another example. (2011-04-13)
  • 0
    @Tim - I commented on your comment. (2011-04-13)
  • 1
    @Gortaur, here's an example. Consider the space $([0,1], \mathcal{B}, \mathcal{L})$ and take $A = \mathbb{Q} \cap [0,1]$ and $B = [0,1] \setminus A$. Then $A \cap B = \emptyset$, but $A$ is independent of $B$. This "feature" of the definition of independence that you appear to consider pathological actually has quite a deep consequence in the form of the Kolmogorov 0-1 law. (2011-04-13)
  • 0
    Ok, then thank you for this example. (2011-04-14)
  • 0
    @Gortaur: The first paragraph of the edited part of your answer is misleading: any event of zero probability is independent of any other event and even of itself. This is the usual definition--and in fact the only one that makes sense (and I fail to understand the last sentence of this paragraph of your post). (2011-04-18)
  • 0
    @Gortaur: As regards your answer to point 2, see my post. (2011-04-18)
  • 0
    @Didier Piau. I didn't understand your last comment. With regard to your first comment: I just mean that independence as $P(A\cap B) = P(A)P(B)$ is a criterion that arose from empirical ideas. The main feature of independence is that if $A$ occurs, we gain no information about $B$. On the other hand, if $A\cap B = \emptyset$, then we know *for sure* that if $A$ occurs then $B$ does not occur. (2011-04-18)
  • 0
    @Gortaur In the first paragraph of the edited part, you state that two events are not independent because their common probability is zero. This makes no sense. I recalled in my comment that in fact *any* event of probability zero (or one, for that matter) is independent of any other event. And the only case when two disjoint events are independent is when at least one of them has zero probability. The verbose definition of independence you seem to rely on is dangerous insofar as one does not know what it means for an event to occur when one knows that another event of probability zero occurs. (2011-04-18)
  • 0
    @Gortaur Re your answer to point 2, it is false. As I explain in the post I referred you to, the problem is not in *messy infinite products* at all, but in the fact that to ask only that the probability of the full intersection is the product of the probabilities would make *any* pair of events a part of a larger family of independent events. Correct definitions notwithstanding, to say that $A$ and $B$ can be dependent while $A$, $B$ and $C$ are independent seems phony. (2011-04-18)