
I am confronted with the following argument, which I think may not be right:

Let $(X_n)$ be a sequence of independent random variables s.t.

$P[X_n = 1] = 1 - P[X_n = 0] = \frac{1}{n}$

In order to use Borel–Cantelli, we recall that

$X_n \to X \quad \text{a.s. iff} \quad P[|X_n - X| > \epsilon \text{ i.o.}] = 0 \text{ for every } \epsilon > 0.$

But $P[|X_n - X| > \epsilon \text{ i.o.}] = 1$ by Borel–Cantelli II (with $X = 0$ and $0 < \epsilon < 1$, so that $[|X_n - X| > \epsilon] = [X_n = 1]$), as

$\sum_{n=1}^{\infty} P[X_n=1] = \sum_{n=1}^{\infty} \frac{1}{n} = \infty.$

I am not yet firm enough with this material to make a solid argument for why I think this may not be right. However, I am wondering whether the $X_n$ are truly independent the way they are defined, i.e. can we actually use Borel–Cantelli?

Secondly, is it not the case that $X_n$ converges pointwise to zero for every $\omega \in (0,1)$? Directly from the definition of convergence a.s., this would lead me to conclude that $X_n \to 0$ a.s., right? (I am probably assuming that the standard Lebesgue measure on $[0,1)$ was implicit in the original argument that I find myself confronted with.)
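For concreteness, here is a quick sketch of the realization I have in mind (my assumption, not stated in the original argument: $Y_n(\omega) = 1$ if $\omega \le 1/n$ and $Y_n(\omega) = 0$ otherwise, on $\Omega = (0,1)$ with Lebesgue measure); for any fixed $\omega$ the terms vanish once $n > 1/\omega$:

```python
# Sketch of the realization I have in mind (an assumption, not given in the
# original argument): Y_n(w) = 1 if w <= 1/n, else 0, on Omega = (0,1).

def Y(n, w):
    """Indicator of the interval (0, 1/n]."""
    return 1 if w <= 1.0 / n else 0

for w in (0.9, 0.5, 0.01):
    # For fixed w, Y_n(w) = 0 for all n > 1/w, so Y_n(w) -> 0 pointwise.
    print(f"w = {w}: first 20 terms = {[Y(n, w) for n in range(1, 21)]}")
```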

  • The theorem seems correct, but I'm having trouble seeing why it seems to you that $X_{n}\rightarrow 0$ pointwise? (2012-01-27)

1 Answer


Since the series $\sum\limits_n\mathrm P(X_n=1)$ diverges and the random variables $X_n$ are independent, the second Borel–Cantelli lemma tells you that $X_n=1$ infinitely often, that is, that the random set $ I=\{n\geqslant1\mid X_n=1\} $ is almost surely infinite. The series $\sum\limits_n\mathrm P(X_n=0)$ diverges as well, hence $X_n=0$ infinitely often, that is, the random set $\{n\geqslant1\mid X_n=0\}$ is almost surely infinite. In other words $I$ is almost surely infinite and co-infinite. In particular, almost surely the sequence $(X_n)$ takes both values $0$ and $1$ infinitely often, hence does not converge to $0$ (or to anything else).
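As an informal illustration (a simulation sketch, not part of the proof; the sample size $N$ and the seed are arbitrary choices of mine): with genuinely independent draws, ones keep turning up at arbitrarily late indices, and the expected number of ones among the first $N$ terms is the harmonic sum $H_N \approx \ln N + \gamma$.

```python
import math
import random

random.seed(0)  # arbitrary seed, for reproducibility of this sketch

N = 100_000
# Genuinely independent draws: X_n = 1 with probability 1/n, else 0.
ones = [n for n in range(1, N + 1) if random.random() < 1.0 / n]

print(f"indices n <= {N} with X_n = 1: {len(ones)} of them")
print(f"expected count H_N = ln N + gamma ~ {math.log(N) + 0.5772:.2f}")
print(f"latest such index so far: {ones[-1]}")  # ones occur arbitrarily late
```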

Regarding the question in your last paragraph, note that one does not need to assume that the probability space is $\Omega=(0,1)$ or $\Omega=[0,1)$ or anything else. In fact, as almost always in probability, the specification of the probability space $\Omega$ is irrelevant. In your case, for any independent sequence $(X_n)_{n\geqslant1}$ of Bernoulli random variables defined on whichever probability space you want and such that $\mathrm P(X_n=1)=1/n$ for every $n\geqslant1$, the event $[X_n\to0]$ has probability zero.

A word of caution: if, however, one insists on specifying a probability space and one chooses $\Omega=(0,1)$ with the Lebesgue measure, then the random variables $Y_n$ defined by $Y_n(\omega)=1$ if $\omega\leqslant1/n$ and $Y_n(\omega)=0$ otherwise, which you seem to have in mind as a realization of the sequence $(X_n)_{n\geqslant1}$, are in fact far from being independent. To wit, $[Y_2=0]=(1/2,1)$ and $[Y_3=1]=(0,1/3]$ hence $[Y_2=0,Y_3=1]=\emptyset$, hence $ \mathrm P(Y_2=0,Y_3=1)=0\ne1/6=\mathrm P(Y_2=0)\mathrm P(Y_3=1). $ Other, cleverer, definitions of the random variables $Y_n$ on $\Omega=(0,1)$ exist, which produce the correct joint distribution, but this one does not.
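To make the non-independence concrete, here is a small numerical check (a sketch using the same hypothetical $Y_n$ as above; the sample size and seed are mine): the events $[Y_2=0]$ and $[Y_3=1]$ are disjoint, so a Monte Carlo estimate of the joint probability comes out $0$, while the product of the marginals is $1/6$.

```python
import random

random.seed(1)  # arbitrary seed for this sketch

def Y(n, w):
    # The flawed realization: Y_n(w) = 1 if w <= 1/n, else 0.
    return 1 if w <= 1.0 / n else 0

samples = 1_000_000
joint = 0
for _ in range(samples):
    w = random.random()  # w ~ Uniform(0,1): Lebesgue measure on (0,1)
    if Y(2, w) == 0 and Y(3, w) == 1:
        joint += 1

# [Y_2 = 0] = (1/2, 1) and [Y_3 = 1] = (0, 1/3] are disjoint,
# so the estimate is exactly 0, while P(Y_2=0) * P(Y_3=1) = 1/6.
print(f"estimated P(Y_2 = 0, Y_3 = 1) = {joint / samples}")
print(f"P(Y_2 = 0) * P(Y_3 = 1)      = {(1/2) * (1/3):.4f}")
```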

  • OK, I see. So it seems my trouble came from the fact that I had the wrong realization of the sequence in mind. And if I understand your first paragraph correctly, it is indeed the case that the sequence does **not** converge to $0$ almost surely, right? (2012-01-27)