
Suppose a sequence $U_n$ of random variables satisfies the following conditions: For each $n$,

$P(U_n = 1) = 1/n$ and $P(U_n = 0) = 1 - 1/n.$

Can someone tell me whether this converges almost surely to $0$? I know it converges to $0$ in $L_1$. I would have said it converges to $0$ a.s. as well, because $P(U_n = 0) = 1 - 1/n \to 1$, which seemed to me consistent with the definition of a.s. convergence, but apparently, according to an exercise, I'm wrong.

Can someone tell me what is correct?

  • Consider the functions $\chi_{[0,1]}$, $\chi_{[0,1/2]}$, $\chi_{[1/2,1]}$, $\chi_{[0,1/3]}$, $\chi_{[1/3,2/3]}$, $\ldots$. This sequence converges to $0$ in $L_1([0,1])$, but it converges pointwise nowhere. (The sequence can be thought of as a line segment of height $1$ and width tending to $0$ that forever slides across the entire interval $[0,1]$.) A sequence similar to this can be constructed satisfying your conditions (note that $\sum 1/n$ diverges here).

2 Answers


Let $\Omega=[0,1]$ endowed with the Borel $\sigma$-algebra and the uniform distribution. Define inductively two sequences $a_n$ and $b_n$ by $a_1=0$, $b_1=1$, and $a_{n+1}=b_n$, $b_{n+1}=a_{n+1}+\frac{1}{n+1} \bmod 1$ (so the arc from $a_n$ to $b_n$, taken mod $1$, has length $1/n$). For each $n$: if $a_n<b_n$, let $U_n(\omega)=1$ if $\omega\in[a_n,b_n]$ and $0$ otherwise; if $a_n>b_n$, let $U_n(\omega)=1$ if $\omega\geq a_n$ or $\omega\leq b_n$, and $0$ otherwise. You can check that this satisfies your assumptions, but for every $\omega$, $U_n(\omega)=1$ for infinitely many $n$, so $U_n(\omega)\not\to 0$ for any $\omega$. The latter follows from the divergence of the harmonic series: the intervals sweep around $[0,1]$ again and again.
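Here is a minimal numerical sketch of this construction (assuming $\Omega=[0,1)$ with Lebesgue measure; the names `intervals`, `U` and the sample point `omega` are illustrative choices, not part of the answer). It builds the sliding intervals and records the indices $n$ at which a fixed $\omega$ is covered:

```python
def intervals(n_max):
    """Yield (a_n, b_n) for n = 1..n_max; the arc from a_n to b_n
    (taken mod 1) has length 1/n."""
    a, b = 0.0, 1.0                  # U_1 = 1 on all of [0, 1)
    yield a, b
    for n in range(2, n_max + 1):
        a = b
        b = (a + 1.0 / n) % 1.0      # next arc of length 1/n, wrapping mod 1
        yield a, b

def U(omega, a, b):
    """Indicator of the (possibly wrapped) arc from a to b."""
    if a <= b:
        return 1 if a <= omega <= b else 0
    return 1 if (omega >= a or omega <= b) else 0   # wrapped case a > b

omega = 0.3                          # any fixed sample point
hits = [n for n, (a, b) in enumerate(intervals(10_000), start=1)
        if U(omega, a, b)]
print(len(hits), hits[:10])          # omega keeps being hit; the hits never stop
```

Because $\sum 1/n$ diverges, the total length swept is infinite, so the arcs pass over every point of $[0,1)$ infinitely often.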

  • @lezubulon A random variable is a measurable function $f:\Omega\to\mathbb{R}$ defined on some probability space $(\Omega,\Sigma,\mu)$. You cannot discuss almost sure convergence without actually looking at this probability space.

This is a typical example showing that almost-sure convergence is a strong property: this sequence does not converge a.s., even though it converges to zero in probability.

To get an informal insight, imagine you have many (say, 1000) realizations of this process, and ask yourself how many of these realizations will converge to $0$. Recall that each fixed realization is a plain numeric sequence, so we are now speaking of ordinary (non-random) convergence. But our sequences consist of $0$'s and $1$'s, so a sequence converges to zero only if it contains finitely many ones, i.e., only if there exists some $n_0$ such that $x[n]=0$ for all $n>n_0$. Intuition says it is not very probable that this happens for the majority of our 1000 realizations; on the contrary, it is extremely improbable that it ever happens, as the simulation below suggests.
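As a quick Monte-Carlo sanity check of that intuition (this assumes the $U_n$ are independent, the same assumption behind the product computation below; the horizon `N`, cutoff `k`, and seed are arbitrary illustrative choices), we can ask how often the ones have already stopped by some index $k$:

```python
import random

random.seed(0)
N, k, reps = 100_000, 100, 1000

# A realization converges to 0 only if its ones stop at some finite index.
# Count how many of `reps` realizations produce no ones after index k,
# watching up to index N.  (`all` short-circuits at the first one found.)
stopped = sum(
    all(random.random() >= 1.0 / n for n in range(k + 1, N + 1))
    for _ in range(reps)
)
print(f"{stopped}/{reps} realizations had no ones after n = {k}")
# The exact probability of "no ones after k" is prod(1 - 1/n) = k/N = 0.001,
# so essentially every realization keeps producing ones forever.
```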

We can compute the probability explicitly (this is not necessary, nor even usual, to prove or disprove convergence; note that the computation assumes the $U_n$ are independent). Let $A_k$ be the event $\{x[k]=1 \text{ and } x[n]=0 \text{ for all } n>k\}$, i.e., the last one occurs at index $k$. By telescoping the product we get that each probability (for fixed $k$) is zero:

$P(A_k)= \frac{1}{k} \prod_{n=k+1}^{\infty} \left(1 - \frac{1}{n} \right)= \frac{1}{k} \frac{k}{k+1}\frac{k+1}{k+2} \cdots = 0$
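A quick numerical check of this telescoping (partial products up to a finite $N$; `k = 5` is an arbitrary illustrative choice): the partial product equals $\frac{1}{k}\cdot\frac{k}{N}=\frac{1}{N}$, which tends to $0$ as $N$ grows.

```python
from math import prod

k = 5
for N in (10, 100, 1_000, 10_000):
    # (1/k) * prod_{n=k+1}^{N} (1 - 1/n) telescopes to (1/k) * (k/N) = 1/N
    p = (1 / k) * prod(1 - 1 / n for n in range(k + 1, N + 1))
    print(N, p, 1 / N)   # the last two columns agree and tend to 0
```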

The probability of convergence is $P\left(\bigcup_k A_k\right)=\sum_k P(A_k)$, since the events $A_k$ are disjoint; and a countable union of events of probability zero has probability zero. Hence the probability of convergence is not one (as it would be if convergence were almost sure) but, on the contrary, zero.