4
$\begingroup$

I'm trying to show that given $(X_i)$ i.i.d., $E[X_i^2] < \infty$, $\mu := E[X_i]$ then $P\Big [ \lim_{n \rightarrow \infty} \frac{S_n}{n} = \mu \Big ] = 1$ where $ S_n := \sum_{k=1}^n X_k$.

So far, I have rewritten $P\Big [ \lim_{n \rightarrow \infty} \frac{S_n}{n} = \mu \Big ]$ as

$\lim_{k \to \infty} P\Big[\bigcap_{n \geq k} \big\{\omega \,\big|\, \big|\tfrac{S_n}{n} - \mu\big| < \varepsilon\big\}\Big]$

But I'm not sure how to proceed from here. I have $\sum_{n} P\big[|X_n - X| > \varepsilon\big] < \infty \Rightarrow P\big[\lim_n X_n = X\big] = 1$, which I think I should apply, but I don't see how. Can anyone help me with this? Also, I don't see where $E[X_i^2] < \infty$ comes in. Many thanks for your help!

  • 0
    Hint: Note that $P[Y^2 > 1] \leq E[Y^2]$ for any $Y$, by Markov's inequality. So $P[(X - \mu)^2 > C] = P[(X - \mu)^2/C > 1] \leq E[(X-\mu)^2/C] = \text{Var}[X]/C$, where $\mu=E[X]$ and $\text{Var}[X]=E[X^2]-\mu^2$, for any $C>0$. What can you prove about $\text{Var}[S_n]$? – 2011-02-10
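    As a quick numerical sanity check of the hint (not a proof), one can estimate both sides of $P[(X-\mu)^2 > C] \leq \text{Var}[X]/C$ by Monte Carlo. The choice $X \sim \text{Uniform}(0,1)$ and $C = 0.1$ below is just an illustrative assumption:

    ```python
    import random

    # Monte Carlo check of P[(X - mu)^2 > C] <= Var[X]/C
    # for X ~ Uniform(0, 1): mu = 1/2, Var[X] = 1/12.
    random.seed(0)
    C = 0.1
    trials = 100_000
    mu = 0.5
    xs = [random.random() for _ in range(trials)]

    # Empirical left-hand side: fraction of samples with (X - mu)^2 > C.
    lhs = sum((x - mu) ** 2 > C for x in xs) / trials
    # Right-hand side of the inequality: Var[X]/C = (1/12)/0.1.
    rhs = (1 / 12) / C

    print(lhs <= rhs)  # the bound holds (here it is far from tight)
    ```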

2 Answers 2

6

A proof of the strong law of large numbers can be more or less complicated depending on your hypotheses. In your case, since you assume that $E[X_i^2]<\infty$ there is a straightforward proof. I am taking this from section 7.4 of the third edition of Probability and Stochastic Processes by Grimmett and Stirzaker.

First, by splitting into positive and negative parts we can assume (without loss of generality) that $X_i\geq 0$.

Second, using the positivity, it suffices to prove that $S_{n^2}/n^2\to\mu$ almost surely; that is, we only need convergence along that subsequence.

Next, since independence gives $\mathrm{Var}(S_{n^2}) = n^2\,\mathrm{Var}(X_i) \leq n^2\,E[X_i^2]$, Chebyshev's inequality yields
$P(|S_{n^2}/n^2-\mu|>\varepsilon_n)\leq{\mathrm{Var}(S_{n^2})\over n^4\varepsilon_n^2}\leq{E[X_i^2]\over n^2\varepsilon_n^2}.$

Choosing $\varepsilon_n\downarrow 0$ so slowly that the right-hand side above is summable (for instance $\varepsilon_n = n^{-1/4}$, which makes the bound $E[X_i^2]/n^{3/2}$), the Borel-Cantelli lemma finishes the job, since then $P(|S_{n^2}/n^2-\mu| \leq \varepsilon_n \mbox{ for all but finitely many }n) = 1.$
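To see the convergence along the subsequence $n^2$ numerically, here is a small simulation sketch. The distribution (Exponential with mean $\mu = 1$, which is nonnegative and has a finite second moment) and the cutoff $N$ are illustrative assumptions, not part of the proof:

```python
import random

# Simulate i.i.d. Exponential(1) variables (mu = 1, E[X^2] = 2 < infinity)
# and track the subsequence averages S_{n^2}/n^2 for n = 1, ..., N.
random.seed(0)
mu = 1.0
N = 100  # uses up to N^2 = 10_000 samples
samples = [random.expovariate(1.0) for _ in range(N * N)]

subseq_avgs = []
for n in range(1, N + 1):
    m = n * n
    subseq_avgs.append(sum(samples[:m]) / m)

# The late terms of the subsequence should be close to mu.
print(abs(subseq_avgs[-1] - mu))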

In fact, the strong law of large numbers holds under the weaker hypothesis that $E[|X_i|]<\infty$. There are various proofs in the literature, but every student of probability ought to be familiar with Etemadi's tour de force elementary proof. Etemadi uses a clever truncation argument and similar tools to those above, and only needs pairwise independence of the $X_i$'s, not full independence. Some good textbooks like Grimmett and Stirzaker (section 7.5), Billingsley's Probability and Measure (2nd edition), or Durrett's Probability: Theory and Examples (2nd edition) include Etemadi's treatment.

N. Etemadi, An elementary proof of the strong law of large numbers, Z. Wahrscheinlichkeitstheorie verw. Gebiete 55, 119-122 (1981)

  • 1
    I think that Billingsley's book is the most encyclopedic and would be quite useful in the personal library of any analyst or mathematician. Karl Stromberg's "Probability for analysts" is also worth considering. He explains some of the core theorems of probability from a Fourier analysis point of view. – 2011-02-11
3

This is the strong law of large numbers.

If you want a measure theoretic proof, check out these lecture notes: http://staff.science.uva.nl/~spreij/onderwijs/master/mtp.pdf

It is in Section 10.4.