2

We have a sequence of independent random variables $x_1, x_2, x_3, \ldots$, each distributed $N(0, 1/n)$. We want to show that $x_1^2 + x_2^2 + \cdots + x_n^2$ converges in probability to $1$ as $n \to \infty$.

I have tried using the Borel-Cantelli Lemma, but I am unsuccessful. Then a lightbulb clicked, and I thought maybe the $x_n$ are converging to a delta function, so this may be the reason why. However, I cannot prove this rigorously either. This is mildly frustrating; how does one approach these problems, and how do you show this?

  • 0
Sasha's proof seems to assume on the one hand that the $X_i$ are fixed random variables, but on the other hand, by taking their variance to be $1/n$, they change with increasing $n$. This is an inconsistency. (2012-05-18)

3 Answers

4

Let $X_k$ be i.i.d. normal random variables with zero mean and variance $\sigma^2 = \frac{1}{n}$. Define $Y_n = \sum_{k=1}^n X_k^2 = \frac{1}{n} \sum_{k=1}^n Z_k^2$, where $Z_k = \sqrt{n}\,X_k$ are i.i.d. standard normal variables. (Already from this form, $Y_n \to \mathbb{E}(Y_n) = 1$ almost surely by the strong law of large numbers, which in particular gives convergence in probability.)

Clearly
$$ \mathbb{E}(Y_n) = \frac{1}{n} \sum_{k=1}^n \mathbb{E}(Z_k^2) = \frac{1}{n} \sum_{k=1}^n 1 = 1. $$
Since the $Z_k^2$ are independent, their variances add:
$$ \operatorname{Var}(Y_n) = \frac{1}{n^2} \sum_{k=1}^n \operatorname{Var}(Z_k^2) = \frac{1}{n^2} \sum_{k=1}^n \mathbb{E}\big((Z_k^2-1)^2\big) = \frac{1}{n^2} \sum_{k=1}^n \left(\underbrace{\mathbb{E}(Z_k^4)}_{=3} - 2 \underbrace{\mathbb{E}(Z_k^2)}_{=1} + 1\right) = \frac{2}{n}. $$
Now, by Chebyshev's inequality,
$$ \mathbb{P}\left( |Y_n - 1| > \epsilon \right) \le \frac{\operatorname{Var}(Y_n)}{\epsilon^2} = \frac{2}{n \epsilon^2}. $$

Hence for arbitrary $\epsilon > 0$ and $\delta > 0$ there exists $m \in \mathbb{N}$ such that for all $n > m$, $\mathbb{P}( | Y_n - 1| > \epsilon) < \delta$; i.e., $Y_n$ converges in probability to $1$.
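
As an empirical sanity check (a minimal sketch of my own, not part of the proof; the use of `numpy`, the seed, and the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# For each n, draw many independent copies of (X_1, ..., X_n) with
# X_k ~ N(0, 1/n), form Y_n = sum of X_k^2, and check concentration at 1.
for n in [10, 100, 1000, 10000]:
    trials = 2000
    x = rng.normal(loc=0.0, scale=np.sqrt(1.0 / n), size=(trials, n))
    y = (x ** 2).sum(axis=1)             # one realization of Y_n per trial
    frac = np.mean(np.abs(y - 1) > 0.1)  # empirical P(|Y_n - 1| > 0.1)
    print(n, round(y.mean(), 4), frac)   # Chebyshev bound predicts <= 200/n
```

The empirical exceedance fraction should decay with $n$, consistent with the $2/(n\epsilon^2)$ bound above.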

  • 0
@abmlf If you know the meaning of the words in your question, you know the answer. (2012-05-19)
0

The sum of $n$ independent normals with means $m_i$ and variances $v_i$ is normal with mean $\sum m_i$ and variance $\sum v_i$. So in your case let $S_n = x_1 + x_2 + \cdots + x_n$. Then $S_n$ is normal with mean $0$ and variance $1$ for each $n$. $S_n$ does not converge to $1$: it is a standard normal random variable for every $n$; it is its variance that equals $1$. On the other hand, if you look at $W_n = S_n/n$, then $W_n$ is $N(0, 1/n^2)$ and so converges in distribution and in probability to $0$.
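
A quick simulation sketch of this point (my own illustration, taking each $x_i$ to have variance $1/n$; seed and trial counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 5000

for n in [10, 100, 1000]:
    # n i.i.d. N(0, 1/n) summands per trial
    x = rng.normal(0.0, np.sqrt(1.0 / n), size=(trials, n))
    s = x.sum(axis=1)   # S_n: distributed N(0, 1) for every n
    w = s / n           # W_n = S_n / n: distributed N(0, 1/n^2)
    print(n, round(s.var(), 3), round(w.var(), 6))
```

The sample variance of $S_n$ stays near $1$ while that of $W_n$ shrinks like $1/n^2$.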

  • 1
I don't understand why my answers are downvoted, especially the first. I was pointing to a flaw in the set-up of the problem that later got corrected. Is it because the voter thought these should be comments rather than answers? In both cases I gave solutions to the problem that was posed at the time. (2012-05-18)
-1

To the revised question: if $X$ is $N(0,1)$ then $X^2$ is chi-square with $1$ degree of freedom. Suppose first that $N(0,1/i)$ means the standard deviation is $1/i$, not the variance. Then $iX_i$ is $N(0,1)$, so $(i X_i)^2 = i^2 X_i^2$ is chi-square with $1$ df; it has mean $1$ and variance $2$. So $X_i^2$ has mean $1/i^2$ and variance $2/i^4$, and the mean of the sum of the $X_i^2$ is the partial sum of a convergent series: it converges to $\pi^2/6$, not to $1$. In this case the $n$th term of the sum approaches $0$, but the limit of the sum is a non-degenerate random variable. If instead the variance is $1/i$, then $i X_i^2$ is chi-square with $1$ df, so $X_i^2$ has mean $1/i$ and variance $2/i^2$; the mean of the sum of the $X_i^2$ is then a partial sum of the harmonic series, which diverges, and hence the sum cannot converge. Note that the assumption here is that $X_i$ is distributed $N(0,1/i)$, as the question was originally posed.
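
A short sketch contrasting the two readings (my own illustration; the seed and truncation point are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
i = np.arange(1, n + 1)
z = rng.standard_normal(n)           # Z_i ~ N(0, 1)

# Reading 1: standard deviation 1/i, so X_i = Z_i / i.
partial_sd = np.cumsum((z / i) ** 2)   # converges; the limit has mean pi^2/6
# Reading 2: variance 1/i, so X_i = Z_i / sqrt(i).
partial_var = np.cumsum(z ** 2 / i)    # mean grows like log n: diverges

print(partial_sd[-1], np.pi ** 2 / 6)
print(partial_var[-1], np.log(n))
```

Under the first reading the partial sums settle down to a (random) finite limit; under the second they keep growing roughly like $\log n$.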

  • 0
I edited so that all of the mathematical terms are typeset using LaTeX - if you like, you can check out the edit to see how the syntax works for your future answers. (2012-05-18)