
Let $H_i=\sum_{j=1}^i\frac{1}{j}$. Suppose $X_i$ is a sequence of i.i.d. exponential random variables with $P(X_i>x)=e^{-H_ix}$. I want to determine whether or not $X_i$ converges almost surely to some limiting random variable.

Suppose now that each $X_i$ is a random variable from $\Omega=[0,1]$ to the non-negative reals, where the probability measure on $[0,1]$ is Lebesgue measure. Since $P(X_i\le x)=1-e^{-H_ix}$, we may realize $X_i$ as the quantile function of this distribution, i.e. take $X_i$ so that $X_i^{-1}([0,x])=[0,\,1-e^{-H_ix}]$; solving gives $X_i(y)=-\frac{1}{H_i}\ln(1-y)$. Then for all $0\le y<1$, $\lim_{i\to\infty}X_i(y)=0$, since $H_i\to\infty$. Therefore it seems $X_i$ converges to $0$ almost surely.
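The pointwise computation above is easy to check numerically. Here is a minimal sketch of the quantile construction (the helper names `H` and `X` are mine, not from the question):

```python
import math

def H(i):
    """Partial harmonic sum H_i = 1 + 1/2 + ... + 1/i."""
    return sum(1.0 / j for j in range(1, i + 1))

def X(i, y):
    """Quantile coupling from the question: X_i(y) = -ln(1-y)/H_i, y in [0, 1)."""
    return -math.log(1.0 - y) / H(i)

# For any fixed y < 1, X_i(y) decreases to 0 as i grows, because H_i -> infinity.
y = 0.9
values = [X(i, y) for i in (1, 10, 100, 1000)]
print(values)  # strictly decreasing toward 0
```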

But on the other hand, for $0<\epsilon\le 1$, $P(X_i>\epsilon)=e^{-H_i\epsilon}$. Since $H_i\le 1+\ln i$, we have $\sum_{i=1}^\infty P(X_i>\epsilon)=\sum_i e^{-H_i\epsilon}\ge \sum_i e^{-\epsilon(1+\ln i)}=e^{-\epsilon}\sum_i \frac{1}{i^\epsilon}=\infty$. Then by the second Borel–Cantelli lemma (which uses the independence of the $X_i$), $P(X_i>\epsilon\text{ i.o.})=1$, which shows $X_i$ does not converge to $0$ almost surely.
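The divergence step can be sanity-checked numerically. Below is a small sketch that verifies the standard bound $H_i \le 1 + \ln i$ (which gives $e^{-H_i\epsilon} \ge e^{-\epsilon} i^{-\epsilon}$, a divergent $p$-series for $\epsilon \le 1$) and watches the partial sums keep growing:

```python
import math

eps = 1.0
Hi = 0.0
partial = 0.0
partial_1e4 = 0.0
for i in range(1, 100_001):
    Hi += 1.0 / i
    # The bound driving the comparison test: H_i <= 1 + ln i.
    assert Hi <= 1.0 + math.log(i)
    partial += math.exp(-Hi * eps)
    if i == 10_000:
        partial_1e4 = partial
partial_1e5 = partial
# Since H_i = ln i + gamma + o(1), the terms behave like e^{-gamma}/i for
# eps = 1, so the partial sums grow roughly like e^{-gamma} * ln N.
print(partial_1e4, partial_1e5)
```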

So where is the mistake, and how could we find a limiting r.v. if we don't know it in advance?

  • 0
    Does this version of Borel–Cantelli require independence? 2017-01-27
  • 0
    Yes. And at the beginning we assume the $X_i$'s are i.i.d. So I have a feeling that there is something wrong in my first method. 2017-01-27

2 Answers

2

So each $X_i$ is an independent exponential with mean $1/H_i.$ So for $\epsilon>0,$ the events $\{X_i>\epsilon\}$ are independent and $$\sum_i P(X_i>\epsilon) = \infty$$ for $\epsilon \le 1.$ By the second Borel–Cantelli lemma (which applies because the events are independent), $X_i>\epsilon$ infinitely often a.s., and thus $X_i$ almost surely does not converge to zero. That sounds like good reasoning to me.

The problem with your first line of reasoning is that you fix $y$ for all $i$ and effectively draw only one uniform RV. Then the variables $X_i$ are not independent, as you're supposed to be assuming. What you've done is taken a single uniform and transformed it into an exponential many times, each time with a smaller mean for the exponential. It's no surprise that this decreases toward zero.
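That dependence is easy to see in a small simulation. In the coupled model below, a single uniform drives every $X_i$, so $X_i$ is decreasing in $i$ and the exceedance set $\{i : X_i > \epsilon\}$ is always an initial block (hence finite); with fresh uniforms, exceedances keep occurring at large $i$. (All names and the seed are mine; this is an illustration, not a proof.)

```python
import math
import random

rng = random.Random(42)
N, eps = 5000, 0.5

# Harmonic numbers H_1 .. H_N, with H[0] unused.
H = [0.0]
for i in range(1, N + 1):
    H.append(H[-1] + 1.0 / i)

# Coupled model (the question's construction): one uniform U drives all X_i.
U = rng.random()
coupled = [-math.log(1.0 - U) / H[i] > eps for i in range(1, N + 1)]

# Independent model: a fresh uniform for each i.
indep = [-math.log(1.0 - rng.random()) / H[i] > eps for i in range(1, N + 1)]

# In the coupled model X_i(U) decreases in i, so once it drops below eps
# it stays below: the True entries form an initial block.
assert coupled == sorted(coupled, reverse=True)
print(sum(coupled), sum(indep))
```

With a divergent $\sum_i P(X_i>\epsilon)$, the independent model produces far more exceedances, scattered out to arbitrarily large indices, exactly as the second Borel–Cantelli lemma predicts.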

Note, it's possible to model a sequence of independent uniforms with Lebesgue measure, but it's more complicated than what you've done. If you draw a uniform random variable as modeled by Leb$[0,1]$ and take its decimal expansion $0.d_1d_2d_3d_4\ldots$ then you can generate the sequence of numbers in $[0,1]$ $$ \begin{align}0&.d_1d_2d_4d_7\ldots\\0&.d_3d_5d_8\ldots\\0&.d_6d_9\ldots\\0&.d_{10}\ldots\\\vdots\end{align}$$ where the digits are wrapped along diagonals. This will be a sequence of independent uniform random variables. You could then transform each of these uniforms into an exponential. This is theoretical, not practical, though, and just a side note... I wouldn't expect you to be able to fix your first method using this transformation.
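The diagonal bookkeeping in that digit-splitting construction can be sketched as follows (the helper `interleave_rows` is my naming; integer positions stand in for the digits):

```python
def interleave_rows(digits, n_rows):
    """Distribute d_1, d_2, d_3, ... along diagonals: the k-th diagonal
    hands its j-th digit to row j (for j = 1..k). Digits destined for
    rows beyond n_rows are simply discarded here."""
    rows = [[] for _ in range(n_rows)]
    idx, k = 0, 1
    while idx < len(digits):
        for j in range(k):
            if idx == len(digits):
                break
            if j < n_rows:
                rows[j].append(digits[idx])
            idx += 1
        k += 1
    return rows

# Using positions 1..10 as stand-ins for the digits d_1..d_10:
print(interleave_rows(list(range(1, 11)), 4))
# → [[1, 2, 4, 7], [3, 5, 8], [6, 9], [10]]
```

Row 1 receives $d_1, d_2, d_4, d_7, \ldots$, row 2 receives $d_3, d_5, d_8, \ldots$, and so on, matching the display above; each row's digit sequence is then read back as a number in $[0,1]$.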

  • 0
    In my first method, I tried to use the definition of almost sure convergence, that is $P(\lim_{n\to\infty}X_n(\omega)=X_\infty(\omega))=1$, so it seems we need to check the limit of $X_n(\omega)$ "pointwise". I wonder if I misunderstand it or misuse it somewhere. 2017-01-27
  • 0
    @Connor Your definition of convergence a.s. is not the problem. It's that you are not modeling a sequence of independent variables. You are modeling a single uniform and then taking a sequence of functions of it. Thus you have a sequence of very dependent exponentials. 2017-01-27
1

Firstly, according to your assumptions, the sequence $\{X_i,i\ge1\}$ is not identically distributed. For an independent sequence, using the Borel–Cantelli lemma to conclude non-convergence is correct. But in the first method, which puts all the RVs on $\Omega=[0,1]$, you did not prove the independence of the RVs $X_i$.