
My friend and I have been stumped on this problem for a while, and I thought asking for tips couldn't hurt (we did ask the teacher, but we were given other problems afterwards).

Here is the question:

Let $\{X_n\}_{n \geq 1}$ be a sequence of random variables defined on the same probability space $(\Omega, \mathcal{F}, \mathbb{P})$, all with the same law and with finite expected value ($E(|X_1|) < \infty$). Let

$Y_n = n^{-1} \max_{1 \leq i \leq n} |X_i|$.

Show that

$\lim_{n\rightarrow \infty} E(Y_n) = 0$

and

$Y_n \rightarrow 0$ almost surely.
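To get a feel for the claim, here is a quick Monte Carlo sanity check, assuming (purely for illustration) that the $X_i$ are i.i.d. Exp(1):

```python
import random

# Quick Monte Carlo sanity check, assuming (for illustration only) that the
# X_i are i.i.d. Exp(1): estimate E(Y_n) = E(max_{1<=i<=n} |X_i|) / n.
random.seed(0)

def estimate_EYn(n, trials=2000):
    total = 0.0
    for _ in range(trials):
        total += max(random.expovariate(1.0) for _ in range(n)) / n
    return total / trials

estimates = [estimate_EYn(n) for n in (10, 100, 1000)]
# For Exp(1) one can show E(Y_n) = H_n / n (n-th harmonic number over n),
# which tends to 0 as n grows.
```

The estimates shrink steadily as $n$ grows, consistent with $E(Y_n)\to0$.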

We have ideas for many parts of the proof. For example, for the first part it would suffice to show that the expected value of the max of all the $|X_i|$ is finite; since the max is one of the $|X_i|$ for each $\omega \in \Omega$, this seems reasonable, but we're not sure how to show it.

We also tried splitting the integral for the expected value into a partition of $\Omega$ considering the sets on which $X_i$ is the max, but didn't get too far with that.

For the second part, I think we could show it if we knew that $X_i(\omega)$ diverges only on a set of measure $0$, but that's not so obvious (I think).

Any pointers to the right direction appreciated!

  • It's been a *long* time since I had anything to do with random variables and the like, but I can't see how this isn't just plain wrong? E.g. take $\Omega = \{\omega\}$ (IOW a singleton) and $X_n(\omega) = n^2$. (2011-03-17)
  • @kahen The random variables $X_n$ are supposed to be identically distributed. (2011-03-17)

2 Answers


Assume without loss of generality that $X_1$ is almost surely nonnegative.

Almost sure convergence

This is a consequence of the first Borel-Cantelli lemma. To see this, fix any positive $x$. Then $P(X_n\ge nx)=P(X_1\ge nx)$ for every $n$ and $$\sum_{n\ge1}P(X_1\ge nx)\le x^{-1}E(X_1), $$ hence the series of general term $P(X_n\ge nx)$ converges. By the first Borel-Cantelli lemma, the limsup of the events $[X_n\ge nx]$ has probability $0$. This means that, almost surely, $X_n\le nx$ for every $n$ large enough, which can be translated as $X_n\le Z+nx$ for every $n$, for some almost surely finite $Z$. Hence $nY_n\le Z+nx$ for every $n$, and $\limsup Y_n\le x$ almost surely. This holds for every positive $x$, hence $Y_n\to0$ almost surely.
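The key series bound can be checked numerically in a concrete instance; assuming (for illustration only) $X_1\sim\mathrm{Exp}(1)$, one has $P(X_1\ge t)=e^{-t}$ and $E(X_1)=1$, so the claim is $\sum_{n\ge1}e^{-nx}\le 1/x$:

```python
import math

# Numerical check of the series bound, assuming (for illustration) X_1 ~ Exp(1),
# so P(X_1 >= t) = exp(-t) and E(X_1) = 1: sum_{n>=1} P(X_1 >= n x) <= 1/x.
def tail_sum(x, terms=10_000):
    return sum(math.exp(-n * x) for n in range(1, terms + 1))

checks = [(x, tail_sum(x), 1.0 / x) for x in (0.5, 1.0, 2.0)]
# Closed form here: the sum is 1/(e^x - 1), which is indeed <= 1/x for x > 0.
```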

Convergence of the expectations

Two ingredients are useful here: the fact that, for every nonnegative $Z$, $E(Z)$ is the integral of the function $x\mapsto P(Z\ge x)$ over $x\ge0$, and the fact that if $nY_n\ge x$, then $X_k\ge x$ for at least one integer $k$ between $1$ and $n$, hence $P(nY_n\ge x)\le nP(X_1\ge x)$.

Thanks to the first ingredient, $E(Y_n)$ is the integral of $g_n$ with $g_n(x)=P(nY_n\ge x)/n$. Thanks to the second ingredient, $g_n(x)\le P(X_1\ge x)=g_1(x)$. Now, $g_n(x)\le1/n$ hence $g_n(x)\to0$, and since $E(X_1)$ is finite, $g_1$ is integrable. By dominated convergence, the integral of $g_n$ converges to $0$, that is, $E(Y_n)\to0$.
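Both ingredients are easy to verify numerically in a concrete case. Assuming (for illustration) $X_1\sim\mathrm{Exp}(1)$, one gets $g_1(x)=e^{-x}$ and $g_n(x)=\bigl(1-(1-e^{-x})^n\bigr)/n$, whose integral is $H_n/n$ (the $n$-th harmonic number over $n$), which indeed tends to $0$:

```python
import math

# Assuming (for illustration) X_1 ~ Exp(1): g_1(x) = exp(-x) and
# g_n(x) = P(n Y_n >= x) / n = (1 - (1 - exp(-x))^n) / n.
def g(n, x):
    return (1.0 - (1.0 - math.exp(-x)) ** n) / n

def integral_g(n, upper=50.0, steps=200_000):
    # midpoint rule; the tail beyond `upper` is negligible since g_n <= g_1
    h = upper / steps
    return h * sum(g(n, (k + 0.5) * h) for k in range(steps))

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))
# Here E(Y_n) = integral of g_n = H_n / n, which tends to 0.
```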

Remark You do not say where you found the exercise but your source is to be complimented because many people add the hypothesis that the sequence $(X_n)$ is independent although it is not necessary.

Added later on The upper bound of a series by $x^{-1}E(X_1)$ used above can be proved as follows. First assume that $x=1$ and note that for any nonnegative $Z$ (random or deterministic), $$ \sum_{n\ge1}\mathbf{1}_{Z\ge n}=\lfloor Z\rfloor\le Z, $$ where $\lfloor \ \rfloor$ denotes the integer part. Integrating both sides of the inequality with respect to $P$ yields, for any nonnegative random variable $Z$, $$ \sum_{n\ge1}P(Z\ge n)=E(\lfloor Z\rfloor)\le E(Z). $$ For the case at hand, apply this inequality to the random variable $Z=x^{-1}X_1$, using $$ \sum_{n\ge1}\mathbf{1}_{X_1\ge nx}=\lfloor x^{-1}X_1\rfloor\le x^{-1}X_1. $$
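The deterministic identity $\sum_{n\ge1}\mathbf{1}_{Z\ge n}=\lfloor Z\rfloor\le Z$ at the heart of this bound is easy to test on a few values:

```python
import math

# The deterministic identity behind the bound: for z >= 0,
# sum_{n>=1} 1_{z >= n} = floor(z) <= z.
def indicator_sum(z, terms=1000):
    return sum(1 for n in range(1, terms + 1) if z >= n)

results = [(z, indicator_sum(z), math.floor(z)) for z in (0.0, 0.9, 1.0, 3.7, 10.2)]
```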

  • I am not sure about the first inequality... could there by any chance be a confusion in the multiple "$n$"s present in the solution? By the way, thank you very much! (2011-03-18)
  • @Vhailor, for a *nonnegative* random variable $X$ with $G(x) = \mathbb{P}(X > x)$ it is a fact that $\mathbb{E}(X) = \int_0^\infty G(x) \,\mathrm{d}x$. Now $s_m = \sum_{n=1}^m P(X_n > n x) \leq \int_0^m G(ux) \,\mathrm{d}u = x^{-1} \int_0^{m x} G(v) \,\mathrm{d}v$, where the inequality follows since $G$ is nonincreasing. So $s_m \leq x^{-1} \mathbb{E} X$ for all $m$. Hence $\limsup_m s_m < \infty$, which by Borel-Cantelli lets one conclude that $\mathbb{P}(X_n > n x \; \mathrm{i.o.}) = 0$. Does that help? (2011-03-18)
  • @Vhailor No confusion. I added a proof of the inequality at the end of my post. @cardinal Thanks. (2011-03-18)
  • I am a little confused by your construction of $Z$. Would you explain more? Can it be constructed explicitly? (2017-02-05)
  • @Syl.Qiu This is a deterministic result: assume some real valued sequence $(x_n)$ is such that $x_n \le nx$, for some positive $x$, for every $n > N$; then there exists some finite $z$ such that $x_n \le z + nx$ for every $n$. Any idea how to show this? (2017-02-05)
  • @Did Ah yes, I can take the maximum of $(x_1,\dots,x_N)$, and since $EX_1<+\infty$, $Z$ defined as the pointwise maximum will be a.s. finite. (2017-02-05)
  • @Syl.Qiu Finite expectation is not needed for almost sure finiteness but yes, this is roughly the idea. (2017-02-05)
  • @Did Thanks! But may I ask why it is not needed? I used the a.s. finiteness of $X_1,\dots,X_N$ to get the a.s. finiteness of $Z$. (2017-02-05)
  • @Syl.Qiu Yes, you used the almost sure finiteness, not the (strictly stronger) hypothesis of integrability. (2017-02-05)

Note first that $E \max_{i \ge 1} |X_i|$ (one should really write $\sup$ instead of $\max$) need not be finite. Indeed, if the $X_i$ are, say, i.i.d. normal, then one can show $\sup_{i \ge 1} |X_i| = +\infty$ almost surely.
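A small simulation illustrates this (hypothetical setup, standard normals): along a single sample path, the running maximum of $|X_i|$ keeps drifting upward, roughly like $\sqrt{2\ln n}$.

```python
import random

# Hypothetical setup: i.i.d. standard normals. Along a single sample path,
# the running maximum of |X_i| keeps growing (roughly like sqrt(2 ln n)),
# consistent with sup_i |X_i| = +infinity almost surely.
random.seed(2)

def max_along_path(checkpoints):
    m, out, i = 0.0, [], 0
    for n in range(1, checkpoints[-1] + 1):
        m = max(m, abs(random.gauss(0.0, 1.0)))
        if n == checkpoints[i]:
            out.append(m)
            i += 1
    return out

maxima = max_along_path([100, 10_000, 1_000_000])
```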

To get $L^1$ convergence, the trick is to split into the events where the $X_i$ are small and where they are large. When they are small they do not contribute much to $Y_n$, and they can be large only with small probability. So fix $M$ and let $U_i = |X_i| 1_{\{|X_i| \le M\}}$, $V_i = |X_i| 1_{\{|X_i| > M\}}$. Then $Y_n \le \frac{1}{n} (\max_{i \le n} U_i + \max_{i \le n} V_i)$. The first term is bounded by $M$, and the second by $V_1 + \dots + V_n$. Taking expectations, $E Y_n \le \frac{M}{n} + E V_1$, so $\limsup_{n \to \infty} E Y_n \le E V_1$. By choosing $M$ large enough, $E V_1$ can be made as small as desired (think dominated convergence).
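A sketch of this truncation bound in action, again assuming (for illustration only) $X_i\sim\mathrm{Exp}(1)$, for which $E V_1 = \int_M^\infty x e^{-x}\,dx = (M+1)e^{-M}$:

```python
import math
import random

# Monte Carlo illustration of the truncation bound E(Y_n) <= M/n + E(V_1),
# assuming (for illustration) X_i ~ Exp(1), so E(V_1) = (M + 1) * exp(-M).
random.seed(1)

def mc_EYn(n, trials=2000):
    return sum(max(random.expovariate(1.0) for _ in range(n)) / n
               for _ in range(trials)) / trials

n, M = 200, 5.0
lhs = mc_EYn(n)                          # estimated E(Y_n)
rhs = M / n + (M + 1.0) * math.exp(-M)   # truncation upper bound
```

Taking $M$ larger shrinks the $(M+1)e^{-M}$ term, and then $n$ large kills the $M/n$ term, matching the $\limsup$ argument above.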

For almost sure convergence, the argument that went here previously was wrong.