
I am working with the following definitions.


If $\{a_n\}$ is a sequence of random variables and $g(n)$ a real-valued function of the positive integer argument $n$, then the notation $a_n = o_p(g(n))$ means that

$\operatorname{plim}_{n \rightarrow \infty} \left(\frac{a_n}{g(n)}\right)=0.$

Similarly, the notation $a_n = \mathcal{O}_p(g(n))$ means that, for all $\epsilon>0$, there is a constant $K$ and a positive integer $N$ such that

$\Pr\left(\left|\frac{a_n}{g(n)}\right|>K\right) < \epsilon \ \ \ $ for all $n>N$.
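
As a quick illustration of the two notations (my own example, not part of the quoted definitions), take the deterministic sequence $a_n = c + n^{-1}$ for a fixed constant $c$ and $g(n)=1$:

$$\frac{a_n}{1} = c + n^{-1} \;\Longrightarrow\; a_n = \mathcal{O}_p(1) \ \text{(take } K=|c|+1,\ N=1\text{)}, \qquad \frac{a_n - c}{1} = n^{-1} \to 0 \;\Longrightarrow\; a_n - c = o_p(1).$$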


Now here is the problem that I don't quite understand:

I look at a sequence of random variables $\{x_n\}$ such that $x_n$ is distributed as $N(0,n^{-1})$. It is easy to see that $x_n$ has the c.d.f. $\Phi(n^{1/2}x)$, i.e. that $\Pr(x_n < x) = \Pr(n^{1/2}x_n < n^{1/2}x) = \Phi(n^{1/2}x)$, since $n^{1/2}x_n$ is $N(0,1)$.

But why, then, is $n^{1/2}x_n$ $\mathcal{O}_p(1)$?

My understanding of the given definition of $\mathcal{O}_p(\cdot)$ is that, with $g(n) = n^0 = 1$, I have to find a constant $K$ such that the probability of $|n^{1/2}x_n|$ being greater than $K$ is smaller than $\epsilon$.

Since $n^{1/2}x_n$ is $N(0,1)$-distributed, I really don't see why this is $\mathcal{O}_p(1)$. The probability of exceeding a large $K$ is small, and writing down a limiting distribution for $n^{1/2}x_n$ is also possible, but I don't see how that would help. Or is it enough to find any $K$ for which the probability is smaller than $\epsilon$, even if $K$ is very large?

Thanks, Tim

1 Answer


Let $X$ be any $N(0,1)$ random variable. You know that for each $\epsilon>0$ there is a $K_\epsilon$ such that $\operatorname{Pr}(|X|>K_\epsilon)<\epsilon$. Now each of your random variables $n^{1/2}x_n$ is $N(0,1)$, so for every $n$ you have

$$\operatorname{Pr}\left(\left|n^{1/2}x_n\right|>K_\epsilon\right)<\epsilon\;.\tag{1}$$

In other words, you can take $N=1$, and $(1)$ will be true for all $n>N$, because in fact it’s true for all $n$.
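
For concreteness, here is one explicit choice of $K_\epsilon$ (a small addition to the answer, using only the standard normal c.d.f. $\Phi$ and its inverse):

$$\operatorname{Pr}(|X|>K) = 2\bigl(1-\Phi(K)\bigr) < \epsilon \quad\Longleftrightarrow\quad K > \Phi^{-1}\!\left(1-\frac{\epsilon}{2}\right),$$

so any $K_\epsilon$ strictly larger than $\Phi^{-1}(1-\epsilon/2)$ works; for $\epsilon = 0.05$, for example, $K_\epsilon = 2$ already gives $\operatorname{Pr}(|X|>2) \approx 0.0455 < 0.05$.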

The point is that since your variables are identically distributed, a $K_\epsilon$ that works for one of them automatically works for all of them: it’s independent of $n$. It’s perfectly true that if $\epsilon=10^{-10^{10}}$, say, $K_\epsilon$ will be fairly large, but that’s perfectly acceptable: the definition merely requires that for each $\epsilon$, no matter how small, there be some $K_\epsilon$ that ‘works’, even if it’s huge.
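
If a numerical check helps, here is a minimal simulation sketch (my own, not part of the original answer; it assumes only NumPy, and the variable names are illustrative) that draws many copies of $x_n \sim N(0,n^{-1})$ for several $n$ and estimates $\operatorname{Pr}(|n^{1/2}x_n| > K_\epsilon)$ with one fixed $K_\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05
K_eps = 2.0  # Pr(|N(0,1)| > 2) is about 0.0455 < eps, so this single K works for eps = 0.05

for n in (1, 10, 100, 10_000):
    # Draw many copies of x_n ~ N(0, 1/n), i.e. standard deviation n^(-1/2)
    x_n = rng.normal(loc=0.0, scale=n ** -0.5, size=100_000)
    # Empirical probability that |n^(1/2) x_n| exceeds the same fixed K_eps
    p_exceed = np.mean(np.abs(np.sqrt(n) * x_n) > K_eps)
    print(f"n={n:6d}  P(|sqrt(n)*x_n| > {K_eps}) ~ {p_exceed:.4f}")
```

The estimated exceedance probability comes out around $0.045$ for every $n$, which is exactly the point above: because $n^{1/2}x_n$ has the same $N(0,1)$ distribution for all $n$, one $K_\epsilon$ works uniformly in $n$.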

  • ok thx very much - in my example the distribution is rather simple :-) (2012-09-30)