I am working with the following definitions:
If $\{a_n\}$ is a sequence of random variables and $g(n)$ is a real-valued function of the positive integer argument $n$, then the notation $a_n = \mathcal{o}_p(g(n))$ means that
$p\lim_{n \rightarrow \infty} \left(\frac{a_n}{g(n)}\right)=0$
Similarly, the notation $a_n = \mathcal{O}_p(g(n))$ means that, for all $\epsilon>0$, there are a constant $K$ and a positive integer $N$ such that
$Pr\left(\left|\frac{a_n}{g(n)}\right|>K\right) < \epsilon \ \ \ $ for all $n>N$.
Now to my problem, which I don't quite understand:
I look at a sequence of random variables $\{x_n\}$ such that $x_n$ is distributed as $N(0,n^{-1})$. It is easy to see that $x_n$ has the c.d.f. $\Phi(n^{\frac{1}{2}}x)$, i.e. that $Pr(x_n < x) = \Phi(n^{\frac{1}{2}}x)$.
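To convince myself of this c.d.f. claim I ran a quick simulation (my own script, not from any textbook; the sample size and evaluation point are arbitrary choices):

```python
# Sanity check: draw x_n ~ N(0, 1/n) and compare the empirical c.d.f.
# at a point x with Phi(sqrt(n) * x), the claimed c.d.f.
import math
import random

def phi(z):
    """Standard normal c.d.f. via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(0)
n, draws, x = 100, 200_000, 0.1
samples = [random.gauss(0.0, n ** -0.5) for _ in range(draws)]
empirical = sum(s < x for s in samples) / draws
# Both numbers should be close to Phi(sqrt(100) * 0.1) = Phi(1) ~ 0.8413
print(empirical, phi(math.sqrt(n) * x))
```

The empirical frequency agrees with $\Phi(n^{\frac{1}{2}}x)$ to within simulation noise, so the c.d.f. part is clear to me.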
But why is $n^{\frac{1}{2}}x_n$ then $\mathcal{O}_p(1)$?
My understanding of the given definition of $\mathcal{O}_p(\cdot)$ is that I have to find a constant $K$ such that, with $g(n) = n^0 = 1$, the probability of $|n^{\frac{1}{2}}x_n|$ being greater than $K$ is smaller than $\epsilon$.
Since $n^{\frac{1}{2}}x_n$ is $N(0,1)$ distributed, I really don't see why this is $\mathcal{O}_p(1)$. If $n$ gets large, the probability of exceeding a large $K$ is small, and writing down a limiting distribution for $n^{\frac{1}{2}}x_n$ is also possible, but I don't see how either of these helps. Or is it enough to find any $K$ at all for which the probability is smaller than $\epsilon$, even if that $K$ is very large?
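To make my reading of the definition concrete, here is a small sketch (my own, under the assumption that $K$ may be chosen depending on $\epsilon$): since $n^{\frac{1}{2}}x_n$ is exactly $N(0,1)$ for every $n$, the tail probability $Pr(|n^{\frac{1}{2}}x_n|>K) = 2(1-\Phi(K))$ does not depend on $n$ at all, so for a given $\epsilon$ it would suffice to pick any $K$ with $2(1-\Phi(K)) < \epsilon$.

```python
# For Z ~ N(0,1), Pr(|Z| > K) = 2 * (1 - Phi(K)) is the same for every n,
# so one fixed K per epsilon bounds the tail uniformly in n.
import math

def phi(z):
    """Standard normal c.d.f. via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tail(K):
    """Pr(|Z| > K) for Z ~ N(0,1)."""
    return 2.0 * (1.0 - phi(K))

eps = 0.01
K = 3.0
print(tail(K), tail(K) < eps)  # ~0.0027, True
```

So e.g. $K=3$ already works for $\epsilon = 0.01$, and for smaller $\epsilon$ one would just take a larger $K$. Is that the intended reading?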
Thanks Tim