My question is about the use of Taylor expansions when dealing with
asymptotics in local polynomial fitting.
The expressions that puzzle me are of the form:
$$
g(X_i) = \sum_{j=0}^{p} \frac{g^{(j)}(x_0)}{j!}(X_i-x_0)^j +
o_p((X_i-x_0)^p),
$$
where $x_0$ is fixed and $X_i$ is a random design point.
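For concreteness, with $p=1$ (local linear fitting) this reads
$$
g(X_i) = g(x_0) + g'(x_0)(X_i - x_0) + o_p(X_i - x_0).
$$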
I have two questions concerning this type of expression.
First, the $o_p$ symbol stands (as far as I understand) for convergence in probability to $0$, but there is no sequence here that could converge: there is only the single random variable $X_i$ ($i$ being a fixed index in a sample of size $n$, say).
So my first question is: how does one make sense of this?
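To be explicit, the definition of $o_p$ I am working with is the sequential one: for random variables $Z_n$ and positive constants $a_n$,
$$
Z_n = o_p(a_n) \quad \Longleftrightarrow \quad \frac{Z_n}{a_n} \xrightarrow{\;P\;} 0 \quad \text{as } n \to \infty,
$$
which presupposes a whole sequence indexed by $n$, whereas in the expansion above nothing is tending to anything.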
My second question concerns the order of the remainder term.
The expansion above matches the real-analysis intuition, where we have
$$
f(x) = \sum_{j=0}^{p} \frac{f^{(j)}(x_0)}{j!}(x-x_0)^j +
o(|x-x_0|^p),
$$
as $x \to x_0$.
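Here the meaning of the remainder is unambiguous:
$$
\frac{f(x) - \sum_{j=0}^{p} \frac{f^{(j)}(x_0)}{j!}(x-x_0)^j}{|x-x_0|^p} \longrightarrow 0 \quad \text{as } x \to x_0.
$$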
However, in e.g. Brockwell and Davis, Time Series: Theory and Methods (1987), Chapter 6 on asymptotic theory, where "Taylor expansion in probability" is treated (pp. 194-195), we have
$$
g(X_n) = \sum_{j=0}^{p-1} \frac{g^{(j)}(x_0)}{j!}(X_n-x_0)^j +
o_p(r_n^p),
$$
with $\{X_n\}$ a sequence of random variables such that $X_n = x_0 + O_p(r_n)$ and $0 < r_n \to 0$ as $n \to \infty$.
So my second question is: how does the remainder $o_p((X_i-x_0)^p)$ from the local polynomial setting, where the reference quantity is itself random and the index $i$ is fixed, relate to this formulation in terms of a deterministic rate $r_n$?
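To make the contrast explicit: the Brockwell-Davis remainder has the usual sequential meaning; writing $R_n$ for the difference between $g(X_n)$ and the Taylor polynomial,
$$
R_n = o_p(r_n^p) \quad \Longleftrightarrow \quad \frac{R_n}{r_n^{p}} \xrightarrow{\;P\;} 0 \quad \text{as } n \to \infty,
$$
which is well defined because $r_n$ is deterministic and $n$ genuinely tends to infinity. I do not see how to give $o_p((X_i-x_0)^p)$ an analogous meaning.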