I need to find an example of an Ito process $X=\{X_t:t\in[0,T]\}$ with non-zero Ito integral part and a sequence of Ito processes $\{X_n\}$ such that $X_n$ converges uniformly to $X$, as $n\to\infty$, and $[X_n]_T\to \frac{1}{2}[X]_T$, as $n\to\infty$. Here $[X]$ denotes the quadratic variation process of an Ito process $X$. I have no idea how to start on this one so I would like a hint on a direction.
Convergence of quadratic variation of Ito processes
1 Answer
Fix $a$ in $(0,1)$ and a standard Brownian motion $X$. For every subdivision $\sigma=(t_k)_{0\leqslant k\leqslant n}$ of $[0,T]$, deterministic or random, consider the diffusion $X^\sigma$ defined by $X^\sigma_0=0$ and $$ \mathrm dX^\sigma_t=a\,\mathrm dX_t+(1-a)b_\sigma(t)\,\mathrm dt, $$ with $$ b_\sigma(t)=\sum\limits_{k=1}^n\frac{X_{t_k}-X_{t_{k-1}}}{t_k-t_{k-1}}\cdot[t_{k-1}\leqslant t\lt t_k]. $$ In words, $X^\sigma_{t}=X_{t}$ for every $t$ in $\sigma$ and, on every interval $[t_{k-1},t_k]$ of $\sigma$, a proportion $1-a$ of the variation of $X$ is transformed into a drift which performs a linear interpolation from $X_{t_{k-1}}$ to $X_{t_k}$. In particular, for every $t$ in $[0,T]$, $$ [X^\sigma]_t=a^2[X]_t=a^2t, $$ so the choice $a=1/\sqrt2$ yields $[X^\sigma]_T=\frac12[X]_T$, as required.

To estimate the uniform norm of $X^\sigma-X$ on $[0,T]$, first note that it is the maximum of the uniform norms of $X^\sigma-X$ on the intervals of $\sigma$, and that the distributions of these depend only on the lengths of the intervals. The uniform norm on an interval of length $s$ is distributed like $(1-a)M_s$ with $$ M_s=\max\limits_{0\leqslant t\leqslant s}|X_t-(t/s)X_s|. $$ Fix a positive $x$. Then $\mathrm P(M_s\geqslant x)$ behaves roughly like $\mathrm e^{-x^2/(2s)}$ when $s\to0^+$.
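As a sanity check (not part of the argument), here is a minimal NumPy simulation of the construction; the seed and grid sizes are illustrative choices. It uses the fact that integrating $b_\sigma$ from $0$ produces the piecewise-linear interpolation $L$ of $X$ at the points of $\sigma$, so $X^\sigma=aX+(1-a)L$, and checks that the realized quadratic variation on a fine grid is close to $a^2T$.

```python
import numpy as np

rng = np.random.default_rng(0)

T, a = 1.0, 1.0 / np.sqrt(2.0)   # a^2 = 1/2, matching the target [X_n]_T -> (1/2)[X]_T
N = 200_000                      # fine grid used to sample the path and sum squared increments
n = 100                          # number of intervals of the subdivision sigma

t_fine = np.linspace(0.0, T, N + 1)
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / N), N))])

# Integrating b_sigma from 0 gives the piecewise-linear interpolation of X
# at the points of sigma, hence X^sigma = a*X + (1-a)*L.
t_coarse = np.linspace(0.0, T, n + 1)
L = np.interp(t_fine, t_coarse, X[:: N // n])
X_sigma = a * X + (1.0 - a) * L              # agrees with X at every point of sigma

qv = np.sum(np.diff(X_sigma) ** 2)           # realized quadratic variation on the fine grid
print(qv)                                    # close to a^2 * T = 0.5
```

The linear part contributes only $O(1/N)$ to the fine-grid quadratic variation, which is why the sum of squared increments lands near $a^2T$.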
Consider a subdivision $\sigma$ with $n$ points and with step $s$. By the union bound, the uniform norm of $X^\sigma-X$ on $[0,T]$ is at least $(1-a)x$ with probability at most $n\mathrm e^{-x^2/(2s)}$. For the regular subdivision $\sigma_n$ with $n$ intervals, $s=T/n$ hence the probability is at most $n\mathrm e^{-nx^2/(2T)}$.
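The tail estimate can also be checked numerically: $M_s$ is the sup of the absolute value of a Brownian bridge on $[0,s]$, and its exact tail ($\asymp\mathrm e^{-2x^2/s}$) is even smaller than the rough bound used above, so the bound is comfortable. A Monte Carlo sketch (sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

s, x = 1.0, 1.5
n_paths, n_steps = 20_000, 400

# Simulate Brownian paths on [0, s] and subtract the linear pinning term
# to obtain X_t - (t/s) X_s, a Brownian bridge, on the grid.
dW = rng.normal(0.0, np.sqrt(s / n_steps), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
t = np.linspace(s / n_steps, s, n_steps)
bridge = W - (t / s) * W[:, -1:]
M = np.abs(bridge).max(axis=1)               # discretized M_s per path

p_emp = np.mean(M >= x)
print(p_emp, np.exp(-x**2 / (2.0 * s)))      # empirical tail vs the rough bound e^{-x^2/(2s)}
```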
The series $\sum\limits_nn\mathrm e^{-cn}$ is summable for every $c\gt0$ hence, by Borel–Cantelli, for every $x\gt0$, $\|X-X^{\sigma_n}\|\leqslant x$ for every $n$ large enough, almost surely. This proves that $X^{\sigma_n}\to X$ uniformly, almost surely.
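The $\sqrt{\log n/n}$ rate of this uniform convergence is visible on a single simulated path (a sketch; the seed and grid sizes are illustrative). Since $X^{\sigma_n}=aX+(1-a)L_n$ with $L_n$ the linear interpolation over the regular subdivision, the error is $(1-a)\sup|X-L_n|$:

```python
import numpy as np

rng = np.random.default_rng(2)

T, a = 1.0, 1.0 / np.sqrt(2.0)
N = 240_000                       # fine grid, divisible by each n below
t_fine = np.linspace(0.0, T, N + 1)
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / N), N))])

errs = []
for n in (10, 100, 1000):
    # Linear interpolation of X over the regular subdivision with n intervals.
    L = np.interp(t_fine, np.linspace(0.0, T, n + 1), X[:: N // n])
    errs.append((1.0 - a) * np.abs(X - L).max())   # sup norm of X - X^{sigma_n}
    print(n, errs[-1])                             # shrinks roughly like sqrt(log(n)/n)
```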
-
I assume that the expression in brackets in the definition of $b_{\sigma}$ means an indicator function. I get that $X^{\sigma}_{t_k}=X_{t_k}-(1-a)X_{t_{k-1}}$, or am I wrong? This is an interesting example, but maybe one can use some fact to construct a more straightforward one? – 2012-03-18
-
Yes, this is the [Iverson bracket](http://en.wikipedia.org/wiki/Iverson_bracket). // No, $X^\sigma_t=X_t$ for every $t$ in $\sigma$. – 2012-03-18
-
Sorry, I still don't get it. This is how I proceed: let $t_k\in\sigma$; then $$X^{\sigma}_{t_k}=aX_{t_k}+(1-a)\int^{t_k}_{t_{k-1}}\frac{X_{t_k}-X_{t_{k-1}}}{t_k-t_{k-1}}ds=X_{t_k}-(1-a)X_{t_{k-1}}.$$ Where is my mistake? – 2012-03-18
-
In the fact that the integral should start at $s=0$ and not at $s=t_{k-1}$, yielding the value $\sum\limits_{i=1}^k(X_{t_i}-X_{t_{i-1}})=X_{t_k}$. – 2012-03-18
-
Thank you, I mistakenly kept only one term of the sum. – 2012-03-18