5

Consider a continuous time real-valued Markov process $X_t$ given by an SDE: $ dX_t = \mu(X_t)dt+\sigma (X_t)dW_t. $ Let $\mu,\sigma\in C^1(\mathbb R)$ and $\sigma\ge0$. Moreover let us assume that $\mu,\sigma$ are such that there exists a unique solution for any initial value $X_0 = x$.

I guess that if for some $x\in \mathbb R$ we have $\sigma(x)>0$, then there exists a neighborhood $U(x)$ such that for all $y\in U(x)$ and for any neighborhood $U(y)$ there exists $t'>0$ such that $\mathsf P\{X_t \in U(y)\mid X_0 = x\}>0$ for all $t\ge t'$. Does anybody have an idea how to prove it?
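For anyone who wants to experiment before proving anything, here is a small Euler–Maruyama sketch that estimates such a hitting probability by Monte Carlo. The coefficients are hypothetical, chosen only for illustration (a mean-reverting drift and a noise term bounded away from zero near the start point):

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, t, n_steps, n_paths, rng):
    """Simulate n_paths solutions of dX = mu(X) dt + sigma(X) dW up to time t."""
    dt = t / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + mu(x) * dt + sigma(x) * dw
    return x

# Hypothetical coefficients, for illustration only.
rng = np.random.default_rng(0)
xt = euler_maruyama(lambda x: -x, lambda x: 1.0 + 0.1 * np.abs(x),
                    x0=0.0, t=1.0, n_steps=200, n_paths=20_000, rng=rng)
# Empirical estimate of P(X_1 in (0.4, 0.6) | X_0 = 0)
p_hat = float(np.mean((xt > 0.4) & (xt < 0.6)))
```

The estimate comes out strictly positive, which is what the conjecture predicts; of course a simulation is not a proof.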

  • @George: thank you very much, it will be awesome. – 2011-07-30

3 Answers

4

Yes, your guess is correct, but we can say a lot more than that. If $\sigma,\mu$ are continuous, $X_0=x$ and $\sigma > 0$ on some connected neighbourhood $U$ of $x$, then $\mathbb{P}(X_t\in V) > 0$ for all positive times $t$ and nonempty open sets $V\subseteq U$. We can go even further than this though. If $\gamma\colon[0,t^\prime]\to\mathbb{R}$ is continuous with $\gamma(0)=x$ and $\sigma(\gamma(t)) > 0$ for all $t\in[0,t^\prime]$, then $\mathbb{P}(\sup_{t\le t^\prime}\vert X_t-\gamma(t)\vert < \epsilon) > 0$ for all positive $\epsilon$. Stated another way, the support of the paths of $X$ (over a finite time interval) contains all continuous paths starting from $x$ along which $\sigma$ is positive. These statements also hold in the more general case of diffusions in $\mathbb{R}^n$.
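A quick Monte Carlo sketch of the tube statement, taking $X$ a standard Brownian motion ($\mu=0$, $\sigma=1$, so $\sigma$ is positive along every path) and the hypothetical target path $\gamma(t)=\sin t$:

```python
import numpy as np

def tube_probability(mu, sigma, gamma, x0, t_max, eps, n_steps, n_paths, rng):
    """Monte Carlo estimate of P(sup_{t<=t_max} |X_t - gamma(t)| < eps)."""
    dt = t_max / n_steps
    times = np.linspace(0.0, t_max, n_steps + 1)
    x = np.full(n_paths, float(x0))
    inside = np.abs(x - gamma(times[0])) < eps
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + mu(x) * dt + sigma(x) * dw
        inside &= np.abs(x - gamma(times[k + 1])) < eps
    return float(inside.mean())

rng = np.random.default_rng(1)
# mu = 0, sigma = 1 (Brownian motion); gamma(t) = sin(t) is an illustrative path.
p = tube_probability(lambda x: 0.0 * x, lambda x: np.ones_like(x),
                     gamma=np.sin, x0=0.0, t_max=1.0, eps=0.8,
                     n_steps=200, n_paths=20_000, rng=rng)
```

The estimated tube probability is small but clearly positive, consistent with the support theorem stated above.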

In fact, it is not necessary to assume that $X$ is a diffusion at all, only that it can be expressed as a stochastic integral. That is, $\sigma,\mu$ do not have to be specified as functions of $X$. In the $n$-dimensional case, we can write $ dX^i_t=\sum_{j=1}^m\sigma^{ij}_t\,dW^j_t+\mu^i_t\,dt. $ Here, $X=(X^1,\ldots,X^n)$ is an $n$-dimensional process with $X_0=x\in\mathbb{R}^n$ and $W=(W^1,\ldots,W^m)$ is an $m$-dimensional Brownian motion. You can consider $\sigma^{ij}_t$ and $\mu^i_t$ to be functions of $X_t$ if you like, but that is not necessary. All that matters is that they are predictable processes (which includes all continuous and adapted processes).

First, supposing that $\mu,\sigma$ satisfy some boundedness conditions whenever $X$ is close to $x$, then there is a positive probability of $X$ remaining arbitrarily close to $x$. I will use $\Vert\cdot\Vert$ to denote the Euclidean norms on $\mathbb{R}^n$ and on the $n\times n$ matrices.

1) Suppose there exists $K > 0$ such that $\Vert(\sigma_t\sigma_t^T)^{-1}\Vert\le K$, $\Vert(\sigma_t\sigma_t^T)^{-1}\mu_t\Vert\le K$ and $\Vert\sigma_t\sigma^T_t\Vert\le K$ whenever $\Vert X_t - x\Vert < \delta$ (some positive $\delta$). Then, $\mathbb{P}(\sup_{t\le t^\prime}\Vert X_t-x\Vert < \epsilon) > 0$ for all positive $\epsilon$.

In the one dimensional case, we need only suppose that $\sigma^{-2}\mu\le K$ and $\sigma^2\le K$ (there is no need to assume that $\sigma$ is bounded away from zero). I'll prove (1) in a moment. First, it has the following consequence.

2) Let $\gamma\colon[0,t^\prime]\to\mathbb{R}^n$ be continuous such that $\gamma(0)=x$ and there is $K > 0$ with $\Vert(\sigma_t\sigma_t^T)^{-1}\Vert\le K$, $\Vert(\sigma_t\sigma_t^T)^{-1}\mu_t\Vert\le K$ and $\Vert\sigma_t\sigma^T_t\Vert\le K$ whenever $\Vert X_t-\gamma(t)\Vert < \delta$ (some positive $\delta$). Then, $\mathbb{P}(\sup_{t\le t^\prime}\Vert X_t-\gamma(t)\Vert < \epsilon) > 0$ for all positive $\epsilon$.

In particular, the conditions are satisfied if $\sigma_t=\sigma(X_t),\mu_t=\mu(X_t)$ are continuous functions of $X_t$ and $\sigma(\gamma_t)\sigma(\gamma_t)^T$ is nonsingular, implying the statements in the first paragraph of this post.

To see that (2) follows from (1), consider the case where $\gamma$ is continuously differentiable (this is enough, as all continuous functions can be uniformly approximated by smooth ones). Supposing that the requirements of (2) are met, look at $\tilde X_t=X_t-\gamma(t)$. This satisfies the requirements of (1) with $\mu$ replaced by $\mu-\gamma^\prime$. So, by (1), $\tilde X$ has positive probability of remaining arbitrarily close to 0, and $X$ has positive probability of remaining arbitrarily close to $\gamma$.

Now, let's prove (1). We can suppose that $\epsilon < \delta$ and, by stopping $X$ as soon as $\Vert X-x\Vert$ hits $\delta$, we can suppose that $\Vert(\sigma_t\sigma_t^T)^{-1}\Vert$, $\Vert(\sigma_t\sigma_t^T)^{-1}\mu_t\Vert$ and $\Vert\sigma_t\sigma_t^T\Vert$ are always bounded by $K$. Then, there is a predictable process $\nu$ with $\Vert\nu\Vert\le K$ and $\mu_t=\sigma_t\sigma_t^T\nu_t$. Define a new measure $\mathbb{Q}$ by the Girsanov transform $ \frac{d\mathbb{Q}}{d\mathbb{P}}=\exp\left(-\sum_{j=1}^m\int_0^{t^\prime}(\sigma^T_t\nu_t)^j\,dW^j_t-\frac12\int_0^{t^\prime}\nu_t^T\sigma_t\sigma^T_t\nu_t\,dt\right). $ This is an equivalent measure to $\mathbb{P}$ and, by the theory of Girsanov transforms, $\tilde W_t=W_t+\int_0^t\sigma^T_s\nu_s\,ds$ is a $\mathbb{Q}$-Brownian motion. As we have $ dX^i_t=\sum_{j=1}^m\sigma^{ij}_t\,d\tilde W^j_t $ this reduces the problem to the case where $\mu$ is zero. So, let's suppose that $\mu=0$ from now on.
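The measure-change step can be checked numerically in the simplest constant-coefficient case: simulate the driftless process under $\mathbb{Q}$ and reweight by $d\mathbb{P}/d\mathbb{Q}$ to recover probabilities for the drifted process. The constants below are arbitrary illustrative choices:

```python
import numpy as np
from math import erf, sqrt

# Constant coefficients (illustrative values only): dX = mu dt + sigma dW.
mu_c, sigma_c, x0, T = 0.5, 1.0, 0.0, 1.0
rng = np.random.default_rng(2)
n = 200_000

# Under Q, simulate the driftless process Y_T = x0 + sigma * B_T.
b = rng.normal(0.0, sqrt(T), size=n)
y = x0 + sigma_c * b

# Girsanov weight dP/dQ; for constant coefficients it is a function of B_T only.
theta = mu_c / sigma_c
z = np.exp(theta * b - 0.5 * theta**2 * T)

# Reweighting the driftless simulation reproduces the law of the drifted process.
a = 1.0
p_girsanov = float(np.mean(z * (y < a)))
# Exact answer for the drifted process: X_T ~ N(x0 + mu*T, sigma^2 T).
p_exact = 0.5 * (1.0 + erf((a - x0 - mu_c * T) / (sigma_c * sqrt(2 * T))))
```

The two numbers agree up to Monte Carlo error, illustrating why it suffices to treat $\mu=0$.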

In the one dimensional case, where $dX_t=\sigma_t\,dW_t$, it is enough to suppose that $\sigma_t^2\le K$. This is because a stochastic time change can be used to write the local martingale $X$ as $X_t=x+B_{A_t}$ where $B$ is a Brownian motion with respect to its natural filtration and $A_t=\int_0^t\sigma_s^2\,ds\le Kt$. Then, $\sup_{t\le t^\prime}\vert X_t-x\vert < \epsilon$ whenever $\sup_{t\le Kt^\prime}\vert B_t\vert < \epsilon$. However, standard Brownian motion has nonzero probability of remaining within a positive distance $\epsilon$ of the origin (see the answers to this math.SE question), so $\{\sup_{t\le t^\prime}\vert X_t-x\vert < \epsilon\}$ has positive probability.
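The confinement probability for standard Brownian motion has a classical eigenfunction series, $\mathbb{P}(\sup_{s\le t}\vert B_s\vert<\epsilon)=\frac4\pi\sum_{k\ge0}\frac{(-1)^k}{2k+1}e^{-(2k+1)^2\pi^2 t/(8\epsilon^2)}$, which is strictly positive for every $t,\epsilon>0$. A short sketch evaluating the series against a discretized simulation:

```python
import numpy as np

def stay_prob(eps, t, n_terms=50):
    """P(sup_{s<=t} |B_s| < eps) for standard Brownian motion (eigenfunction series)."""
    k = np.arange(n_terms)
    return float((4.0 / np.pi) * np.sum((-1.0)**k / (2 * k + 1)
                 * np.exp(-(2 * k + 1)**2 * np.pi**2 * t / (8.0 * eps**2))))

# Monte Carlo check on discretized Brownian paths (slight upward bias from
# monitoring the sup only on the grid).
rng = np.random.default_rng(3)
eps, t, n_steps, n_paths = 1.0, 1.0, 500, 20_000
dt = t / n_steps
b = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
p_mc = float(np.mean(np.max(np.abs(b), axis=1) < eps))
p_series = stay_prob(eps, t)
```

Both values are strictly positive and agree up to discretization and sampling error.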

In the multidimensional case we need to also assume that $\Vert(\sigma\sigma^T)^{-1}\Vert\le K$. We can then reduce to the one-dimensional case. Setting $Y=\Vert X-x\Vert^2$, integration by parts gives $ \begin{align} dY_t&=\tilde\sigma_t\,d\tilde W_t+\tilde\mu_t\,dt,\\ \tilde\sigma_t&=2\sqrt{(X_t-x)^T\sigma_t\sigma_t^T(X_t-x)},\\ \tilde\mu_t&={\rm Tr}(\sigma_t\sigma^T_t),\\ \tilde W_t&=2\sum_{i,j}\int_0^t1_{\{X_s\not=x\}}\tilde\sigma_s^{-1}(X^i_s-x^i)\sigma^{ij}_s\,dW^j_s. \end{align} $ You can check that $\tilde W$ has quadratic variation $[\tilde W]_t=t$ so that, by Lévy's characterization of Brownian motion, $\tilde W$ is a standard Brownian motion. Then, $\tilde\sigma_t$, $\tilde\sigma_t^{-1}$ and $\tilde\mu_t$ are bounded for $Y_t$ lying in any given compact subset of $(0,\infty)$. If we let $\tau$ be the first time at which $Y$ hits $\epsilon^2/2$ then, applying (1) in the one-dimensional case to $Y_{\tau+t}$, there is a positive probability that $\sup_{t\le t^\prime}\vert Y_{\tau+t}-\epsilon^2/2\vert < \epsilon^2/2$. However, in this case, we have $\sup_{t\le t^\prime}\Vert X_t-x\Vert < \epsilon$.
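As a sanity check on this reduction, take the simplest nondegenerate case $\sigma_t=I$ in $\mathbb{R}^2$ (so $X$ is a planar Brownian motion started away from $x$) and verify numerically that the increments of $\tilde W$ accumulate quadratic variation $[\tilde W]_t\approx t$:

```python
import numpy as np

rng = np.random.default_rng(6)
n_steps, t_max = 50_000, 1.0
dt = t_max / n_steps
x_star = np.array([1.0, 0.0])            # the fixed point x
x = x_star + np.array([0.5, 0.5])        # X_0, away from x so sigma_tilde > 0
qv = 0.0
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=2)   # sigma_t = identity
    d = x - x_star
    sigma_tilde = 2.0 * np.linalg.norm(d)       # 2 sqrt((X-x)^T sigma sigma^T (X-x))
    dwtilde = 2.0 * (d @ dw) / sigma_tilde      # increment of the driving Wtilde
    qv += dwtilde**2
    x = x + dw
# Quadratic variation of Wtilde over [0, t_max] should be close to t_max.
```

Each increment is the projection of an isotropic Gaussian step onto the unit radial direction, so it is exactly $N(0,dt)$, matching Lévy's characterization.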


Finally, in the $n$-dimensional case, I'll give an example to show that $X$ need not have a positive probability of remaining close to its starting point, even when $\mu=0$ and $\sigma$ is bounded. Consider the two dimensional case, $n=2$, with $m=1$ and $\sigma_t=R\hat X_t$, where $R$ is the linear map giving a rotation by 90 degrees and $\hat x\equiv1_{\{x\not=0\}}x/\Vert x\Vert$. So, $ dX_t=R\hat X_t\,dW_t $ for a Brownian motion $W$. Then, $X^T\,dX=0$ and integration by parts shows that if $X_0\not=0$ then $\Vert X\Vert$ increases deterministically, $ \begin{align} \Vert X_t\Vert^2&=\Vert X_0\Vert^2+\sum_{i=1}^2[X^i]_t=\Vert X_0\Vert^2+\int_0^t\hat X_s^TR^TR\hat X_s\,ds\\ &=\Vert X_0\Vert^2+t. \end{align} $
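A simulation sketch of this counterexample: with an Euler scheme for $dX_t=R\hat X_t\,dW_t$, each step adds exactly $dW^2$ to the squared norm (since $R\hat X\perp X$ and $\Vert R\hat X\Vert=1$), so $\Vert X_t\Vert^2$ should track $\Vert X_0\Vert^2+t$ up to a small random error:

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, t_max = 50_000, 1.0
dt = t_max / n_steps
x = np.array([1.0, 0.0])                     # X_0 != 0
for _ in range(n_steps):
    xhat = x / np.linalg.norm(x)
    r_xhat = np.array([-xhat[1], xhat[0]])   # R = rotation by 90 degrees
    x = x + r_xhat * rng.normal(0.0, np.sqrt(dt))
# ||X_t||^2 grows deterministically: ||X_0||^2 + t = 2.0 here
sq = float(x @ x)
```

The paths spiral outwards at a deterministic rate, so $X$ has no chance of staying near its starting point.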

1

By path continuity alone, if $x$ is any starting point and $U$ is any neighborhood of $x$, then the exit time $T:=\inf\{t\ge 0: X_t\notin U\}$ is strictly positive, with probability 1. Thus for small enough $t>0$ (how small depends on both $x$ and $U$), $ P\{X_s\in U\hbox{ for all } s\in[0,t] | X_0=x\}=P\{T>t | X_0=x\}>0. $ There is no need to assume $\sigma(x)>0$.
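A short simulation of the exit-time fact (hypothetical coefficients, $U=(-1,1)$, $X_0=0$): the probability of remaining in $U$ up to time $t$ increases toward 1 as $t\downarrow0$, and is positive for each $t$ in the range tested:

```python
import numpy as np

rng = np.random.default_rng(5)
mu_f = lambda x: 2.0 * x                 # hypothetical continuous drift
sigma_f = lambda x: np.ones_like(x)      # hypothetical diffusion coefficient
n_paths, n_steps = 20_000, 200
probs = []
for t in (0.5, 0.1, 0.02):
    dt = t / n_steps
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        x = x + mu_f(x) * dt + sigma_f(x) * rng.normal(0.0, np.sqrt(dt), size=n_paths)
        alive &= np.abs(x) < 1.0
    probs.append(float(alive.mean()))    # estimate of P(T > t | X_0 = 0)
```

The estimates increase as $t$ shrinks, consistent with $T>0$ almost surely.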

  • I guess you answered the first version of the question. Unfortunately, it was wrong and I had edited it two hours before your answer. – 2011-07-28
1

Let me try again, this time to answer the correct question!

If, in addition to your other hypotheses, the function $\sigma$ is everywhere strictly positive, then $X_t$ admits a (continuous, strictly positive) density function $p(t,x,y)$: $ \mathbf{P}\{X_t\in A | X_0=x\} = \int_A p(t,x,y)\,dy $ for each Borel subset $A$ (and all $x$). [See, for example, section 4.11 of the book of Ito and McKean.] This is more than enough to answer your question in the affirmative.

Suppose now that $\sigma(x)>0$ for a certain fixed $x$. Then there is an open interval $U(x)$ containing $x$ such that $\inf_{y\in U(x)}\sigma(y) >0$. The remarks of the preceding paragraph apply to the diffusion (call it $Y$) obtained by killing $X$ upon its first exit from $U(x)$. (This amounts to restricting $\sigma$ and $\mu$ to $U(x)$, employing Dirichlet boundary conditions at the endpoints of $U(x)$.) In particular, $Y_t$ admits a density function $q(t,y,z)$ (with $q(t,y,z)>0$ for all $t>0$ and all $y,z$ in $U(x)$) with respect to Lebesgue measure. We then have, for any fixed $y\in U(x)$, $ \mathbf{P}\{X_t\in A | X_0=y\} \geq \mathbf{P}\{X_t\in A, T>t | X_0=y\}, $ where $T$ is the first exit time of $X$ from $U(x)$. The right side of the above display is equal to $ \mathbf{P}\{Y_t\in A | Y_0=y\} =\int_A q(t,y,z)\,dz. $ As before, this implies a positive answer to your question in the general case.
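For the simplest instance of a killed diffusion (Brownian motion, $\mu=0$, $\sigma=1$, killed at the endpoints of $U=(0,1)$), the transition density has an explicit eigenfunction expansion, and one can check numerically that it is strictly positive in the interior, as used above:

```python
import numpy as np

def killed_bm_density(t, y, z, n_terms=200):
    """Density q(t, y, z) of Brownian motion on (0, 1) killed at the endpoints."""
    k = np.arange(1, n_terms + 1)
    return float(2.0 * np.sum(np.sin(k * np.pi * y) * np.sin(k * np.pi * z)
                              * np.exp(-0.5 * k**2 * np.pi**2 * t)))

# q(t, y, z) > 0 for every t > 0 and interior y, z; its total mass in z is
# the survival probability P(T > t | Y_0 = y) < 1.
zs = np.linspace(0.05, 0.95, 19)
vals = [killed_bm_density(0.1, 0.3, z) for z in zs]
```

Every sampled value is strictly positive, and the Riemann sum of the density stays below 1, as a subprobability density should.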

  • The book I cited (Ito & McKean) deals with very general diffusions with state space allowed to be any subinterval of the real line. – 2011-07-29