
Let $B_{t}$ be a Brownian motion with respect to a filtration $\mathcal{F}_{t}$. Is $(B_{t}+t)^{2}$ a Markov process? Thanks!

  • @George $X$ is Markov in the usual sense; see my comment to @dtbkelly. (2011-03-26)

3 Answers


There are two possible filtrations here -- the original filtration $\mathcal{F}_t$ generated by the Brownian motion, and the one generated by the process $X$, which I'll denote by $\mathcal{F}^X_t$. So there are two ways to interpret this question: (i) is $X$ Markov with respect to $\mathcal{F}_t$, and (ii) is $X$ Markov with respect to its own filtration $\mathcal{F}^X_t$? Unsurprisingly, the answer to (i) is no, $X$ is not Markov. To see this, it is straightforward to compute $\mathbb{E}[X_t\vert\mathcal{F}_s]$ for times $s < t$:
$$\mathbb{E}[X_t\vert\mathcal{F}_s]=(B_s+t)^2+t-s=X_s + 2(t-s)(B_s+s) +(t-s)^2+(t-s).$$
Due to the dependence on $B_s$, which is not uniquely determined by $X_s$, this is not a function of $X_s$, so the process is not Markov (which is Byron's point in his answer).
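
As a concrete illustration (with arbitrarily chosen values of $s$, $t$ and $X_s$, not from the original answer), the two values of $B_s$ compatible with the same $X_s$ give different conditional means for $X_t$:

```python
import math

# With X_s = (B_s + s)^2 = a fixed, the two admissible values of B_s give
# different values of E[X_t | F_s] = (B_s + t)^2 + (t - s).
# The parameter values below are arbitrary choices for illustration.
s, t, a = 1.0, 2.0, 0.25
b_plus = -s + math.sqrt(a)    # B_s = -0.5
b_minus = -s - math.sqrt(a)   # B_s = -1.5

def cond_mean(b):
    """E[X_t | B_s = b] for the process X_t = (B_t + t)^2."""
    return (b + t) ** 2 + (t - s)

print(cond_mean(b_plus), cond_mean(b_minus))   # 3.25 vs 1.25
```

Both starting points have $X_s = 0.25$, yet the conditional means of $X_t$ differ, so $\mathbb{E}[X_t\vert\mathcal{F}_s]$ cannot be a function of $X_s$ alone.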

The second question (ii) is a bit harder because computing the distribution of $X_t$ conditioned on $\mathcal{F}^X_s$ is tricky. The, perhaps surprising, answer to (ii) is yes, $X$ is Markov with respect to its own filtration! To prove this, it is necessary to show that $\mathbb{E}[f(X_t)\mid\mathcal{F}^X_s]$ is a function of $X_s$ (for times $s < t$ and bounded measurable functions $f$). As a first step, note that, as the Brownian motion $B$ is Markov, $\mathbb{E}[f(X_t)\mid\mathcal{F}_s]=g(B_s)$ for some measurable function $g$. Applying the tower law for conditional expectations,
$$\mathbb{E}[f(X_t)\mid\mathcal{F}^X_s]=\mathbb{E}[\mathbb{E}[f(X_t)\mid\mathcal{F}_s]\mid\mathcal{F}^X_s]=\mathbb{E}[g(B_s)\mid\mathcal{F}^X_s].$$
So, to show that $X$ is Markov under its own filtration, we only have to show that the distribution of $B_s$ conditioned on $\mathcal{F}^X_s$ depends only on $X_s$. Now, given $X_s$, $B_s$ can take only one of the two possible values $-s\pm\sqrt{X_s}$, so we have to show that the probability of these two possibilities does not depend on the history of $X$. There are two main ways that I can think of showing this.

Direct Computation

We can directly compute the distribution of $B_s$ conditioned on $\mathcal{F}^X_s$ by breaking the time interval $[0,s]$ into $n$ discrete steps, which allows it to be computed as a ratio of probability density functions, and then taking the limit $n\to\infty$. Writing $h=s/n$ and $\delta w_k\equiv w_k-w_{k-1}$, the probability density function of $\hat B\equiv(B_{h},B_{2h},\ldots,B_{nh})$ can be written out by applying the independent Gaussian increments property as
$$p(w)=(2\pi h)^{-\frac{n}{2}}\exp\left(\frac{-1}{2h}\sum_{k=1}^n(\delta w_k)^2\right).$$
The distribution of $B_s$ conditional on $\hat X\equiv(X_h,X_{2h},\ldots,X_{nh})$ is simply given by a ratio of sums over the probability density function $p$,
$$\mathbb{P}\left(B_s=-s+\sqrt{X_s}\;\Big\vert\;\hat X\right)=\frac{\sum_{w\in P}p(w)}{\sum_{w\in P}p(w)+\sum_{w\in P^\prime}p(w)}.$$
Here, $P$ is the set of discrete paths for $\hat B$ agreeing with the values of $X$ and ending at $-s+\sqrt{X_s}$, and $P^\prime$ is the similar set of paths ending at $-s-\sqrt{X_s}$. If, as $w$ runs through $P$, we set $w^\prime_k\equiv-2kh-w_k$, then it can be seen that $w^\prime$ runs through $P^\prime$. So $\delta w^\prime_k=-2h-\delta w_k$ and
$$\begin{aligned}
\sum_{w\in P^\prime}p(w)&=\sum_{w\in P}p(w^\prime)=\sum_{w\in P}(2\pi h)^{-\frac{n}{2}}\exp\left(\frac{-1}{2h}\sum_{k=1}^n(-2h-\delta w_k)^2\right)\\
&=\sum_{w\in P}p(w)\exp\left(-2nh-2w_n\right).
\end{aligned}$$
As $2nh+2w_n=2\sqrt{X_s}$ for $w\in P$, this can be plugged into the expression above,
$$\mathbb{P}\left(B_s=-s+\sqrt{X_s}\;\Big\vert\;\hat X\right)=\frac{1}{1+e^{-2\sqrt{X_s}}}.$$
This only depends on $X_s$ and, letting $n$ go to infinity, the same expression holds for the probability conditioned on $\mathcal{F}^X_s$.
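
As a numerical sanity check of this limiting formula (a sketch I am adding, not part of the proof), one can use the fact that the conditional probability depends only on $X_s$: sample $B_s\sim N(0,s)$, keep the samples whose $X_s$ falls in a small bin around a fixed $x$, and compare the empirical frequency of the branch $B_s=-s+\sqrt{X_s}$ with $1/(1+e^{-2\sqrt{x}})$. All parameter values below are arbitrary choices:

```python
import math
import random

# Monte Carlo check of P(B_s = -s + sqrt(X_s) | X_s = x) = 1/(1 + exp(-2*sqrt(x))).
# Parameters (s, x, bin width eps, sample size n) are arbitrary illustration values.
random.seed(0)
s, x, eps, n = 1.0, 0.25, 0.02, 1_000_000
hits = total = 0
for _ in range(n):
    b = random.gauss(0.0, math.sqrt(s))   # B_s ~ N(0, s)
    if abs((b + s) ** 2 - x) < eps:       # condition on X_s close to x
        total += 1
        hits += (b + s > 0)               # the branch B_s = -s + sqrt(X_s)

empirical = hits / total
predicted = 1.0 / (1.0 + math.exp(-2.0 * math.sqrt(x)))
print(empirical, predicted)               # both should be close to 0.73
```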

Girsanov Transformations

The theory of Girsanov transformations tells us that, defining $U=\exp(-B_s-\frac12s)$ and the new measure $\mathbb{Q}=U\cdot\mathbb{P}$, the process $\tilde B_u\equiv B_u+u$ is a standard $\mathbb{Q}$-Brownian motion on the interval $[0,s]$. Also write $V=U^{-1}=\exp(\tilde B_s-\frac12s)$, so that $\mathbb{P}=V\cdot\mathbb{Q}$. Under the measure $\mathbb{Q}$, symmetry on reflecting the Brownian motion about zero shows that $\tilde B_s$ takes the values $\pm\sqrt{X_s}$ each with probability 1/2 when conditioned on $\mathcal{F}^X_s$. The conditional expectation under the $\mathbb{P}$ measure can be converted to a conditional expectation under $\mathbb{Q}$,
$$\begin{aligned}
\mathbb{E}[g(B_s)\mid\mathcal{F}^X_s]&=\mathbb{E}_{\mathbb{Q}}[Vg(-s+\tilde B_s)\mid\mathcal{F}^X_s]/\mathbb{E}_{\mathbb{Q}}[V\mid\mathcal{F}^X_s]\\
&=\left(e^{\sqrt{X_s}}g(-s+\sqrt{X_s})+e^{-\sqrt{X_s}}g(-s-\sqrt{X_s})\right)/\left(e^{\sqrt{X_s}}+e^{-\sqrt{X_s}}\right).
\end{aligned}$$
This is a function of $X_s$, so $X$ is Markov. This method works because the change of measure "adding" a constant drift to a Brownian motion depends only on the value of the process at the end of the time interval, and is otherwise independent of the path taken.
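
As a small consistency check between the two methods (my addition, not in the original argument), the weight that this formula puts on $g(-s+\sqrt{X_s})$ is $e^{\sqrt{X_s}}/(e^{\sqrt{X_s}}+e^{-\sqrt{X_s}})=1/(1+e^{-2\sqrt{X_s}})$, the same probability obtained by the direct computation:

```python
import math

# Numerically confirm that the Girsanov weight equals the direct-computation
# probability 1/(1 + exp(-2*sqrt(x))) for a few sample values of x = X_s.
for x in (0.01, 0.25, 1.0, 4.0):
    r = math.sqrt(x)
    girsanov_weight = math.exp(r) / (math.exp(r) + math.exp(-r))
    direct_probability = 1.0 / (1.0 + math.exp(-2.0 * r))
    assert abs(girsanov_weight - direct_probability) < 1e-12
print("weights agree")
```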


Hint: If the process were Markov, then for $0< s\leq t$ we'd have $\mathbb{E}((B_t+t)^2 \mid F_s) = P_{s,t}\big((B_s+s)^2\big)$ for some transition function $P_{s,t}$.

Calculate the conditional expectation on the left and try to write it as a function of $(B_s+s)^2$.

  • Hi user7762, how do you derive $(t-s)+(B_{s}+t)^{2}$ for the LHS? (2011-03-22)

If $X_t:=(B_t+t)^2$ were Markovian, then you could (at the very least) determine the distribution of $X_t$ given $\{X_r\}_{r \leq s}$ using only the knowledge of $X_s$.

If you were given $B_s$ then this would certainly be enough to find the desired distribution, but crucially for a given $X_s$ there are two possible values for $B_s$. And these two values result in different distributions for $X_t$, which contradicts the process being Markovian.

The calculation would be something like this. Let $(B_s+s)^2 = a$, so that $B_s = -s \pm \sqrt{a}$. But then
$$(B_t+t)^2 = \big(B_t -B_s +(t-s) + (B_s +s)\big)^2 = (B_t -B_s +(t-s))^2+2(B_t -B_s +(t-s))(\pm \sqrt{a}) + a.$$
The distribution of $B_t-B_s$ is of course known (it is a Brownian increment). But to uniquely determine the distribution of the above expression, you need to know the sign of $\pm \sqrt{a}$; indeed, the two choices lead to different distributions. And this is unknown if you are only given the value of $(B_s+s)^2$.
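
This can be seen numerically with a quick Monte Carlo sketch (arbitrary parameter values, not from the original answer): simulating $X_t$ from each of the two admissible values of $B_s$ gives visibly different conditional means.

```python
import math
import random

# Monte Carlo sketch: simulate X_t = (B_t + t)^2 starting from each of the two
# B_s values compatible with the same X_s = a, and compare conditional means.
# Parameter values are arbitrary choices for illustration.
random.seed(1)
s, t, a, n = 1.0, 2.0, 0.25, 200_000
dt = t - s
means = []
for b in (-s + math.sqrt(a), -s - math.sqrt(a)):      # both satisfy (b + s)^2 = a
    total = 0.0
    for _ in range(n):
        increment = random.gauss(0.0, math.sqrt(dt))  # B_t - B_s ~ N(0, t - s)
        total += (b + increment + t) ** 2
    means.append(total / n)
print(means)   # roughly [3.25, 1.25]: the two conditional distributions differ
```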

This thread has a nice discussion about when functions of Markov processes are still Markovian.

  • @George @Byron @dtbkelly Yes, as is obvious from my comments, I am in fact rather sure that $X$ is Markov in the usual sense, that is, with respect to its own filtration (in other words, the so-called miracle happens here as well). It is elementary to write a proof of this for biased nearest-neighbor random walks on $\mathbb{Z}$, and one should then be able to apply an invariance principle à la Donsker to solve the original question. I will try to post this. (2011-03-26)