
There is a purely probability-theoretic argument in the proof of Lévy's characterization of Brownian motion which I do not completely understand; I think it is rather easy. Suppose we know $E[e^{iu^{tr}(X_t-X_s)}|\mathcal{F}_s]=e^{-\frac{1}{2}|u|^2(t-s)}$

for all $u\in\mathbb{R}^d$. From this it should follow that $X_t-X_s$ is independent of $\mathcal{F}_s$ and normally distributed with mean $0$ and covariance matrix $(t-s)\,\mathrm{Id}_{d\times d}$; hence the $X^k$ should be independent Brownian motions ($^k$ denotes the $k$-th coordinate). My suggestion is to take expectations:

$E[e^{iu^{tr}(X_t-X_s)}]=E\big[E[e^{iu^{tr}(X_t-X_s)}|\mathcal{F}_s]\big]=e^{-\frac{1}{2}|u|^2(t-s)}$
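
Spelled out, the right-hand side is the chf of $N(0,(t-s)\,\mathrm{Id}_{d\times d})$, whose covariance matrix is diagonal:

$E[(X_t^i-X_s^i)(X_t^k-X_s^k)]=(t-s)\,\delta_{ik}.$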

Hence I know that $X_t-X_s$ has the right distribution. Furthermore, by the structure of the covariance matrix, I know that $X^i_t-X^i_s$ and $X^k_t-X^k_s$ are uncorrelated for $k\neq i$. Why should the $X^k$ be independent? I have uncorrelatedness of the increments; how do I get independence of the $X^k$? I guess this uses that every coordinate is a normally distributed r.v. and that for jointly normal r.v.s "uncorrelated implies independent". Moreover, I do not see how independence of $\mathcal{F}_s$ should follow. Any help would be appreciated. Thanks in advance!


  • @did: You're right, I should write jointly normal. (2012-07-17)

2 Answers


A very useful fact here is the following:

Lemma. Let $X,Y$ be random vectors, and consider the joint characteristic function $\phi(u,v) = E[e^{i u \cdot X + i v \cdot Y}]$. If we can factor $\phi$ as $\phi(u,v) = f(u) g(v)$, then $X,Y$ are independent.

Proof. Without loss of generality, assume $f(0) = g(0) = 1$. Then we see that $f(u) = \phi(u,0)$ is the chf of $X$, and likewise $g$ is the chf of $Y$. Thus $\phi$ is the joint chf of $(\tilde{X}, \tilde{Y})$ where $\tilde{X} \overset{d}{=} X$, $\tilde{Y} \overset{d}{=} Y$, and $\tilde{X}, \tilde{Y}$ are independent. Since the joint chf uniquely determines the joint distribution we have $(X,Y) \overset{d}{=} (\tilde{X}, \tilde{Y})$, which means $X,Y$ are independent.
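
For instance, here is the special case the question gestures at (assuming, say, centered variables): if $(X,Y)$ is jointly normal with $\operatorname{Cov}(X,Y)=0$ and variances $\sigma_X^2, \sigma_Y^2$, then

$\phi(u,v) = e^{-\frac{1}{2}(\sigma_X^2 u^2 + \sigma_Y^2 v^2)} = e^{-\frac{1}{2}\sigma_X^2 u^2}\, e^{-\frac{1}{2}\sigma_Y^2 v^2},$

so the lemma recovers the familiar fact that uncorrelated jointly normal variables are independent.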

Corollary. Let $X$ be a random vector, $\mathcal{G}$ a $\sigma$-field, and suppose the conditional chf $f(u) = E[e^{i u \cdot X} \mid \mathcal{G}]$ is deterministic. Then $X$ is independent of $\mathcal{G}$.

Proof. Let $Y$ be any $\mathcal{G}$-measurable random vector, with $g$ its chf. We compute the joint chf of $X,Y$:

$E[e^{i u \cdot X + iv \cdot Y}] = E[e^{i v \cdot Y} E[e^{i u \cdot X} \mid \mathcal{G}]] = E[e^{i v \cdot Y} f(u)] = g(v) f(u).$

By the previous lemma, $X$ and $Y$ are independent. Since the $\mathcal{G}$-measurable $Y$ was arbitrary, $X$ is independent of $\mathcal{G}$.

Given what you know, this corollary tells us that $X_t - X_s$ is independent of $\mathcal{F}_s$.
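
The same factorization idea handles the coordinates. The unconditional chf of the increment factors over coordinates:

$E[e^{iu^{tr}(X_t-X_s)}] = e^{-\frac{1}{2}|u|^2(t-s)} = \prod_{k=1}^{d} e^{-\frac{1}{2}u_k^2(t-s)},$

and iterating the lemma (the same proof works for any finite number of factors) shows that the coordinates $X_t^k - X_s^k$ are independent $N(0,\,t-s)$ random variables.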

  • Eldrege: Thanks for your answer! This is exactly what I was looking for. If you would add an argument why $X_t^k$ and $X_t^l$ are independent, I will accept your answer. However, I'm very thankful for your help! (2012-07-21)

The increments are multivariate normal with covariance matrix $(t-s)I$, where $I$ is the $d\times d$ identity matrix. So the coordinates of an increment are uncorrelated and jointly normal, hence independent. Now, if the increments in one coordinate are independent of the increments in another, then by summing you get that the coordinate sums are independent normals; summing the increments gives $X_t^k$ and $X_t^l$ for $k\neq l$.
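
A sketch of the summation, assuming $X_0 = 0$ as in the usual statement of Lévy's characterization: for any partition $0 = t_0 < t_1 < \cdots < t_n = t$,

$X_t^k = \sum_{j=1}^{n} \big(X_{t_j}^k - X_{t_{j-1}}^k\big),$

where the increment vectors over disjoint intervals are mutually independent and, within each increment, the coordinates are independent. So the full family $\{X_{t_j}^m - X_{t_{j-1}}^m\}_{j,m}$ is mutually independent; in particular the sum in coordinate $k$ is independent of the sum in coordinate $l$.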

  • Not obvious! $X_t^k = (X_t^k - X_{t-1}^k) + (X_{t-1}^k - X_{t-2}^k) + \cdots$, and the same telescoping sum works for $X_t^l$. (2012-07-17)