
I have a few questions I would like to confirm. If you could provide references, I would appreciate it very much.

1. $X$ and $Y$ are two random variables. Are the following deductions correct or not?

(1) $X \overset{d}{=} 0 \Rightarrow X \overset{a.s.}{=} 0$. I think this is correct; where can I find a proof?

(2) $X-Y \overset{d}{=} 0 \Rightarrow X \overset{d}{=} Y$. I think this is wrong.

(3) $X-Y \overset{a.s.}{=} 0 \Rightarrow X \overset{a.s.}{=} Y$. I think this is correct.

2. $\{X_t(\omega)\}$ is a stochastic process. How should one understand the variable $\sup_{0 \le s \le T} X_s(\omega)$? Do I understand it correctly: for each fixed $\omega$ (i.e., each sample path), there is a supremum of $\{X_s(\omega) \mid s \le T\}$, so each $\omega$ corresponds to a value, and this correspondence defines a random variable?

3. If $\{X_t(\omega)\}$ is an adapted process, then for $s < t$, how should one understand the formula $E(X_t \mid \mathcal{F}_s) = X_s$ a.s.?

4. What is the relation between $\sup_{0 \le s \le T} X_s^p(\omega)$ and $\left[\sup_{0 \le s \le T} X_s(\omega)\right]^p$, for positive $p$ and a positive process $\{X_t\}$?

  • These seem kind of independent; I would consider reposting them as separate questions. (2012-11-01)

1 Answer


1.1) I take it that $X=0$ in distribution means that $P_X=\delta_0$, where $\delta_0$ is the Dirac measure at $0$. If this is the case, then $P_X(\{0\})=\delta_0(\{0\})=1$, and hence $P(X=0)=1$. Another approach is to note that $X=0$ in distribution implies $P(|X|\leq x)=P(|0|\leq x)=1$ for every $x>0$, and hence $X=0$ a.s.
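To spell out the limiting step in the second approach (continuity of probability measures from above, applied to the nested decreasing events $\{|X|\leq 1/n\}$):

$$\{X=0\}=\bigcap_{n\geq 1}\left\{|X|\leq \tfrac{1}{n}\right\},\qquad P(X=0)=\lim_{n\to\infty}P\left(|X|\leq \tfrac{1}{n}\right)=1.$$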

1.2) If $X-Y=0$ in distribution, then by 1.1 we have $X-Y=0$ a.s., hence $X=Y$ a.s. (pure logic), and hence $X=Y$ in distribution (always true). The converse is not true; a counterexample is the following: consider $X$ uniform on $(-1,1)$ and $Y=-X$. (Thanks to did.)
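To check this counterexample explicitly: the uniform distribution on $(-1,1)$ is symmetric, so

$$Y=-X \overset{d}{=} X,\qquad\text{while}\qquad X-Y=2X \sim \mathrm{Uniform}(-2,2)\neq \delta_0.$$

So $X \overset{d}{=} Y$ holds, but $X-Y \overset{d}{=} 0$ fails, which shows the converse implication $X \overset{d}{=} Y \Rightarrow X-Y \overset{d}{=} 0$ is false.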

1.3) Yes, $X-Y=0$ a.s. means simply that $P(X-Y=0)=1$. But this is exactly the same as $P(X=Y)=1$ and hence $X=Y$ a.s.

2) If $(X_t)_{t\geq 0}$ is a stochastic process, then $\sup_{0\leq s\leq T}X_s$ is the random variable given by the pointwise identity $$\Big(\sup_{0\leq s\leq T}X_s \Big)(\omega)=\sup_{0\leq s\leq T}X_s(\omega)=\sup \{X_s(\omega)\mid 0\leq s\leq T\},$$ so your understanding is correct. (Strictly speaking, a supremum over an uncountable index set need not be measurable; it is, for example, when the paths are right-continuous, since the supremum over $[0,T]$ then equals the supremum over a countable dense subset.)

3) The conditional expectation $E[X_t\mid\mathcal{F}_s]$ is a random variable, so the statement that $E[X_t\mid \mathcal{F}_s]=X_s$ a.s. means that for almost all $\omega$ we have $E[X_t\mid\mathcal{F}_s](\omega)=X_s(\omega)$. For the conditional expectation to exist you need some kind of integrability assumption on $(X_t)_{t\geq 0}$. An integrable adapted process $(X_t)_{t\geq 0}$ that satisfies $E[X_t\mid\mathcal{F}_s]=X_s$ a.s. for all $0\leq s\leq t$ is called a martingale. When $\mathcal{F}_s$ is interpreted as the information available at time $s$, the martingale property states that the best estimate of $X_t$ given the information available at time $s$ (this is $E[X_t\mid \mathcal{F}_s]$) is the variable $X_s$ itself.
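A standard illustration (not part of the original question): standard Brownian motion $(B_t)_{t\geq 0}$ is a martingale with respect to its natural filtration, because the increment $B_t-B_s$ is independent of $\mathcal{F}_s$ and has mean $0$:

$$E[B_t\mid\mathcal{F}_s]=E[B_s+(B_t-B_s)\mid\mathcal{F}_s]=B_s+E[B_t-B_s]=B_s,\qquad 0\leq s\leq t.$$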

4) The two are the same, since $x\mapsto x^p$ is continuous and non-decreasing for $p>0$. In general, if $f\colon \mathbb{R}\to\mathbb{R}$ is a continuous and non-decreasing function and $(x_s)_{0\leq s\leq T}\subseteq \mathbb{R}$, then $$\sup_{0\leq s\leq T}f(x_s)\leq f\left(\sup_{0\leq s\leq T} x_s\right)$$ because $f$ is non-decreasing. Now use the following property of the supremum: there exists a sequence $(s_n)_{n\in\mathbb{N}}\subseteq [0,T]$ such that $$\sup_{0\leq s\leq T}x_s =\lim_{n\to\infty} x_{s_n}.$$ By continuity, $$f\left(\sup_{0\leq s\leq T} x_s\right)=f\left(\lim_{n\to\infty} x_{s_n}\right)=\lim_{n\to\infty} f(x_{s_n})\leq \sup_{0\leq s\leq T}f(x_s),$$ and hence $$f\left(\sup_{0\leq s\leq T} x_s\right)=\sup_{0\leq s\leq T}f(x_s).$$
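Applied to your question: for $p>0$, the map $x\mapsto x^p$ is continuous and non-decreasing on $[0,\infty)$, so for a positive process we get, for every $\omega$,

$$\sup_{0\leq s\leq T}X_s^p(\omega)=\Big(\sup_{0\leq s\leq T}X_s(\omega)\Big)^p.$$

This interchange is what lets one state, for example, Doob's $L^p$ maximal inequality for a non-negative martingale in either form: $E\big[\sup_{s\leq T}X_s^p\big]=E\big[\big(\sup_{s\leq T}X_s\big)^p\big]\leq \big(\tfrac{p}{p-1}\big)^p E[X_T^p]$ for $p>1$.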

  • @Stefan @did: Thanks for your help. My question is solved, and I have learned a lot in the process. I enjoy stackexchange.com, and I enjoy you kind folks! (2012-11-02)