Let $(X_t)$ be a stochastic process, and define a new stochastic process by $Y_t = \int_0^t f(X_s)\,ds$. Is it true in general that $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}(f(X_t))$? If not, under what conditions would we be allowed to interchange the derivative operator with the expectation operator?
When can we interchange the derivative with an expectation?
-
0@Jonas: No, it is not always true, but if you can interchange the expectation and the integral, then it is true, so you only have to derive the conditions under which such an operation is valid. Regards. – 2012-10-22
-
2Where could I find information about when such an operation is valid? – 2013-12-04
-
7A sufficient condition is that $$E\left(\int_0^tf(X_s)ds\right)=\int_0^tE(f(X_s))ds$$ and for that, some regularity of $(X_t)$ and $f$ and the finiteness of $$\int_0^tE(|f(X_s)|)ds$$ suffice. Keyword: Fubini. – 2016-10-26
1 Answer
Interchanging a derivative with an expectation or an integral can be done using the dominated convergence theorem. Here is a version of such a result.
Lemma. Let $X\in\mathcal{X}$ be a random variable and $g\colon \mathbb{R}\times \mathcal{X} \to \mathbb{R}$ a function such that $g(t, X)$ is integrable for all $t$ and $g$ is differentiable w.r.t. $t$. Assume that there is a random variable $Z$ such that $|\frac{\partial}{\partial t} g(t, X)| \leq Z$ a.s. for all $t$ and $\mathbb{E}(Z) < \infty$. Then $$\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) = \mathbb{E}\Bigl(\frac{\partial}{\partial t} g(t, X)\Bigr).$$
Proof. By the mean value theorem, for every $h \neq 0$ there exists $\tau(h)$ between $t$ and $t+h$ such that $$\frac{g(t+h, X) - g(t, X)}{h} = \frac{\partial}{\partial t} g(\tau(h), X),$$ and hence, by assumption, $$\Bigl| \frac{g(t+h, X) - g(t, X)}{h} \Bigr| \leq Z.$$ Since the difference quotients converge pointwise to $\frac{\partial}{\partial t} g(t, X)$ as $h \to 0$, the dominated convergence theorem allows us to exchange limit and expectation: $$\begin{align*} \frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr) &= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{g(t+h, X) - g(t, X)}{h} \Bigr) \\ &= \mathbb{E}\Bigl( \lim_{h\to 0} \frac{g(t+h, X) - g(t, X)}{h} \Bigr) \\ &= \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(t, X) \Bigr). \end{align*}$$ This completes the proof.
In your case you would have $g(t, X) = \int_0^t f(X_s) \,ds$ and a sufficient condition to obtain $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}\bigl(f(X_t)\bigr)$ would be for $f$ to be bounded.
If you want to take the derivative only at a single point $t=t^\ast$, the domination $|\frac{\partial}{\partial t} g(t, X)| \leq Z$ is only required for $t$ in a neighbourhood of $t^\ast$. Variants of the lemma can be derived by using different convergence theorems in place of the dominated convergence theorem, e.g. by using the Vitali convergence theorem.
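As a quick numerical sanity check of the conclusion (a hypothetical example, not part of the original answer): take $X_t = \mu t + W_t$, a Brownian motion with drift, and $f = \tanh$, which is bounded by $1$, so the domination condition holds with $Z \equiv 1$. A Monte Carlo estimate of $\frac{d}{dt}\mathbb{E}(Y_t)$ should then agree with an estimate of $\mathbb{E}(f(X_t))$:

```python
import numpy as np

# Hypothetical example: X_t = mu*t + W_t (Brownian motion with drift),
# f = tanh is bounded by 1, so the lemma applies with Z = 1.
rng = np.random.default_rng(0)
mu, T, n_steps, n_paths = 1.0, 1.0, 500, 10_000
dt = T / n_steps
t_grid = np.linspace(0.0, T, n_steps + 1)

# simulate n_paths sample paths of X on the grid
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
X = mu * t_grid + W

# Y_t = int_0^t tanh(X_s) ds via the trapezoidal rule along each path
fX = np.tanh(X)
Y = np.concatenate(
    [np.zeros((n_paths, 1)),
     np.cumsum(0.5 * (fX[:, 1:] + fX[:, :-1]) * dt, axis=1)],
    axis=1)

# compare d/dt E(Y_t) (central finite difference) with E(f(X_t)) at t = 0.5
k = n_steps // 2
lhs = (Y[:, k + 1].mean() - Y[:, k - 1].mean()) / (2 * dt)
rhs = fX[:, k].mean()
print(f"d/dt E(Y_t) ~ {lhs:.4f},  E(f(X_t)) ~ {rhs:.4f}")
```

Since both estimates are computed from the same simulated paths, most of the Monte Carlo noise cancels and the two numbers match to within the time-discretisation error.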
-
0The uniform boundedness of $f$ seems to be a much too restrictive condition. – 2016-10-26
-
0@Did yes, it's only a sufficient condition. In the lemma I showed, $Z$ is allowed to depend on $X$, so you can do much better, and if you use the Vitali convergence theorem you get the condition that the $f(X_t)$ are uniformly integrable. Do you know better results than this? – 2016-10-26
-
0@Did ah, yes, your Fubini solution is more elegant. – 2016-10-27
-
1@jochen: except that $\int_0^t f(X_s)\,ds$ cannot be written as $g(t,X)$ for a fixed function $g$ and a fixed random variable $X$ :-(. – 2017-08-13
-
0@batman why not? You can have $X \in C\bigl( [0,\infty), \mathbb{R} \bigr)$ be the whole random path of the process $X$, and $g$ the function which integrates the path until time $t$. – 2017-08-14
-
0@jochen you seem to be describing a random function rather than a random variable. A random variable has one value per path. – 2017-08-14