
"To every Q-matrix $q$ corresponds a unique Markov process." I'm trying to understand Klenke's proof of the "existence" part of this proposition, namely that given a Q-Matrix $q$, there exists a Markov process $\mathfrak{X}$, whose Q-Matrix is $q$. (Theorem 17.25, see below)

Listed below is the beginning of his proof (the rest, which is irrelevant to my current question, can be found here). What I cannot figure out is how to show that $\mathfrak{X}$ (defined in the proof below) is a Markov process. I recalled Klenke's definition of a Markov process in a previous post.


Relevant definitions

  1. Let $E$ be a non-empty, countable set.

  2. A Q-matrix on $E$ is a function $q:E\times E\rightarrow\mathbb{R}$ such that

    i) $q(e,d)\geq0$ for all $e,d\in E$ with $e\neq d$,

    ii) $q(e,e)=-\sum_{d\neq e}q(e,d)$ for all $e\in E$,

    iii) $0<\lambda:=\sup_{e\in E}|q(e,e)|<\infty$

  3. Given an $E$-valued stochastic process $X=(X_t)_{t\in[0,\infty)}$ and a function $q:E\times E\rightarrow\mathbb{R}$, $q$ is the Q-matrix of $X$ iff for all $e,d\in E$ $\lim_{t\downarrow0}\frac{1}{t}\left(\mathrm{P}_e[X_t=d]-\delta_{e,d}\right)=q(e,d)$ ($\delta$ is Kronecker's delta)

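For concreteness, here is a small example of my own (not from Klenke): on $E=\{1,2\}$, the matrix $q=\begin{pmatrix}-\alpha & \alpha\\ \beta & -\beta\end{pmatrix}$ with $\alpha,\beta>0$ satisfies (i)-(iii), with $\lambda=\max(\alpha,\beta)$.
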
Theorem 17.25 $q$ is a Q-matrix $\implies$ $q$ is the Q-matrix of a unique Markov process.

Proof (This is only the beginning of Klenke's proof; the rest can be found here.)

Let $I$ be the unit matrix on $E$. Define $p(e,d):=\frac{1}{\lambda}q(e,d)+I(e,d)\space\space\mathrm{for\, }e,d\in E.$

Then $p$ is a stochastic matrix and $q=\lambda(p-I)$.
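(As a quick numerical sanity check, not part of Klenke's proof: for a hypothetical toy Q-matrix one can verify directly that $p$ is stochastic and $q=\lambda(p-I)$. The rates below are my own illustrative choice.)

```python
import numpy as np

# Hypothetical toy Q-matrix on E = {0, 1}; any rates alpha, beta > 0 would do.
alpha, beta = 1.5, 0.5
q = np.array([[-alpha,  alpha],
              [  beta, -beta]])

lam = np.abs(np.diag(q)).max()                 # lambda = sup_e |q(e, e)|
p = q / lam + np.eye(2)                        # p = (1/lambda) q + I

assert (p >= 0).all()                          # entries of p are nonnegative
assert np.allclose(p.sum(axis=1), 1.0)         # rows of p sum to 1, so p is stochastic
assert np.allclose(lam * (p - np.eye(2)), q)   # q = lambda (p - I)
```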

Let $\left(Y=(Y_n)_{n\in\mathbb{N}_0},(\mathrm{P}_e^Y)_{e\in E}\right)$ be a discrete Markov chain over the measurable space $S_Y=(\Omega_Y,\mathcal{A}_Y)$ with transition matrix $p$ and let $\left(T=(T_t)_{t\geq0},(\mathrm{P}_n^T)_{n\in\mathbb{N}_0}\right)$ be a Poisson process over the measurable space $S_T=(\Omega_T,\mathcal{A}_T)$ with rate $\lambda$. We may assume w.l.o.g. that $Y$ and $T$ are defined on the product space $S_Y\otimes S_T$.

Set $X_t:=Y_{T_t}$ and $\mathrm{P}_e:=\mathrm{P}_e^Y\otimes\mathrm{P}_0^T$. Then $\mathfrak{X}:=\left(X=(X_t)_{t\geq0},(\mathrm{P}_e)_{e\in E}\right)$ is a Markov process and $p_t(e,d):=\mathrm{P}_e[X_t=d]=\sum_{n=0}^\infty\mathrm{P}_0^T[T_t=n]\mathrm{P}_e^Y[Y_n=d]=e^{-\lambda t}\sum_{n=0}^\infty \frac{\lambda^nt^n}{n!}p^n(e,d)$
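
(Again only as an illustration, not part of the proof: the construction $X_t=Y_{T_t}$ and the displayed formula for $p_t$ can be checked by simulation. The sketch below assumes `numpy`/`scipy` and reuses the toy Q-matrix from above; the helper `sample_X_t` is my own, not Klenke's. It exploits that $e^{-\lambda t}\sum_n\frac{(\lambda t)^n}{n!}p^n=e^{\lambda t(p-I)}=e^{tq}$.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Same hypothetical toy Q-matrix as above.
alpha, beta = 1.5, 0.5
q = np.array([[-alpha,  alpha],
              [  beta, -beta]])
lam = np.abs(np.diag(q)).max()
p = q / lam + np.eye(2)

def sample_X_t(e, t):
    """One sample of X_t = Y_{T_t}: run the jump chain Y (matrix p) started at e
    for a Poisson(lam * t) number of steps."""
    n = rng.poisson(lam * t)                   # T_t under P_0^T
    state = e
    for _ in range(n):
        state = rng.choice(2, p=p[state])      # one step of Y
    return state

e, d, t = 0, 1, 0.7
mc = np.mean([sample_X_t(e, t) == d for _ in range(20_000)])
exact = expm(t * q)[e, d]                      # p_t(e, d) = e^{tq}(e, d)
print(mc, exact)                               # the two numbers should agree closely
```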

1 Answer


We wish to show that $\mathfrak{X}$ is a Markov process. (Throughout, we identify the countable state space $E$ with $\mathbb{N}_0$.) According to the definition of a Markov process, we need to show that the Markov property holds, namely $\forall e\in\mathbb{N}_0\ \forall s,t\in [0,\infty)\ \forall B\subseteq\mathbb{N}_0:\quad \mathrm{P}_e[X_{s+t}\in B|\mathcal{F}_s]=\mathrm{P}_{X_s}[X_t\in B]\quad\mathrm{P}_e\mathrm{-a.s.}$ (To qualify as a Markov process, $\mathfrak{X}$ must additionally satisfy that $\kappa:\mathbb{N}_0\times\mathcal{B}(\mathbb{N}_0)\rightarrow[0,1],\ (e,B)\mapsto\mathrm{P}_e[X\in B]$ is a stochastic kernel, but this is clear once we recall that the implicit topology on $\mathbb{N}_0$ is taken to be the discrete one, hence $\mathcal{B}(\mathbb{N}_0)=\mathbb{P}(\mathbb{N}_0)$.)

Let $e\in\mathbb{N}_0$, $s,t\in[0,\infty)$ and $B\subseteq\mathbb{N}_0$. Since probabilities as well as conditional probabilities are $\sigma$-additive and since $\mathbb{N}_0$ is countable, we may assume w.l.o.g. that $B$ is a singleton: $B=\{j\}$. So we need to show: $\mathrm{P}_e[X_{s+t}=j|\mathcal{F}_s]=\mathrm{P}_{X_s}[X_t=j]\quad\mathrm{P}_e\mathrm{-a.s.}$

By definition of "conditional probability", this amounts to showing that for every $G\in\mathcal{F}_s$, $\mathrm{P}_e[\{X_{s+t}=j\}\cap G]=\intop_G \mathrm{P}_{X_s}[X_t=j]\space\mathrm{dP}_e\space\space(*)$

By Dynkin's $\pi$-$\lambda$ theorem it suffices to consider a $G$ of the form $G=\bigcap_{i=0}^n \{X_{u_i}=k_i\}\space\space(**)$ for some $n\in\mathbb{N}_0$, $0 \leq u_0 < u_1 < \cdots < u_n \leq s$ and $k_0, k_1, \dots, k_n\in\mathbb{N}_0$: sets of this form constitute a $\pi$-system generating $\mathcal{F}_s$, both sides of $(*)$ are finite measures in $G$, and $\Omega=\bigcup_{k\in\mathbb{N}_0}\{X_s=k\}$ is a countable disjoint union of such sets, so agreement on sets of the form $(**)$ extends to all of $\mathcal{F}_s$.

Suppose $G$ takes this form. We restrict our attention to the case $n=0$, as the general case is proved analogously. Assume therefore $n=0$ and let $u\in[0,s]$ and $k\in\mathbb{N}_0$. Decomposing the domain of integration according to the value of $X_s$, we have $\begin{align}\intop_{X_u=k}\mathrm{P}_{X_s}[X_t=j]\space\mathrm{dP}_e &=\sum_{m=0}^\infty\space\intop_{X_u=k\atop X_s=m} \mathrm{P}_{X_s}[X_t=j]\space\mathrm{dP}_e\\ &=\sum_{m=0}^\infty\space\intop_{X_u=k\atop X_s=m} \mathrm{P}_m[X_t=j]\space\mathrm{dP}_e\\ &=\sum_{m=0}^\infty\space\mathrm{P}_e[X_u=k, X_s=m]\mathrm{P}_m[X_t=j]\end{align}$

If we can show that for all $m\in\mathbb{N}_0$, $\space\mathrm{P}_e[X_u=k, X_s=m]\mathrm{P}_m[X_t=j]=\mathrm{P}_e[X_u=k, X_s=m, X_{s+t}=j]$, we're done, since summing this identity over $m$ yields precisely $(*)$ for $G=\{X_u=k\}$. Therefore, let $m\in\mathbb{N}_0$.

Now, using the facts that, under $\mathrm{P}_e=\mathrm{P}_e^Y\otimes\mathrm{P}_0^T$, $Y$ and $T$ are independent and that $T$, being a Poisson process, has independent, stationary increments, we get

$\begin{align}&\mathrm{P}_e(Y_{T_u}=k, Y_{T_s}=m)\mathrm{P}_m(Y_{T_t}=j)\\ &=\sum_{a,b,c=0}^\infty \mathrm{P}_e(Y_a=k, Y_b=m, T_u=a, T_s=b)\mathrm{P}_m(Y_c=j, T_t=c)\\ &=\sum_{a,b,c=0}^\infty \mathrm{P}^Y_e(Y_a=k, Y_b=m)\mathrm{P}_m^Y(Y_c=j)\mathrm{P}_0^T(T_u=a)\mathrm{P}_0^T(T_{s-u}=b-a)\mathrm{P}_0^T(T_t=c) \end{align}$

and

$\begin{align}&\mathrm{P}_e(Y_{T_u}=k, Y_{T_s}=m, Y_{T_{s+t}}=j)\\ &=\sum_{a,b,c=0}^\infty\mathrm{P}_e(Y_a=k, Y_b=m, Y_{b+c}=j, T_u=a, T_s=b, T_{s+t}=b+c)\\ &=\sum_{a,b,c=0}^\infty\mathrm{P}_e^Y(Y_a=k, Y_b=m, Y_{b+c}=j)\mathrm{P}_0^T(T_u=a)\mathrm{P}_0^T(T_{s-u}=b-a)\mathrm{P}_0^T(T_t=c) \end{align}$

Therefore, it suffices to show that for all $a,b,c\in\mathbb{N}_0$, $\mathrm{P}_e^Y(Y_a=k, Y_b=m, Y_{b+c}=j)=\mathrm{P}^Y_e(Y_a=k, Y_b=m)\mathrm{P}_m^Y(Y_c=j)$ Note that since $u\leq s$, $T_u\leq T_s$, so we can safely assume that $a\leq b$.

But since $\left(Y, (\mathrm{P}_e^Y)_{e\in\mathbb{N}_0}\right)$ is a Markov chain, we get from the Markov property (2nd equation below) $\begin{align}\mathrm{P}_e^Y(Y_a=k, Y_b=m, Y_{b+c}=j)&=\mathrm{E}_e^Y\left(\mathrm{P}_e^Y(Y_{b+c}=j|\mathcal{F}_b)\space; Y_a=k, Y_b=m\right)\\ &=\mathrm{E}_e^Y\left(\mathrm{P}_{Y_b}^Y(Y_c=j)\space; Y_a=k, Y_b=m\right)\\ &=\mathrm{E}_e^Y\left(\mathrm{P}_m^Y(Y_c=j)\space; Y_a=k, Y_b=m\right)\\ &=\mathrm{P}_m^Y(Y_c=j)\mathrm{P}_e^Y\left(Y_a=k, Y_b=m\right)\end{align}$

which concludes the proof. $\square$
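
(Finally, an optional Monte Carlo sanity check, again with the hypothetical toy chain from above, of the key identity $\mathrm{P}_e[X_u=k, X_s=m]\,\mathrm{P}_m[X_t=j]=\mathrm{P}_e[X_u=k, X_s=m, X_{s+t}=j]$; the helper `sample_path` is my own and simply simulates $X_t=Y_{T_t}$ along a grid of times using the independent, stationary increments of $T$.)

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, beta = 1.5, 0.5
q = np.array([[-alpha,  alpha],
              [  beta, -beta]])
lam = np.abs(np.diag(q)).max()
p = q / lam + np.eye(2)

def sample_path(e, times):
    """Sample (X_{t_1}, ..., X_{t_r}) for increasing times, via X_t = Y_{T_t}."""
    out, state, last_t = [], e, 0.0
    for t in times:
        n = rng.poisson(lam * (t - last_t))    # increment T_t - T_{last_t}
        for _ in range(n):
            state = rng.choice(2, p=p[state])
        out.append(state)
        last_t = t
    return out

e, k, m, j = 0, 1, 0, 1
u, s, t = 0.3, 0.8, 0.5
N = 50_000
lhs = np.mean([sample_path(e, [u, s]) == [k, m] for _ in range(N)]) \
      * np.mean([sample_path(m, [t]) == [j] for _ in range(N)])
rhs = np.mean([sample_path(e, [u, s, s + t]) == [k, m, j] for _ in range(N)])
print(lhs, rhs)                                # should be approximately equal
```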