
Let $(X_n, n \ge 0)$ be a Markov chain with state space $V$, and let $\lambda$ and $\tau$ be two probability distributions on $V$. Can we say that for any $\lambda$ and $\tau$ there always exists a stopping rule $T$ such that, for all $v \in V$,

$ P_\lambda(X_T = v) = \tau(v). $


I guess we can: if we pick $z$ from $V$ according to $\tau$ and let $T$ be the hitting time of $z$, then $T$ is a stopping rule, and $X_T = z$ has distribution $\tau$. Since this construction works for all distributions, a stopping rule always exists.

Is this the correct way to prove the existence of a stopping rule carrying one distribution to another?
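For what it's worth, here is a quick simulation sketch of the proposed procedure on a hypothetical irreducible 3-state chain (the matrix $P$, $\lambda$, and $\tau$ below are toy choices of mine, not anything from the question):

```python
import random

def run_chain(P, start, target, rng, max_steps=10_000):
    """Run the chain from `start` until it first hits `target`; return X_T."""
    state = start
    for _ in range(max_steps):
        if state == target:
            return state
        state = rng.choices(range(len(P)), weights=P[state])[0]
    raise RuntimeError("hitting time exceeded max_steps")

# Hypothetical irreducible 3-state chain: every hitting time is a.s. finite.
P = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.5, 0.3, 0.2]]
lam = [1.0, 0.0, 0.0]   # initial distribution: start at state 0
tau = [0.2, 0.5, 0.3]   # target distribution

rng = random.Random(0)
N = 20_000
counts = [0, 0, 0]
for _ in range(N):
    z = rng.choices(range(3), weights=tau)[0]      # pick z ~ tau beforehand
    start = rng.choices(range(3), weights=lam)[0]  # pick X_0 ~ lambda
    counts[run_chain(P, start, z, rng)] += 1

freq = [c / N for c in counts]
print(freq)  # empirical distribution of X_T, close to tau
```

By construction $X_T = z$ whenever the hitting time is finite, so the empirical frequencies match $\tau$; the simulation says nothing, of course, about whether this $T$ actually qualifies as a stopping rule.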

  • You need some assumptions on the Markov chain. Otherwise there is a trivial counterexample: the chain $X_n=X_{n-1}$ (the system never changes its state). Or, consider the state space $V=\{0,1\}$ and $X_n=1-X_{n-1}$. – 2012-06-20

1 Answer


The first hitting time of a given vertex $z$ is a stopping rule, but the first hitting time of a randomly chosen vertex $z$ is not, at least in the usual sense: whether $\{T=n\}$ occurs depends on the external randomization used to pick $z$, not only on the trajectory $(X_0,\ldots,X_n)$. In other words, picking a vertex $z$ beforehand at random according to the distribution $\tau$ and stopping the process at the first hitting time of $z$ (the strategy you suggest in the post) is not a stopping rule.

Another point, to consider once you have set your definitions straight, is that stopping rules may not exist for transient processes. Consider the Green function $g_\lambda(v)=\sum\limits_{n\geqslant0}\mathrm P_\lambda(X_n=v)$. Since $\mathrm P_\lambda(X_T=v)=\sum\limits_{n\geqslant0}\mathrm P_\lambda(X_n=v,\,T=n)\leqslant g_\lambda(v)$, a stopping rule can only yield target distributions $\tau$ such that $\tau(v)\leqslant g_\lambda(v)$ for every $v$. If $g_\lambda(v)\lt1$ for some $v$, this is a genuine restriction, and this may happen when the process is transient. Hence I guess you are only interested in recurrent (and even, possibly, positive recurrent) irreducible Markov chains, and then this restriction vanishes, since $g_\lambda(v)$ is infinite for every $\lambda$ and $v$.
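To make the restriction concrete, here is a sketch computing $g_\lambda$ on a small transient chain (the three-state chain below is a toy example of mine, chosen so that the chain is absorbed in finitely many steps):

```python
# Toy transient chain: 0 -> 1 w.p. 0.3, 0 -> 2 w.p. 0.7, 1 -> 2, 2 absorbing.
P = [[0.0, 0.3, 0.7],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 1.0]]
lam = [1.0, 0.0, 0.0]   # start at state 0

def step(dist, P):
    """One step of the distribution: dist @ P, in pure Python."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Green function g_lam(v) = sum over n of P_lam(X_n = v), tracked on the
# transient states 0 and 1 only; the chain is absorbed in state 2 by time 2,
# so the sum is finite (ten terms are more than enough here).
g = [0.0, 0.0]
dist = lam[:]
for _ in range(10):
    g[0] += dist[0]
    g[1] += dist[1]
    dist = step(dist, P)

print(g)  # g_lam(0) = 1.0, g_lam(1) = 0.3
```

Here $g_\lambda(1)=0.3\lt1$, so any target $\tau$ with $\tau(1)\gt0.3$, e.g. $\tau=\delta_1$, is unreachable by any stopping rule from this $\lambda$.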