
The following is a well-known result about Markov chains:

Given a Markov chain $(X_t)_{t \ge 0}$, if $T_{ii}$ denotes the time of the first return to state $i$ when starting from state $i$, then we have

$$ E[T_{ii}] = \frac 1 {\pi_i}, $$

where $\pi$ is the stationary distribution.

I came up with this proof:

We start the chain from a state chosen according to the stationary distribution $\pi$. Then at every step the chain has the same distribution $\pi$. Let $T_{ii}'$ be the time between the first two visits of the chain to state $i$. Then $T_{ii}'$ is distributed as $T_{ii}$. Since at each step the chain is at $i$ with probability $\pi_i$, $T_{ii}'$ is geometric with parameter $\pi_i$. Therefore $$ E[T_{ii}] = E[T_{ii}'] = \frac 1 {\pi_i}. $$

I have a feeling there must be a mistake. A counterexample: if the chain is a simple random walk, then after hitting $i$ the probability of hitting $i$ at the very next step is $0$, not $\pi_i$. But which step of my argument above is wrong?

1 Answer


One cause of confusion might be the notation $T_{ii}$. Here is a correct formulation. Let $T_i=\inf\{n>0\,;\,X_n=i\}$ denote the first hitting time of $i$. Let $\mathrm P_i$ and $\mathrm E_i$ denote the probability and the expectation conditionally on $[X_0=i]$. Then the result you are interested in is that $$ \mathrm E_i(T_i)=1/\pi(i). $$ The second result you try to rely on is probably as follows. Let $S_i=\inf\{n\gt T_i\,;\,X_n=i\}$ denote the second hitting time of $i$. Then I guess your $T_{ii}'$ is $S_i-T_i$. Furthermore, for every initial distribution $\mu$, and in particular for $\mu=\pi$ the stationary distribution of the ergodic Markov chain, the distribution of $S_i-T_i$ under $\mathrm P_\mu$ is the distribution of $T_i$ under $\mathrm P_i$, hence the identity $\mathrm E_\mu(S_i-T_i)=\mathrm E_i(T_i)$ is correct. Also correct is the assertion that $\mathrm P_\pi(X_n=i)=\pi(i)$ for every $n\geqslant1$.

But I fail to see why all this should imply that $\mathrm P_\pi(S_i-T_i=n)=(1-\pi(i))^{n-1}\pi(i)$ for every $n\geqslant1$ (and in fact this is false).
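For a concrete check of this last point, here is a quick simulation sketch (not part of the original discussion; the three-state transition matrix `P` and the helper `return_time` are an arbitrary illustrative choice). Since $S_i-T_i$ under $\mathrm P_\pi$ has the same law as $T_i$ under $\mathrm P_i$, it suffices to simulate the chain started at $i$: the empirical mean of $T_0$ matches $1/\pi(0)$, yet $\mathrm P_0(T_0=1)=0$, whereas a geometric$(\pi(0))$ law would assign probability $\pi(0)$ to the value $1$.

```python
import numpy as np

# Purely illustrative three-state chain (hypothetical choice): a one-step
# return to state 0 is impossible (P[0,0] = 0), so T_0 cannot be geometric.
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])

# Stationary distribution pi: left eigenvector of P for the eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

rng = np.random.default_rng(0)

def return_time(i):
    """Sample T_i = inf{n > 0 : X_n = i} for the chain started at X_0 = i."""
    x, n = i, 0
    while True:
        x = rng.choice(len(P), p=P[x])
        n += 1
        if x == i:
            return n

samples = np.array([return_time(0) for _ in range(50_000)])
print("empirical E_0(T_0):", samples.mean())   # should be close to 1/pi(0)
print("1 / pi(0)         :", 1 / pi[0])
print("empirical P_0(T_0 = 1):", (samples == 1).mean(),
      "vs. pi(0) =", pi[0], "for a geometric law")
```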


However, there is a well known way to use your idea to compute $\mathrm E_i(T_i)$. To see this, introduce $T_i^{(0)}=0$ and, for every $k\geqslant0$, $T_i^{(k+1)}=\inf\{n\gt T_i^{(k)}\,;\,X_n=i\}$. Hence, $T_i^{(1)}=T_i$ and $T_i^{(2)}=S_i$. Furthermore, for every initial distribution $\mu$, the sequence $(T_i^{(k+1)}-T_i^{(k)})_{k\geqslant1}$ (but not $k=0$) is i.i.d. and distributed like $T_i$ under $\mathrm P_i$. In particular, when $k\to\infty$,
$$ T_i^{(k)}/k\to \mathrm E_i(T_i), $$ almost surely and in $L^1$, by the strong law of large numbers.
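In more detail: for every $k\geqslant2$,
$$ \frac{T_i^{(k)}}{k}=\frac{T_i^{(1)}}{k}+\frac{k-1}{k}\cdot\frac1{k-1}\sum_{j=1}^{k-1}\bigl(T_i^{(j+1)}-T_i^{(j)}\bigr), $$
where the first term converges to $0$ almost surely (because $T_i^{(1)}$ is almost surely finite) and the empirical mean of the $k-1$ i.i.d. increments converges to $\mathrm E_i(T_i)$ by the strong law of large numbers, under $\mathrm P_\mu$ for every initial distribution $\mu$.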

On the other hand, consider $N_n(i)=\sum\limits_{t=1}^n[X_t=i]$, the number of visits to state $i$ up to time $n$. Since $X_n$ converges in distribution to $\pi$ when $n\to\infty$, $$ \mathrm E_\mu(N_n(i))=n\pi(i)+o(n). $$ Finally, the pair of asymptotics $\mathrm E_\mu(T_i^{(k)})=k\mathrm E_i(T_i)+o(k)$, $\mathrm E_\mu(N_n(i))=n\pi(i)+o(n)$, and the identity $[N_n(i)\geqslant k]=[T_i^{(k)}\leqslant n]$, all together imply that $$ \mathrm E_i(T_i)=1/\pi(i), $$ since, roughly speaking, $T_i^{(k)}\approx n$ if and only if $N_n(i)\approx k$, when $n$ and $k$ are large.
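To spell out this last step (still only a sketch): by definition of $N_n(i)$ and $T_i^{(k)}$, one has $T_i^{(N_n(i))}\leqslant n\lt T_i^{(N_n(i)+1)}$ as soon as $N_n(i)\geqslant1$, hence
$$ \frac{T_i^{(N_n(i))}}{N_n(i)}\leqslant\frac{n}{N_n(i)}\lt\frac{T_i^{(N_n(i)+1)}}{N_n(i)+1}\cdot\frac{N_n(i)+1}{N_n(i)}. $$
Since $N_n(i)\to\infty$ almost surely (the chain is recurrent), both extreme terms converge to $\mathrm E_i(T_i)$ by the previous display, hence $N_n(i)/n\to1/\mathrm E_i(T_i)$ almost surely. Since $0\leqslant N_n(i)/n\leqslant1$, dominated convergence yields $\mathrm E_\mu(N_n(i))/n\to1/\mathrm E_i(T_i)$, and comparing with $\mathrm E_\mu(N_n(i))=n\pi(i)+o(n)$ gives $\pi(i)=1/\mathrm E_i(T_i)$, that is, $\mathrm E_i(T_i)=1/\pi(i)$.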

  • @Byron: Thanks for the typo. (2012-08-02)
  • @did: No problem. (2012-08-02)
  • @ablmf: Listen, since I included in my post every indication needed to expand it into a full-fledged, excruciatingly detailed proof, here is a suggestion: elaborate it yourself. Either you master, even roughly, the stuff needed to understand your own post, and then this should be an easy task, or you don't, and then I fail to see the point. (Of course, if you have a **specific** question about a **precise** step, that is another story, but at the moment you seem to simply refuse to turn your brain on.) (2012-08-02)
  • @did: Sorry for my laziness! Thanks for your suggestion. (2012-08-02)
  • Start with the event $[(\tau-\varepsilon)k\leqslant T_i^{(k)}\leqslant(\tau+\varepsilon)k]$ (with $\tau=\mathrm E_i(T_i)$) and translate it in terms of $(N_n(i))_n$. (2012-08-03)
  • ablmf: Please stop erasing your comments. (2012-08-12)