
Let $(X_t)_{t\ge0}$ be a Markov chain on a finite state space $\Omega$, with transition matrix $P$. Let $T$ be the stopping time $T=\min \{t\ge 0 : X_t \in A \}$, the hitting time of a set $A \subset \Omega$. The function $f(x)=E_x(T)$ then satisfies

$$f(x) = \begin{cases} 0, & x \in A, \\ 1+\sum_{y \in \Omega }P(x,y)f(y), & x \notin A. \end{cases}$$

How can I show that $f$ is uniquely determined by the previous equation?

I'm not clear on what I have to prove here.

I guess the proof goes as follows: suppose there exists $g \ne f$ which also satisfies these equations.

But I don't see how to make progress from there.
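To make the system concrete, here is a minimal numerical sketch of solving it directly on a made-up 3-state chain (the chain and the target set are illustrative assumptions only):

```python
# Solve f = 0 on A, f(x) = 1 + sum_y P(x,y) f(y) on A^c, by writing the
# second line as the linear system (I - P') f' = 1, where P' is the
# submatrix of P with rows and columns indexed by A^c.
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.2, 0.5],
              [0.0, 0.0, 1.0]])   # hypothetical chain; state 2 is absorbing
A = {2}                           # target set (illustrative)

rest = [x for x in range(len(P)) if x not in A]
P_sub = P[np.ix_(rest, rest)]
f_sub = np.linalg.solve(np.eye(len(rest)) - P_sub, np.ones(len(rest)))

f = np.zeros(len(P))
f[rest] = f_sub                   # f = 0 on A by construction
print(f)                          # expected hitting times E_x[T]
```

As the answers below discuss, uniqueness of $f$ comes down to whether $I - P'$ is invertible.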

  • I've edited the question. I think it's better; if it's not, be merciless. Thanks. (2011-09-28)

2 Answers


OK, so let us put the equations on one line: $$ f(x) = 1_{A^c}(x)+1_{A^c}(x)\sum\limits_{y\in \Omega}P(x,y)f(y).\tag{1} $$ Here $1_{A^c}(x)$ is the indicator function of $A^c = \Omega\setminus A$, that is, $1_{A^c}(x) = 1$ for $x\in A^c$ and $1_{A^c}(x) = 0$ for $x\in A$. Now consider the homogeneous version of the very same equation (which I happened to investigate in my first post): $$ g(x) = 1_{A^c}(x)\sum\limits_{y\in \Omega}P(x,y)g(y). \tag{2} $$

Note again that the equations are linear, so their solutions form a linear space. Of course, $g^* = 0$ is a solution of (2). In the simplest case when $A^c$ is finite, equation (2) has a unique bounded solution iff $A^c$ has no absorbing subsets, since the natural solution $g(x)$ is the probability that the chain started at $x$ never leaves $A^c$.

Of course, this holds iff $\det(I - P') \ne 0$, where $P'$ is the submatrix of $P$ built from the rows and columns indexed by $A^c$ (on $A^c$, equation (2) reads $g = P'g$). This led me to the idea that non-uniqueness for (1) in this case depends only on the absorbing subsets of $A^c$. About existence of a solution of (1) we 'may not care', since there is at least one function satisfying the equation, namely $f(x)=\mathsf E_x[T_A]$. On the other hand, this function takes values in the extended real line, in which case uniqueness of the solution is unclear to me.
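As a quick illustration of this determinant test (a sketch only; it uses the identity-matrix example discussed in the next paragraph):

```python
# Test whether the homogeneous equation (2) has only the zero solution,
# i.e. whether det(I - P') != 0 for P' = P restricted to A^c.
import numpy as np

P = np.eye(2)                     # the 2x2 identity chain from the example
A = {0}                           # A = {1} in the 1-based labels of the text
rest = [x for x in range(len(P)) if x not in A]
P_sub = P[np.ix_(rest, rest)]
print(np.linalg.det(np.eye(len(rest)) - P_sub))  # 0.0: non-zero solutions exist
```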

Consider again the previous example. Let $P$ be the identity matrix of dimension two, so $P(1,1) = 1$, $P(1,2) = 0$ and $P(2,1) = 0$, $P(2,2) = 1$, and let us solve the system for $A = \{1\}$. Writing out the equations for this case gives $f(1) = 0$ and $f(2) = 1+f(2)$; the latter has no finite solution, and on the extended real line its solutions are $-\infty$ and $+\infty$, though I am not so experienced in solving such equations. Well, you can say that the solution is unique if you are looking only for non-negative solutions.

The point is the following: the theory of uniqueness for such equations is based on the fact that the solutions form a vector space. Hence, for any solution $f^*$ of (1) and any non-zero solution $g^*$ of (2), you can claim that $f^*+\alpha g^*$ is a solution of (1) which is different from $f^*$. On the other hand, if $f^*$ takes infinite values, then $f^*+\alpha g^*$ can coincide (at least symbolically) with $f^*$.

Let us now consider an arbitrary $A^c$. What does it mean for (2) to have a non-zero solution? The probability of never leaving $A^c$ is the maximal solution of (2) which lies in $[0,1]$; let us denote it by $g^*$. So, at each point where $g^*>0$ it holds that $\mathsf E_x[T_A] = \infty$ (is it clear?). At such points $f^* = \infty$, and hence $f^*(x)+\alpha g^*(x) =\infty = f^*(x)$, though this statement is quite informal for me.
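Here is a sketch of how $g^*$ can be computed by monotone iteration (the 3-state chain is a made-up example; the iterates satisfy $g_n(x) = \mathsf P_x(T_A > n)$ and decrease to the probability of never hitting $A$):

```python
# Maximal [0,1]-solution of (2) as the decreasing limit of
# g_0 = 1_{A^c}, g_{n+1} = 1_{A^c} * (P g_n).
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0],    # state 1 is absorbing and never hits A
              [0.5, 0.0, 0.5]])
A = {0}
mask = np.array([x not in A for x in range(len(P))], dtype=float)

g = mask.copy()                   # g_0 = 1 on A^c, 0 on A
for _ in range(10_000):           # crude fixed iteration count
    g = mask * (P @ g)
print(g)                          # approx. [0, 1, 0]: g* > 0 exactly where E_x[T_A] = infinity
```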

I cannot give you a completely formal answer to your question, since I have always looked only for bounded solutions of such equations. The last paragraph suggests that the solution is indeed unique, since non-uniqueness for (2) appears only at points where $f^*$ is infinite, and hence does not influence $f^*$. On the other hand, I did not consider the case when (2) may have an unbounded solution. E.g., with the convention $0\cdot\infty = 0$, the function $g = 1_{A^c}\cdot\infty$ satisfies (2) whenever each $x\in A^c$ has a transition into $A^c$, just as $g=0$ does.

Anyway, I think you need a person more experienced in solving linear equations with unbounded solutions. My other piece of advice is the following: if you want to compute the solution, you will certainly need to cut off the points at which $\mathsf E_x[T_A] = \infty$; call this set $B$. Then you will have something like $$ f(x) = \begin{cases}0,&\text{if }x\in A, \\ \infty,&\text{if }x\in B, \\ 1+\sum\limits_{y\in\Omega}P(x,y)f(y), &\text{if }x\in (A\cup B)^c. \end{cases} $$
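Here is a sketch of this cut-off procedure (the helper functions and their names are mine; the characterization of $B$ as "states that can reach, while avoiding $A$, some state with no path into $A$" is the standard one for finite chains):

```python
import numpy as np

def can_reach(P, x, target, avoid=frozenset()):
    # Depth-first search along positive-probability transitions,
    # never stepping into `avoid` (the chain is stopped there).
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        if u in target:
            return True
        for v in range(len(P)):
            if P[u, v] > 0 and v not in seen and v not in avoid:
                seen.add(v)
                stack.append(v)
    return False

def expected_hitting_times(P, A):
    n = len(P)
    U = {x for x in range(n) if not can_reach(P, x, A)}  # no path into A
    # B = states with a positive chance of never hitting A: those that can
    # reach U while avoiding A.  There E_x[T_A] = infinity.
    B = {x for x in range(n) if x not in A and can_reach(P, x, U, avoid=A)}
    good = [x for x in range(n) if x not in A | B]
    f = np.full(n, np.inf)
    f[list(A)] = 0.0
    if good:  # from `good` the chain cannot enter B, so the system closes up
        Q = P[np.ix_(good, good)]
        f[good] = np.linalg.solve(np.eye(len(good)) - Q, np.ones(len(good)))
    return f

# The identity chain from the example above: A = {0}, state 1 never hits it.
print(expected_hitting_times(np.eye(2), {0}))  # [0. inf]
```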

I haven't dealt with the problem of the average hitting time before, so maybe there are smarter ways to solve it. Regards.

  • @NicolasEssis-Breton: that would be interesting, so please post it here. (2011-09-30)

For the general case, the solution to

$$f(x) = \begin{cases} 0, & x \in A, \\ 1+\sum_{y \in \Omega }P(x,y)f(y), & x \notin A \end{cases}\tag{1}$$

is not unique. The probabilistically meaningful solution is then the least nonnegative solution of the system, as explained in the notes pointed out by Gortaur: http://www.statslab.cam.ac.uk/~james/Markov/s13.pdf.
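Here is a sketch of how the least nonnegative solution arises as the increasing limit of value iteration from zero, $f_0 = 0$ and $f_{n+1} = 1_{A^c}\,(1 + P f_n)$, a standard construction (cf. the linked notes); the two-state identity chain from the other answer shows the divergence at states that never reach $A$:

```python
# Least nonnegative solution of (1) by monotone value iteration from 0.
import numpy as np

P = np.eye(2)                     # identity chain: state 1 never reaches A
A = {0}
mask = np.array([x not in A for x in range(len(P))], dtype=float)

f = np.zeros(len(P))
for _ in range(1000):
    f = mask * (1.0 + P @ f)
print(f)                          # [0., 1000.]: f(1) grows without bound
```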

If $P$ is irreducible, we can show the solution is unique as follows. Suppose $g$ is another solution of the system; then $g \ne f$ and

$$g(x) = \begin{cases} 0, & x \in A, \\ 1+\sum_{y \in \Omega }P(x,y)g(y), & x \notin A \end{cases}\tag{2}$$

Subtracting $(2)$ from $(1)$, we have

$$f(x)-g(x)=0, \quad x \in A, \tag{3}$$ $$f(x)-g(x)= \sum_{y \in \Omega }P(x,y)\bigl(f(y)-g(y)\bigr), \quad x \notin A. \tag{4}$$

By $(4)$, $f-g$ is harmonic for $P$ on $A^c=\Omega\setminus A$. If $f-g$ is constant, then by $(3)$ it is identically zero, so $f=g$, which is a contradiction. Suppose then that $f-g$ is not constant. Since $f-g$ is harmonic on $A^c$, it must attain its maximum and minimum values in $A$, by the maximum principle $(5)$ below. Hence, by $(3)$, $f-g$ is identically zero, which is again a contradiction.

$(5)$ Maximum principle: Suppose $P$ is the transition matrix of an irreducible Markov chain with finite state space $\Omega$. If $h$ is harmonic on $A\subset \Omega$ and not constant, then $h$ attains its maximum in $A^c$. A similar claim holds for the minimum of $h$.
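For completeness, here is a sketch of the standard argument behind $(5)$ (my own wording, under the hypotheses above, with $A \ne \Omega$). Let $M = \max_{x\in\Omega} h(x)$ and suppose $h(x_0) = M$ for some $x_0 \in A$. Harmonicity at $x_0$ gives $M = h(x_0) = \sum_{y}P(x_0,y)h(y) \le M$, which forces $h(y) = M$ for every $y$ with $P(x_0,y) > 0$. By irreducibility there is a path of positive-probability transitions from $x_0$ to $A^c$; propagating the equality along this path (each intermediate state lies in $A$, where $h$ is harmonic) shows that $h = M$ at the first state of the path lying in $A^c$. So the maximum is attained in $A^c$ in any case, and the argument for the minimum is symmetric.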

  • @Gortaur: no, I assume that $f$ and $g$ are finite on $A^c$. I think this is the case if all states of $A^c$ communicate, that is, if $A^c$ has no absorbing state. But I'm not sure. (2011-10-03)