
To find the stationary distribution of a Markov Chain, I believe I must solve for $\vec{s} = \langle s_0, s_1 \rangle$ in $\vec{s} = \vec{s}Q$, where $Q$ is the transition matrix.

$Q$, in my case, is

$ \left( \begin{array}{cc} p & 1-p \\ 1-q & q \end{array} \right) $

where $Q_{ij}$ is the probability of moving from state $i$ to state $j$ (row $i$, column $j$). When I solve for $s_0$ and $s_1$, however, I get

$ s_0 = s_0 p + s_1 (1-q) \\ s_1 = s_0 (1 - p) + s_1 q $

Subsequently,

$ s_0 (1 - p) = s_1 (1 - q) \\ s_1 (1 - q) = s_0 (1 - p) $

These two equations look identical. Does that mean there are an infinite number of stationary distributions for this Markov chain?

Thanks for helping a Markov Chain newb :)

  • Well, think what it would mean if probabilities added up to $3$, or to $-7$. (2012-12-08)

1 Answer


As in the comments, the condition you are missing is that the probabilities must sum to 1 ($s_0+s_1=1$).
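Concretely, combining either of your (identical) equations with that normalisation pins down the distribution uniquely (provided $p$ and $q$ are not both $1$):

$ s_0(1-p) = s_1(1-q), \quad s_0 + s_1 = 1 \;\implies\; s_0 = \frac{1-q}{2-p-q}, \quad s_1 = \frac{1-p}{2-p-q}. $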

However, this shouldn't surprise you. When you write the equations out like this, there is always one redundant equation and one condition missing. The missing condition is the normalisation: the probabilities must sum to $1$. Why is there a redundant equation?

It's a property of the matrix $Q$. Each row of $Q$ is a probability vector, so each row of $Q - I$ sums to zero; equivalently, the columns of $Q - I$ add up to the zero vector, so $Q - I$ has rank one less than its dimension (here rank $1$), and one of the equations in $\vec{s} = \vec{s}Q$ is redundant. Another way to see it: the final column of $Q$ carries no new information. If it were left blank you could 'fill in the gaps': $\begin{pmatrix} p & *\\ 1-q & ** \end{pmatrix}.$ You know each row is a probability vector because it describes everything that can happen: from state $0$ the chain either remains in state $0$ or moves to state $1$. As these are the only possibilities, $p + * = 1$, so $* = 1-p$. Similarly $** = q$.
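As a quick numerical sanity check, here is a short Python sketch verifying that the normalised solution of $s_0(1-p) = s_1(1-q)$, $s_0 + s_1 = 1$ really satisfies $\vec{s} = \vec{s}Q$. The values of $p$ and $q$ are illustrative, not from the question.

```python
# Illustrative transition probabilities (assumed values, not from the post).
p, q = 0.7, 0.4

# Transition matrix Q: rows = current state, columns = next state.
Q = [[p, 1 - p],
     [1 - q, q]]

# Solving s0*(1-p) = s1*(1-q) together with s0 + s1 = 1 gives:
s = [(1 - q) / (2 - p - q), (1 - p) / (2 - p - q)]

# Check stationarity: (sQ)_j = sum_i s_i * Q[i][j] should equal s_j.
sQ = [s[0] * Q[0][j] + s[1] * Q[1][j] for j in range(2)]
assert all(abs(s[j] - sQ[j]) < 1e-12 for j in range(2))

# And the normalisation holds by construction.
assert abs(sum(s) - 1.0) < 1e-12
```

Dropping either assertion's condition (e.g. using an unnormalised multiple of $\vec{s}$) still passes the stationarity check, which is exactly why the linear system alone has infinitely many solutions: any scalar multiple of a stationary vector satisfies $\vec{s} = \vec{s}Q$, and normalisation selects the unique probability distribution among them.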