
Suppose $P=\begin{bmatrix} 0 & \frac{1}{3} & 0 & \frac{2}{3} \\ 0.3 & 0 & 0.7 & 0 \\ 0 & \frac{2}{3} & 0 & \frac{1}{3} \\ 0.8 & 0 & 0.2 & 0 \end{bmatrix}$ is the transition probability matrix of a Markov chain with state space $\{1, 2, 3, 4\}$. How do I find the limiting probabilities of this Markov chain? Several different approaches would be welcome. I know a little about limiting probabilities but am not sure how to apply them here; clear steps showing how it works, or an explanation, would be appreciated, and I would like to see how others interpret the concept of probability.

  • Note that each state has period two: if you start in state $1$ or $3$, then after one step you will be at $2$ or $4$, and after another step back at $1$ or $3$. (2012-11-20)
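The period-two structure in the comment above can be checked numerically (a sketch using numpy, not part of the original comment):

```python
import numpy as np

# Transition matrix from the question.
P = np.array([
    [0.0, 1/3, 0.0, 2/3],
    [0.3, 0.0, 0.7, 0.0],
    [0.0, 2/3, 0.0, 1/3],
    [0.8, 0.0, 0.2, 0.0],
])

# After any even number of steps from state 1, the chain is back in
# {1, 3}: the two-step probabilities of being in states 2 and 4 are
# exactly zero, so P^n cannot converge as n grows.
P2 = np.linalg.matrix_power(P, 2)
print(P2[0])  # entries for states 2 and 4 are 0
```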

1 Answer


To find an invariant distribution for a Markov chain, which will give you information about its long-term behaviour, you can use two methods. For an irreducible, recurrent Markov chain, solving the "left-hand equations" $\pi_i = \displaystyle \sum_{j\in I} P_{ji} \pi_j$

(where $I$ is the state space) will give you an invariant measure, and the restriction

$\displaystyle \sum_{j\in I} \pi_j = 1$

will give you an invariant distribution. These equations give the long-term proportion of time spent in each state $i$ as $\pi_i$. The distribution is invariant because it says "the probability of being in state $i$ is the same as the sum, over every state $j$, of the probability of being in state $j$ (which is $\pi_j$) and then moving into state $i$ (which is $P_{ji}$)".
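As a concrete illustration for the matrix in the question, the left-hand equations plus the normalization can be solved as a linear system (a numpy sketch; by hand the exact answer works out to $\pi = (0.28, 0.24, 0.22, 0.26)$):

```python
import numpy as np

# Transition matrix from the question.
P = np.array([
    [0.0, 1/3, 0.0, 2/3],
    [0.3, 0.0, 0.7, 0.0],
    [0.0, 2/3, 0.0, 1/3],
    [0.8, 0.0, 0.2, 0.0],
])

n = P.shape[0]
# The left-hand equations say pi = pi P, i.e. (P^T - I) pi = 0.
# That system is rank-deficient (its rows sum to zero), so replace one
# equation with the normalization sum(pi) = 1 to pin down a unique solution.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(np.round(pi, 4))  # pi = (0.28, 0.24, 0.22, 0.26)
```

Because each state here has period two, $P^n$ itself does not converge, but $\pi$ still gives the long-run proportion of time spent in each state.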

When you're more comfortable with the idea of invariant distributions, you can sometimes save time by looking at the so-called "detailed balance equations" $\pi_i P_{ij} = \pi_j P_{ji}$, but these only hold for reversible chains (and this particular chain turns out not to be reversible). It's best to start with the basics, and look up more on detailed balance a bit later if you ask me.
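Whether detailed balance holds for a given chain can be tested directly: $\pi_i P_{ij} = \pi_j P_{ji}$ for all pairs of states is the same as saying the "flow" matrix with entries $\pi_i P_{ij}$ is symmetric. A quick numerical check for the chain in the question (a sketch, reusing the stationary distribution from the left-hand equations):

```python
import numpy as np

# Transition matrix from the question.
P = np.array([
    [0.0, 1/3, 0.0, 2/3],
    [0.3, 0.0, 0.7, 0.0],
    [0.0, 2/3, 0.0, 1/3],
    [0.8, 0.0, 0.2, 0.0],
])

# Stationary distribution: solve (P^T - I) pi = 0 with sum(pi) = 1.
n = P.shape[0]
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

# Detailed balance requires pi_i * P[i, j] == pi_j * P[j, i] for all
# i, j, i.e. the flow matrix F must be symmetric.
F = pi[:, None] * P        # F[i, j] = pi_i * P[i, j]
reversible = np.allclose(F, F.T)
print(reversible)  # False: e.g. pi_1 P_12 = 0.28/3 but pi_2 P_21 = 0.072
```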