
Given a Markov chain with a known transition matrix, rather than calculating the steady-state probabilities analytically, I would like to estimate them by simulation.

Firstly, as I understand it there is a transient/warm-up period we have to account for, so we should run the Markov chain for a sufficiently long time in order to reach the steady state. Any ideas on how to estimate that warm-up length?
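Since the transition matrix is known, one way to gauge the warm-up length is to watch how quickly the rows of $P^k$ agree with each other: once every row is (nearly) identical, the chain has essentially forgotten its starting state. A minimal sketch, using a toy 3-state matrix I made up for illustration:

```python
import numpy as np

# Toy 3-state transition matrix (an assumption for illustration)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Track how far the rows of P^k are from one another; when the spread
# is tiny, the starting state no longer matters (warm-up is over).
Pk = P.copy()
for k in range(1, 101):
    spread = np.max(np.abs(Pk - Pk.mean(axis=0)))  # max row disagreement
    if spread < 1e-6:
        print(f"rows of P^k agree to 1e-6 after k = {k} steps")
        break
    Pk = Pk @ P
```

For chains too large to exponentiate, the usual alternative is an empirical check: run a few chains from different starting states and see when their running state frequencies agree.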

Secondly, even if we discard the initial $k$ observations, how do we actually estimate the probabilities? I can think of the following two approaches:

  1. After the initial transient period, observe which state the chain is in and record it. Restart and rerun the simulation, recording the state again each time. After N runs, take the relative frequency of each state as its estimated probability. Problem: far too inefficient.

  2. After the initial transient period, generate N further state transitions (without restarting) and count the occurrences of each state. Take the relative frequencies. Problem: the N samples are not independent.
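To make the second approach concrete, here is a sketch of a single long run with a burn-in, compared against the exact stationary distribution (the left eigenvector of $P$ for eigenvalue 1). The 3-state matrix, the burn-in of 1,000 steps, and the run length of 100,000 are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state transition matrix (an assumption for illustration)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

def simulate_counts(P, n_steps, burn_in, start=0):
    """Approach 2: one long run; discard burn_in steps, count the rest."""
    n = P.shape[0]
    counts = np.zeros(n)
    state = start
    for t in range(burn_in + n_steps):
        state = rng.choice(n, p=P[state])  # one transition
        if t >= burn_in:
            counts[state] += 1
    return counts / n_steps

pi_hat = simulate_counts(P, n_steps=100_000, burn_in=1_000)

# Exact stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("estimate:", pi_hat)
print("exact:   ", pi)
```

The lack of independence does not bias this estimator: by the ergodic theorem the time averages still converge to the stationary probabilities. What the correlation does affect is the variance, so naive i.i.d. confidence intervals are too narrow.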

I was wondering whether you could point me in the right direction and spot any flaws in my logic. Thank you in advance for your help.
