4

From http://masi.cscs.lsa.umich.edu/~crshalizi/notabene/ergodic-theory.html

> irreducible Markov chains with finite state spaces are ergodic processes, since they have a unique invariant distribution over the states. (In the Markov chain case, each of the ergodic components corresponds to an irreducible sub-space.)

By "ergodic processes", I understand it to be the same as "ergodic measure-preserving dynamic system", if I am correct.

As far as I know, an ergodic measure-preserving dynamical system is a mapping $\Phi: T \times S \to S$ that satisfies certain properties, where $S$ is the state space and $T$ is the time set. Often such a system is generated by iterating a single measure-preserving map on $S$.
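
For instance, here is a minimal sketch of the iterated-map case as I picture it (the rotation map, the measure, and all names below are just my own illustration, not taken from the quote):

```python
import numpy as np

# The system is generated by iterating a single map f on S,
# so Phi(n, s) = f^n(s).
# Here S = [0, 1) with Lebesgue measure, and f is an irrational
# rotation, which preserves that measure (this choice of map is
# purely illustrative).
alpha = np.sqrt(2) - 1

def f(s):
    return (s + alpha) % 1.0

def Phi(n, s):
    """Phi: T x S -> S with T = {0, 1, 2, ...}, generated by iterating f."""
    for _ in range(n):
        s = f(s)
    return s

# Ergodicity: the time average of an observable along one orbit matches
# its space average; the indicator of [0, 1/2) should average to 1/2.
s, hits, N = 0.3, 0, 100_000
for _ in range(N):
    hits += s < 0.5
    s = f(s)
print(hits / N)  # close to 0.5
```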

So I wonder: how can a Markov chain be written as a mapping $\Phi: T \times S \to S$, and what is the measure-preserving mapping that generates it?

Thanks!

  • 0
    I found [Robert Gray's book](http://www-ee.stanford.edu/~gray/arp.pdf) really helpful. I thought he does a good job of explaining how to convert back and forth between Markov processes and dynamical systems. The upshot seems to be that you put a probability measure on the output space consisting of sequences generated from the process.2014-04-20

2 Answers

5

The article talks about a (stationary) Markov chain ${(X_n)}_{n \in \mathbb{Z}}$ in discrete time, with each $X_n$ taking its values in a finite set $E$. The canonical space of the Markov chain is the product set $E^{\mathbb{Z}}$, and the trajectory $X=(\ldots, X_{-1}, X_{0}, X_1, \ldots)$ of the Markov chain is a random variable taking its values in $E^{\mathbb{Z}}$. Denoting by $\mu$ its distribution (which can be called the law of the Markov process), $\mu$ is invariant under the classical shift operator $T \colon E^{\mathbb{Z}} \to E^{\mathbb{Z}}$ defined by $(Tx)_n = x_{n+1}$. The Markov chain can then be considered as the dynamical system $(E^{\mathbb{Z}}, \mu, T)$. In fact, so far we have only used the fact that ${(X_n)}_{n \in \mathbb{Z}}$ is a stationary process. In the Markov case we can say in addition that the ergodicity of $T$ is equivalent to the irreducibility of ${(X_n)}_{n \in \mathbb{Z}}$.
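
A minimal numerical sketch of this construction (the two-state transition matrix below is an arbitrary irreducible example, not anything from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state chain on E = {0, 1}.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stationary distribution pi: left eigenvector of P with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Sample a (one-sided) trajectory started from pi, so (X_n) is stationary.
N = 200_000
x = np.empty(N, dtype=int)
x[0] = rng.choice(2, p=pi)
for n in range(1, N):
    x[n] = rng.choice(2, p=P[x[n - 1]])

# The shift T on the canonical space drops the first coordinate of a
# trajectory; stationarity of (X_n) says the law mu of the trajectory
# is T-invariant, e.g. one-dimensional marginals agree under shifting:
print(np.bincount(x) / N, np.bincount(x[1:]) / (N - 1))

# Ergodicity (equivalently, irreducibility here) makes time averages
# match the stationary distribution:
print(pi)  # approximately [0.833, 0.167]
```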

  • 0
    Thanks! (1) Is the classical shift operator $T$ defined as shifting a sequence (a function from $\mathbb{Z}$ to $E$) by one unit? (2) Your reply says that a stationary Markov chain induces a measure-preserving dynamical system on the space of sample paths. However, when I asked my question, I was wondering whether a Markov chain itself is a (measure-preserving or random) dynamical system. Please see my related question here: http://math.stackexchange.com/questions/144062/is-a-markov-process-a-random-dynamic-system.2012-11-18
1

The invariant probability measure is the one that assigns to state $i$ the probability $p_i$ defined as the limit, as time goes to infinity, of the fraction of time steps that the chain has been in state $i$.

Strictly speaking, $p_i$ is the "almost sure" limit: with probability 1 (in the probability measure on infinitely long walks of the Markov chain) the fraction converges to $p_i$, and statistically unusual walks where something else happens have total probability 0.
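
A small simulation makes the almost-sure statement concrete (the 3-state transition matrix is an arbitrary irreducible example): independent long walks, even from different starting states, produce essentially the same occupation fractions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-state irreducible chain.
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

def occupation_fractions(steps, start=0):
    """Fraction of time a single walk spends in each state."""
    counts = np.zeros(3)
    i = start
    for _ in range(steps):
        counts[i] += 1
        i = rng.choice(3, p=P[i])
    return counts / steps

# With probability 1 each fraction converges to the same p_i,
# regardless of the starting state.
for start in range(3):
    print(start, occupation_fractions(100_000, start))
```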

  • 0
    Thanks! But what is the $\Phi$ for a Markov chain?2012-10-20
  • 0
    The definition of a measure-preserving dynamical system is here: http://en.wikipedia.org/wiki/Measure-preserving_dynamical_system#Definition. I wonder what $T$ in that definition (the same as the $\Phi$ I asked about here) is for a Markov chain?2012-10-20
  • 0
    Wikipedia is talking about the ergodic theory of a non-random map $T$ that preserves a probability measure. The ergodic theorem says that this invariant measure can be defined in the same way as for the Markov chain: as the occupation frequency of regions in the state space. Here, you either consider $T$ as stochastic (advance one step in the random walk) or as a fixed, non-random map on the set $PM$ of probability distributions on the Markov chain state space $M$.2012-10-20
  • 0
    Why isn't $T$ (i.e. my $\Phi$) a mapping from the state space of the Markov chain to itself?2012-10-20
  • 0
    It can be that, but then it has to be stochastic (with the state-to-state transition probabilities defined by the chain); otherwise you are taking a deterministic walk on the state space. Markov chains model a type of random walk.2012-10-20
  • 0
    Thanks! (1) In the quote, does "ergodic processes", which Markov chains may be, mean the same as the "ergodic measure-preserving dynamical system" defined in the Wikipedia link? (2) "The ergodic theorem is that this invariant measure can be defined in the same way as for the Markov chain, as occupation frequency of regions in the state space." I don't quite understand that. Does "invariant measure" mean the one on the state space of a MC?2012-10-20
  • 0
    You can unify them into one formalism (do the $PM$ construction on the ergodic side), but given the already clear intuitions in both cases there is no pressure to do that, and it could complicate the exposition. (2): the ergodic-theory invariant measure assigns to a region $S$ in state space the limiting fraction of time the system spends in $S$.2012-10-20
  • 0
    (3) What are the two cases like, specifically? You "either consider $T$ as stochastic (advance one step in the random walk)", or "as a fixed, non-random map on the set $PM$ of probability distributions on the Markov chain state space $M$".2012-10-20
  • 0
    Specifying one state is a special case of giving a probability distribution over which state the chain is in; then perform the random walk one step. The effect of this is a map from $PM$ to $PM$; see the sketch below.2012-10-20
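
For concreteness, the "fixed, non-random map on $PM$" discussed in these comments is simply $\mu \mapsto \mu P$. A minimal sketch, with an arbitrary illustrative transition matrix:

```python
import numpy as np

# Illustrative two-state chain; any stochastic matrix works here.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def T(mu):
    """Deterministic map PM -> PM: advance the chain one step in distribution."""
    return mu @ P

mu = np.array([1.0, 0.0])  # "specifying one state" = a point mass
for _ in range(50):
    mu = T(mu)
print(mu)  # converges to the stationary distribution, about [0.833, 0.167]
```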