
I am reading Norris's "Markov Chains" and would appreciate an explanation of the following bit.

After stating the Markov property, it is said that (on page 4)

In general, any event A determined by $X_0, ...,X_m$ may be written as a countable disjoint union of elementary events $A=\bigcup_{k=1}^{\infty} A_k$

What are these $A_k$'s and why are there infinitely many of them? How does this follow from the Markov property?

Thanks.

  • 1
    Seems to have nothing to do with the Markov property. How are *elementary events* defined? (2011-10-09)
  • 0
    @DidierPiau: Would I be right in thinking that an *elementary event* is some $\omega\in \Omega$, where $\Omega$ is the sample space? How does this link with the "determined by $X_0,\ldots,X_m$" bit? (2011-10-09)
  • 0
    No. Events are not elements of $\Omega$ but subsets of $\Omega$. For example (in my answer below), each $[Y=y]=\{\omega\in\Omega\mid Y(\omega)=y\}$ is an event. (2011-10-09)
  • 0
    According to the answers below, you need to look up the definition of "elementary event". One hopes it occurs in Norris's "Markov Chains" before this point! (2011-10-09)

2 Answers


Norris considers denumerable Markov chains, so $Y=(X_0,\ldots,X_m)$ takes values in a countable set $\mathcal Y_m$. Thus any event in the sigma-algebra generated by $Y$ can be written as $A=\bigcup_{y\in \mathcal Y(A)}[Y=y]$ for some subset $\mathcal Y(A)\subseteq\mathcal Y_m$. Each $[Y=y]$ is probably what Norris calls an elementary event.

Thus, this has nothing to do with the Markov property but is a consequence of the fact that the random variables considered can take at most countably many values.

  • 0
    So if $X_n$ takes values in $I$, then $y$ is a length-$(m+1)$ vector with entries in $I$, and $\mathcal Y_m$ is the set of all such vectors? (2011-10-09)
  • 0
    I assume $A=\{X_0=i_0,\ldots,X_m=i_m\}$ (please correct me if I'm wrong), but then doesn't this correspond to one unique $y$? (2011-10-09)
  • 0
    Yes, if every $X_n$ takes values in $I$, one should consider $\mathcal Y_m=I^{m+1}$. And the (elementary) event $A$ you wrote in your comment is $A=[Y=y]$ for $y=(i_0,\ldots,i_m)$. (2011-10-09)
  • 0
    Thanks :-) So if this corresponds to a unique $y$, why are we taking a union? (Sorry for my being daft.) (2011-10-09)
  • 0
    Because not every event is elementary. Consider $m=2$ and $A=[X_0=i_0,X_2=i_2]$, for example. Then $A$ is not $[Y=y]$ for any single $y$, but $A$ is the union of the elementary events $[Y=y]$ over every $y$ in the set $\{i_0\}\times I\times\{i_2\}$. (2011-10-09)
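The worked example in the last comment can be checked mechanically. This is a minimal sketch, assuming for illustration a small finite state space (a countably infinite $I$ works the same way in principle); the names `I`, `i0`, `i2` are hypothetical placeholders, not anything from Norris:

```python
# Sketch: decompose A = [X_0 = i0, X_2 = i2] into elementary events
# [Y = y] for Y = (X_0, X_1, X_2), over an assumed finite state space I.
from itertools import product

I = {0, 1, 2}   # assumed (hypothetical) state space
i0, i2 = 0, 2   # the constrained coordinates of A

# A as a set of outcomes y in I^{m+1} (m = 2): X_1 is unconstrained.
A = {y for y in product(I, repeat=3) if y[0] == i0 and y[2] == i2}

# The decomposition from the comment: union of [Y = y] over {i0} x I x {i2}.
decomposition = set(product({i0}, I, {i2}))

print(A == decomposition)  # the two sets of outcomes coincide: True
print(len(decomposition))  # one elementary event per value of X_1: 3
```

The elementary events $[Y=y]$ are pairwise disjoint by construction (distinct $y$ give distinct outcomes), so this really is a disjoint union, and padding with empty sets makes it countably infinite if desired.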

My guess is that Norris is assuming a countable state space, and each elementary event is either empty or specifies the outcome for each of $X_0, \ldots, X_m$. There do not have to be infinitely many of them. And this has nothing at all to do with the Markov property.