
In Durrett's probability textbook, the Markov chain transition probability is defined as follows:

A function $p:S \times \Lambda \rightarrow [0,1]$ is called a transition probability if:
(i) for each $x \in S$, $A \mapsto p(x,A)$ is a probability measure on $(S,\Lambda)$;
(ii) for each $A \in \Lambda$, $x \mapsto p(x,A)$ is a measurable function on $(S,\Lambda)$.

How should I understand this definition? Can I regard $x$ as the starting state and $A$ as a set of possible ending states?

Why is the transition probability defined like this, instead of as $p:\Lambda \times \Lambda \rightarrow [0,1]$ or in some other way? And why is the second requirement necessary?
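To make the two conditions concrete, here is a minimal sketch on a finite state space, where the kernel reduces to a row-stochastic matrix. (The state space $S = \{0,1,2\}$, the matrix `P`, and the power-set $\sigma$-algebra are all illustrative assumptions, not part of Durrett's definition.)

```python
import numpy as np

# Hypothetical finite chain: S = {0, 1, 2}, Lambda = all subsets of S.
# The kernel is encoded by a row-stochastic matrix P, so that
# p(x, A) = sum of P[x, y] over y in A.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.1, 0.8, 0.1],
    [0.0, 0.3, 0.7],
])

S = {0, 1, 2}

def p(x, A):
    """Transition probability p(x, A) = P(X_{n+1} in A | X_n = x)."""
    return sum(P[x, y] for y in A)

# Condition (i): for each fixed x, the map A -> p(x, A) is a probability
# measure: it assigns total mass 1 to S and is additive on disjoint sets.
assert all(abs(p(x, S) - 1.0) < 1e-12 for x in S)
assert abs(p(0, {0, 1}) - (p(0, {0}) + p(0, {1}))) < 1e-12

# Condition (ii): for each fixed A, the map x -> p(x, A) is a function
# of the starting point.  On a finite S with the power-set sigma-algebra
# every function is measurable, so the condition holds automatically;
# on a general (S, Lambda) it is a genuine restriction.
print([p(x, {1, 2}) for x in sorted(S)])
```

In this picture, $x$ is indeed the starting state, but $A$ is a *set* of possible ending states, which is why the second argument ranges over the $\sigma$-algebra $\Lambda$ rather than over $S$. Condition (ii) is what makes expressions like $\int p(x,A)\,\mu(dx)$ well defined, e.g. when computing the distribution after one step from an initial distribution $\mu$.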

  • Have you read the first chapter, Measure Theory, where they introduce measures, measurable spaces, and measurable maps, which are essential to understanding this definition? (2017-02-17)
  • Yes, but I still find this definition hard to understand. (2017-02-21)
