In Durrett's probability textbook, the Markov chain transition probability is defined as:
A function $p: S \times \Lambda \rightarrow [0,1]$ is called a transition probability if
(i) For each $x \in S$, $A \mapsto p(x,A)$ is a probability measure on $(S,\Lambda)$;
(ii) For each $A \in \Lambda$, $x \mapsto p(x,A)$ is a measurable function on $(S,\Lambda)$.
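To make the two conditions concrete, here is a minimal numerical sketch (the function names `p` and `norm_cdf` are my own, purely for illustration) using the Gaussian random-walk kernel on $S = \mathbb{R}$: from state $x$ the next state is $x + Z$ with $Z \sim N(0,1)$, so for an interval $A = (a, b]$ we have $p(x, A) = \Phi(b - x) - \Phi(a - x)$.

```python
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal CDF, written via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p(x, a, b):
    # Transition probability p(x, (a, b]) for the Gaussian
    # random-walk kernel: next state is x + Z, Z ~ N(0, 1),
    # so p(x, (a, b]) = Phi(b - x) - Phi(a - x).
    return norm_cdf(b - x) - norm_cdf(a - x)

x = 0.3

# Condition (i): with x fixed, A -> p(x, A) acts like a probability
# measure -- total mass 1 (approximated by a huge interval) and
# additivity over disjoint intervals.
assert abs(p(x, -50, 50) - 1.0) < 1e-12
assert abs(p(x, -1, 1) - (p(x, -1, 0) + p(x, 0, 1))) < 1e-12

# Condition (ii): with A = (-1, 1] fixed, x -> p(x, A) is a function
# of the starting point (here even continuous, hence measurable).
print([round(p(x, -1, 1), 4) for x in (-2.0, 0.0, 2.0)])
```

Note that for a fixed interval $A$, the last line tabulates one real number per starting point $x$, which is exactly the "function of $x$" that condition (ii) requires to be measurable.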
How should I understand this definition? Can I regard $x$ as the starting state and $A$ as a set of possible end states?
Why is the transition probability defined this way, rather than as $p: \Lambda \times \Lambda \rightarrow [0,1]$ or in some other form? And why is the second requirement necessary?