I'm having trouble understanding the proof of Theorem 28.3 in these lecture notes: http://www.stat.berkeley.edu/users/pitman/s205s03/lecture28.pdf
It says that $H_t(x,y)=H_t\delta_y(x)$, where $\delta_y(x)=1$ if $x=y$ and $0$ if $x\neq y$.
I'm stuck on this step; could someone explain it?
Also, why is $E(H_t f)=Ef$? In particular, I don't see why $K(c)=c$ when $c$ is a constant and $K$ is a Markov operator, because I thought $K$ could only act on functions. Can we regard the scalar $c$ as the constant function $x\mapsto c$?
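To test my interpretation, here is a small numerical sketch of my own (not from the notes), with a random stochastic matrix $K$ standing in for a Markov operator, the constant $c$ viewed as the constant vector, and $Ef=\sum_x \pi(x)f(x)$ for the stationary distribution $\pi$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# random stochastic matrix standing in for a Markov operator K
K = rng.random((n, n))
K /= K.sum(axis=1, keepdims=True)      # rows sum to 1

# stationary distribution pi: left eigenvector of K for eigenvalue 1
w, V = np.linalg.eig(K.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

c = 3.7
const = c * np.ones(n)                  # the constant c as a function
print(np.allclose(K @ const, const))    # K(c) = c, because rows sum to 1

f = rng.random(n)
# E g = sum_x pi(x) g(x); check E(K f) = E f, using stationarity of pi
print(np.isclose(pi @ (K @ f), pi @ f))
```

Both checks print `True`, which suggests the answer to my own sub-question is yes: $Kc=c$ is just the row sums being $1$ applied to the constant function.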
EDIT: I know this is another late question, but it is about the same paper. Near the end, there is an equality I don't understand. It says
$|h_t(x,y)-1|=\left|\sum_{z}(h_{t/2}(x,z)-1)(h_{t/2}(z,y)-1)\pi(z)\right|$
Expanding the product, the RHS should be $\left|\sum_{z} \bigl(h_{t/2}(x,z)h_{t/2}(z,y)-h_{t/2}(x,z)-h_{t/2}(z,y)+1\bigr) \pi(z)\right|,$ but I can't simplify that to the LHS.
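For what it's worth, the identity does check out numerically. This is my own sketch, in discrete time with $t=2$ so that $t/2=1$, taking $h_t(x,y)=K^t(x,y)/\pi(y)$ as the density of the kernel with respect to $\pi$ (my reading of the notes' notation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
K = rng.random((n, n))
K /= K.sum(axis=1, keepdims=True)    # random stochastic matrix

w, V = np.linalg.eig(K.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()                        # stationary distribution

h1 = K / pi                           # h_{t/2}(x,y) = K(x,y)/pi(y), here t/2 = 1
h2 = (K @ K) / pi                     # h_t(x,y) for t = 2

# RHS of the identity: sum_z (h1(x,z)-1)(h1(z,y)-1) pi(z), as a matrix product
rhs = (h1 - 1) @ np.diag(pi) @ (h1 - 1)
print(np.allclose(h2 - 1, rhs))       # holds entrywise
```

So numerically the cross terms and the constant term really do collapse; I just don't see which facts about $h_{t/2}$ (presumably its sums against $\pi$ in $z$) make that happen.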
And lastly, I don't see how to prove the inequality $\left\|H_t-E\right\|_{2\to\infty}\leq \left\|H_{t_1}\right\|_{2\to\infty} \left\|H_{t_2}-E\right\|_{2\to 2}$ for $t=t_1+t_2$.
These are the questions I have been unable to resolve, and answers would help me greatly in understanding these bounds for Markov chains. Thanks!
pr.probability