
I'm having trouble understanding a few steps in the proof of Theorem 28.3 in http://www.stat.berkeley.edu/users/pitman/s205s03/lecture28.pdf.

It says that $H_t(x,y)=H_t\delta_y(x)$, where $\delta_y(x)=1$ if $x=y$ and $0$ if $x\neq y$.

I'm stuck on understanding this. Could someone help?

Also, why is $E(H_t f)=Ef$? In particular, I don't see why $K(c)=c$ when $c$ is a constant and $K$ is a Markov operator, because I thought $K$ could only act on functions. Can we treat $c$ as a function too, even though it really is just a scalar?
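(My guess is that one identifies $c$ with the constant function $x \mapsto c$; then $Kc(x) = \sum_z K(x,z)\,c = c \sum_z K(x,z) = c$, because the rows of $K$ sum to $1$. Is that the intended reading?)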

EDIT: I know this is another late question, but it is about the same paper: near the end there's an equality I don't understand. It says

$|h_t(x,y)-1|=\left|\sum_{z}(h_{t/2}(x,z)-1)(h_{t/2}(z,y)-1)\pi(z)\right|$

On the RHS it looks like I should get $|\sum_{z} (h_{t/2}(x,z)h_{t/2}(z,y)-h_{t/2}(x,z)-h_{t/2}(z,y)+1) \pi(z)|.$

However, I can't see how that simplifies to the LHS.

And lastly, I don't see how to prove the inequality $\left\|H_t-E\right\|_{2\to\infty}\leq \left\|H_{t_1}\right\|_{2\to\infty} \left\|H_{t_2}-E\right\|_{2\to 2}$ for $t=t_1+t_2.$
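My best guess here is to use the semigroup property to write $H_t - E = H_{t_1}H_{t_2} - H_{t_1}E = H_{t_1}(H_{t_2}-E)$ (using that $H_{t_1}$ fixes constant functions, so $H_{t_1}E = E$), and then a bound of the form $\|AB\|_{2\to\infty} \leq \|A\|_{2\to\infty}\, \|B\|_{2\to 2}$, but I'm not sure this is the argument intended in the notes.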

These are two questions I have been unable to solve, and resolving them would help me greatly in understanding these bounds for Markov chains. Thanks!


Comment (2011-02-22): I've looked a bit deeper into the paper, but I'm a bit confused. I expected that stochasticity of $H_t(x,y)$ and stationarity of the measure $\pi(z)$ would suffice to finish the argument (see my edit below), but somehow it doesn't work out that way. Either something went wrong along the way with the summation convention, or the definition of the lowercase $h_t(x,y)$ is incorrect, or I'm just doing something wrong.

1 Answer


The first identity is just an application of the definition of $H_t$ as an operator acting on functions:

$H_t f(x) = e^{-t} \sum_{i=0}^{\infty} \frac{t^i K^i f(x)}{i!} \; .$

Applying it to the function $\delta_y$ gives

$H_t \delta_y(x) = e^{-t} \sum_{i=0}^{\infty} \frac{t^i K^i \delta_y(x)}{i!} \; .$

So the crucial thing to work out is the action of $K$ on $\delta_y$. Again using the definition, this is

$K \delta_y (x) = \sum_{z \in \mathcal{X}} K(x,z)\delta_y(z) \; .$

Since $\delta_y(z)=1$ when $z=y$ and $0$ elsewhere, this simplifies to

$K \delta_y (x) = K(x,y) \; .$

Applying $K$ $i$ times to $\delta_y(x)$ should give you the $i$-th power of the stochastic matrix $K(x,y)$, which is denoted in the text by $K^i(x,y)$. (This confused me at first, since I thought they meant an ordinary power of the $(x,y)$-component.) So for instance, for power two, this means

$K^2 \delta_y(x) \equiv K^2(x,y) = \sum_{z \in \mathcal{X}} K(x,z)K(z,y) \; .$
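More generally (filling in a step the notes leave implicit), induction on $i$ gives

$K^{i+1} \delta_y(x) = K\left(K^i \delta_y\right)(x) = \sum_{z \in \mathcal{X}} K(x,z) K^i(z,y) = K^{i+1}(x,y) \; .$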

Putting this all together, we get

$H_t \delta_y(x) = e^{-t} \sum_{i=0}^{\infty} \frac{t^i K^i(x,y)}{i!} \; .$

But this is exactly how $H_t(x,y)$ is defined. So up to this point it's nothing deep, just unwinding definitions.
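If it helps, here is a quick numerical sanity check of $H_t\delta_y(x)=H_t(x,y)$. It is a minimal sketch with a made-up $3$-state kernel $K$ (any stochastic matrix works), and it uses the closed form $H_t = e^{t(K-I)}$, which follows from the series above since $K$ and $I$ commute:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state Markov kernel K; each row sums to 1.
K = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
t = 1.7

# H_t = e^{-t} * sum_i t^i K^i / i!  =  exp(t (K - I)).
H = expm(t * (K - np.eye(3)))

# delta_y as a vector: the indicator of state y.
y = 2
delta_y = np.zeros(3)
delta_y[y] = 1.0

# (H_t delta_y)(x) should be the (x, y) entry of H_t, i.e. the y-th column.
assert np.allclose(H @ delta_y, H[:, y])
print(H @ delta_y)  # equals H[:, y]
```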

EDIT 1: Purely using the definition

$h_t(x,y)=\frac{H_t(x,y)}{\pi(y)}$

one can show that

$\begin{eqnarray} h_t(x,y) & = &\frac{H_t(x,y)}{\pi(y)}=\frac{\sum_z H_{t/2}(x,z)H_{t/2}(z,y)}{\pi(y)} \\ & = &\sum_z \frac{H_{t/2}(x,z)}{\pi(z)}\frac{H_{t/2}(z,y)}{\pi(y)}\pi(z) = \sum_z h_{t/2}(x,z)h_{t/2}(z,y)\pi(z)\end{eqnarray}$
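Unless I'm slipping somewhere, the expansion in the question does close from here. Since $h_{t/2}(x,z)\pi(z) = H_{t/2}(x,z)$, stochasticity gives

$\sum_z h_{t/2}(x,z)\pi(z) = \sum_z H_{t/2}(x,z) = 1 \; ,$

and stationarity of $\pi$ gives

$\sum_z h_{t/2}(z,y)\pi(z) = \frac{\sum_z \pi(z) H_{t/2}(z,y)}{\pi(y)} = \frac{\pi(y)}{\pi(y)} = 1 \; .$

Together with $\sum_z \pi(z) = 1$, expanding the product term by term yields

$\sum_z (h_{t/2}(x,z)-1)(h_{t/2}(z,y)-1)\pi(z) = h_t(x,y) - 1 - 1 + 1 = h_t(x,y) - 1 \; ,$

which is the equality asked about.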