
So I'm seeing a Dirichlet form written as

$$\mathscr{E}(f,f) = \frac{1}{2} \sum_{x,y} |f(x)-f(y)|^2 K(x,y)\pi(x)$$

where $K(x,y)$ is the probability of taking a step to state $y$ from $x$.

And the conventional way of writing it seems to be

$$\mathscr{E}(f) = \int \left|\nabla f\right|^2 \, d\mu.$$

Why are the two expressions equivalent? I just don't see how $K(x,y)\pi(x)$ could be analogous to $d\mu$ in the way that $|f(x)-f(y)|^2$ is analogous to $|\nabla f|^2$. If it were $\pi$ alone, I could understand, but why do they use $K(x,y)\pi(x)$?

I hope this isn't a stupid question (I feel like it is).

  • You may want to look at Qiaochu Yuan's answer to [this question on the discrete Laplacian](http://math.stackexchange.com/questions/6328/discrete-version-of-laplacian). He describes the discrete Laplacian on a regular lattice. You appear to be looking at what amounts to a weighted graph. In that case, you need to know whether $x$ and $y$ are edge-connected and how 'strongly'. You need $K(x,y)$ for that. (2011-03-23)
  • Did you get something out of the answer below? (2011-04-07)

1 Answer


A short introduction was given last year at the PIMS Summer School in Probability by Zhenqing Chen, in lectures on Dirichlet Form Theory and Invariance Principle; see the slides of the first lecture.

A translation from the continuous setting you know to the discrete setting is as follows. Consider a graph $G=(V(G),E(G))$ with vertex set $V(G)$ and edge set $E(G)$, and a Markov chain on $G$ with kernel $K$. This means that $K$ is defined on $V(G)\times V(G)$ and that $K(x,y)=0$ if $(x,y)\notin E(G)$. The gradient of a function $f$ defined on $V(G)$ is the function $\nabla f$ defined on $E(G)$ by $$ \nabla f(x,y)=f(y)-f(x). $$ Then $\mathcal{E}(f)$ is simply the squared $\ell^2$ norm of $\nabla f$ with respect to the measure $\mu$ defined on $E(G)$ by $\mu(x,y)=\pi(x)K(x,y)$, the factor $\frac{1}{2}$ compensating for the fact that each edge is counted once in each direction.
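To make this dictionary concrete, here is a minimal numerical sketch (my own illustration, not from the post: the conductance matrix, the test function $f$, and all variable names are invented, and the chain is assumed reversible). It checks that the sum from the question equals the standard quadratic form $\langle f,(I-K)f\rangle_\pi$, with $\mu(x,y)=\pi(x)K(x,y)$ doing the weighting:

```python
import numpy as np

# Build a small reversible chain from symmetric edge conductances c(x,y):
# K(x,y) = c(x,y)/c(x) and pi(x) = c(x)/sum(c), so that
# mu(x,y) = pi(x) K(x,y) is symmetric in (x,y).
c = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
cx = c.sum(axis=1)            # c(x) = sum_y c(x,y)
K = c / cx[:, None]           # Markov kernel on V(G)
pi = cx / cx.sum()            # stationary distribution, pi(x)K(x,y) = pi(y)K(y,x)

f = np.array([1.0, -2.0, 0.5])    # an arbitrary test function on the vertices

# Dirichlet form exactly as written in the question:
# (1/2) * sum_{x,y} |f(x) - f(y)|^2 K(x,y) pi(x)
grad2 = (f[:, None] - f[None, :]) ** 2        # |nabla f(x,y)|^2 on ordered pairs
E_sum = 0.5 * np.sum(grad2 * K * pi[:, None])

# The same quantity as the quadratic form <f, (I - K) f>_pi,
# an identity that holds because the chain is reversible.
E_op = pi @ (f * (f - K @ f))

print(E_sum, E_op)            # agree up to floating-point rounding
assert np.isclose(E_sum, E_op)
```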

In the continuous case, $\mathcal{E}(f)$ for a function $f$ defined on $\mathbb{R}^n$ (the analogue of $V(G)$ in the discrete setting) is the squared $L^2$ norm of the usual gradient $\nabla f$, which lives on the tangent spaces $T_x\mathbb{R}^n$ (the analogue of $E(G)$); the difference is not visible because at every $x$ in $\mathbb{R}^n$ the tangent space $T_x\mathbb{R}^n$ is $\mathbb{R}^n$ itself.

  • Another thing I don't understand is why $$\frac{E_{\mu}\left(\left\|\nabla e^{2\lambda F}\right\|_2^2\right)}{E_{\mu}\left(e^{2\lambda F}\right)} \leq \sup \left\|\nabla F\right\|_2^2.$$ (2011-03-23)
  • This one is easy: write $\nabla e^{2\lambda F}$ as $G\nabla F$ for a well-chosen scalar function $G$. (2011-03-23)
  • That's where I'm stuck. I'm trying to figure out what $G$ should be. (Someone who is very unfamiliar with gradients :/...) (2011-03-23)
  • Come on! You write a whole post referring to gradients (in Euclidean spaces) as the easy case, and you do not know the *definition* of the gradient of a function... You must be joking. (2011-03-23)
  • OK. So what is $G$? (2011-03-23)
  • I know the definition. Just to make sure, though: the gradient of $e^{2\lambda F}$ is $2\lambda e^{2\lambda F}\nabla F$? (2011-03-23)
  • It is. (But why did you delete your penultimate comment?) (2011-03-23)
  • I was going to clarify that I lack experience with multivariable calculus (I never took a class on it because my college skips straight into real analysis). So I guess $G$ is defined by $G(v) = 2\lambda e^{2\lambda F}v$. Hmm... (2011-03-23)
  • Now I'm unable to finish this off because of the extra $2\lambda$ in $2\lambda e^{2\lambda F}$. This doesn't look like a simple operator-norm or Cauchy-Schwarz inequality. I'm clearly missing something here... (2011-03-23)
  • Never mind, I figured it out. Thanks for the help! Another question I have: why is there a $1/2$ in the discrete case of the Dirichlet form? I don't see an intuitive reason for it. (2011-03-24)
  • @Glassjawed As Didier noted, the gradient is defined on edges. The $\frac{1}{2}$ prevents one from double-counting the edges when one sums over ordered pairs of vertices; see the derivation sketched below. (2011-03-24)
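To spell out that double-counting remark, here is a short derivation (added for clarity; it assumes, as is standard in this setting, that the chain is reversible, i.e. $\pi(x)K(x,y)=\pi(y)K(y,x)$, so that $\mu$ is symmetric). The diagonal terms vanish since $f(x)-f(x)=0$, so the sum over ordered pairs groups into a sum over unordered edges:

$$\sum_{x,y}|f(x)-f(y)|^2\,\mu(x,y)=\sum_{\{x,y\}\in E(G)}|f(x)-f(y)|^2\bigl(\mu(x,y)+\mu(y,x)\bigr)=2\sum_{\{x,y\}\in E(G)}|f(x)-f(y)|^2\,\mu(x,y).$$

The factor $\frac{1}{2}$ therefore turns the sum over ordered pairs into a sum over (unordered) edges, which is what the single integral over the edge set represents in the continuous notation.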