
I have a question about equation 6 in this paper.

Simplifying somewhat, the authors state the following

$\int_0^{\infty} e^{-tL} dt = L^{-1}$

$L$ here is a graph Laplacian and therefore is a matrix. (Leave aside for the moment that $L$ is singular, which would make the above equation meaningless.)

To derive the equation, I first tried expanding $e^{-tL}$ into a series and integrating the individual terms in the hope that something clean comes out: $\int_0^{\infty} e^{-tL} dt = \sum_{i=0}^\infty \frac{(-L)^i}{i!}\int_0^{\infty} t^i dt$, which led nowhere for me.

So I tried the alternative tack of just treating $L$ as if it were scalar. In that case, the first equation is obviously true.

My question is, can I treat the matrix $L$ in the exponent as if it were a scalar and just integrate it?

(Bonus question: since $L$ is a graph Laplacian and therefore singular, why are the authors inverting it when they should know better?)

  • @JasonMond If you integrate up to $m$, say, then you get $\sum_{i=0}^\infty \frac{(-L)^i m^{i+1}}{(i+1)!} = -L^{-1}\left(e^{-mL}-I\right)$. Now let $m$ go to infinity. (A numerical check of this is sketched just below.)
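A minimal numerical sketch of that limit (the matrix `L` below is an arbitrary small symmetric positive definite stand-in, not the Laplacian block from the paper):

```python
import numpy as np
from scipy.linalg import expm, inv

# Arbitrary small symmetric positive definite stand-in for L.
L = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

for m in [1.0, 5.0, 20.0]:
    # Closed form of the truncated integral: int_0^m e^{-tL} dt = -L^{-1}(e^{-mL} - I).
    truncated = -inv(L) @ (expm(-m * L) - np.eye(2))
    # The difference from L^{-1} shrinks as m grows.
    print(m, np.linalg.norm(truncated - inv(L)))
```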

3 Answers

Answer (score 2)

Following Will Jagy's suggestion, we start with $L = D_{uu} - W_{uu}$. In order to show $L$ is non-singular, it suffices to show that $D_{uu}^{-1} L= I- D_{uu}^{-1}W_{uu}$ is non-singular.

Write $Q=D_{uu}^{-1}W_{uu}$. Then $Q$ has positive entries and the row sums are all strictly less than one. It's not hard to show that $Q^n\to 0$ exponentially fast, so that $(I-Q)^{-1}$ can be obtained directly as $\sum_{n=0}^\infty Q^n$. Done!
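Not part of the original answer, but a small numerical sketch of this argument; a random symmetric positive matrix stands in for $W$, and its lower-right block plays the role of the $uu$ indices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix with strictly positive entries (stand-in for W).
A = rng.uniform(0.1, 1.0, size=(6, 6))
W = (A + A.T) / 2
D = np.diag(W.sum(axis=1))          # full row sums on the diagonal

# Lower-right principal block, as in the paper: L = D_uu - W_uu.
u = slice(3, 6)
Q = np.linalg.inv(D[u, u]) @ W[u, u]

print(Q.sum(axis=1).max())          # every row sum is strictly below 1

# Q^n -> 0 fast, so the Neumann series converges to (I - Q)^{-1}.
neumann = sum(np.linalg.matrix_power(Q, n) for n in range(200))
print(np.linalg.norm(neumann - np.linalg.inv(np.eye(3) - Q)))
```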

  • Actually, $Q$ need not be symmetric, so the relevant norm here is $\| Q \|_\infty$, which the row-sum condition makes strictly less than one.
Answer (score 4)

The whole point is that the matrix in question is not singular; it is symmetric positive definite. As is common in this sort of paper, many ordinary mathematical comments are left unsaid. Just before equation (1), we find out that $W$ is a real symmetric matrix of strictly positive numbers, called "weights." It need not be positive definite itself. Just before equation (3), we find out that $D$ is the diagonal matrix with $ d_{ii} = \sum_{j=1}^n \; w_{ij}.$ Next, we find $ \Delta = D - W. $ Now, as an $n$ by $n$ matrix, $\Delta$ is symmetric and singular: by construction, the vector with every entry equal to $1$ is a null vector.

However, they then go on to take submatrices, starting with formula (4). By the time we get to formula (5), we find that your $ L = D_{uu} - W_{uu} $ is invertible, the reason for this being, in fact, that it is positive definite. This requires proof, and I will put that in when I have time. For now, note these examples: if $n=2$ and $ W \; = \; \left( \begin{array}{cc} a & b \\ b & c \end{array} \right) , $ then $ D -W \; = \; \left( \begin{array}{cc} b & -b \\ -b & b \end{array} \right) , $ and the only possible submatrix is 1 by 1, $ L = (b). $

If $n=3$ and $ W \; = \; \left( \begin{array}{ccc} a & f & e \\ f & b & d \\ e & d & c \end{array} \right) , $ then $ D - W \; = \; \left( \begin{array}{ccc} e + f & -f & -e \\ -f & f + d & -d \\ -e & -d & d + e \end{array} \right) , $ and the possible lower right corners are either 1 by 1 $ L = (d + e) $ or two by two $ L \; = \; \left( \begin{array}{cc} f + d & -d \\ -d & d + e \end{array} \right) . $ Because we required $a,b,c,d,e,f$ to be strictly positive, the possible matrix $L$s depicted are positive definite.
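As a quick check of the $n=3$ example (not in the original answer; random strictly positive $d,e,f$ stand in for actual weights), the two-by-two $L$ indeed has strictly positive eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
d, e, f = rng.uniform(0.01, 10.0, size=3)   # arbitrary strictly positive weights

L = np.array([[f + d, -d],
              [-d,    d + e]])

# det = fd + fe + de > 0 and trace > 0, so both eigenvalues are positive.
print(np.linalg.eigvalsh(L))
```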

Still looking for a simple proof, but this is what is going on: Let $U$ be a symmetric real matrix of strictly positive elements. Let $T$ be the diagonal matrix such that $T_{ii} = \sum_{j} U_{ij}.$ Then let $S = T - U,$ so that $S$ is symmetric with row-sums all vanishing. The theorem is that $S$ is positive semidefinite. When we then increase the diagonal elements of $S$ by positive reals to arrive at the original $L$ above, the result is positive definite.
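Here is a rough numerical illustration of that outline, with a random symmetric positive matrix standing in for the block of weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random symmetric matrix of strictly positive entries (stand-in for U = W_uu).
A = rng.uniform(0.1, 1.0, size=(4, 4))
U = (A + A.T) / 2
T = np.diag(U.sum(axis=1))
S = T - U                                 # row sums vanish

# S is positive semidefinite; its smallest eigenvalue is (numerically) zero.
print(np.linalg.eigvalsh(S))

# L = D_uu - W_uu adds to the diagonal of S the weights going outside the
# block, which are strictly positive, so L is positive definite.
extra = np.diag(rng.uniform(0.1, 1.0, size=4))
print(np.linalg.eigvalsh(S + extra)[0])   # strictly positive
```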

Answer (score 3)

If you look at the definition of the Riemann integral, you will see that you don't make much use of properties of the codomain of your function (only that it is a vector space, and that you can take limits in it). So you can define the Riemann integral for continuous functions $\mathbb{R}\to M_n(\mathbb{C})$ with no issues.

The Fundamental Theorem of Calculus still holds, and $\left(-L^{-1}e^{-tL}\right)' = e^{-tL}$. Then $ \int_0^n e^{-tL}\,dt=\left.-L^{-1}e^{-tL}\right|_0^n=-L^{-1}e^{-nL}+L^{-1}\xrightarrow{n\to\infty}L^{-1}. $
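A hedged numerical sketch of this limit, with an arbitrary symmetric positive definite $L$ and the integral approximated by a plain Riemann sum (computed entrywise, as the remark about $\mathbb{R}\to M_n(\mathbb{C})$ above suggests):

```python
import numpy as np
from scipy.linalg import expm, inv

# Arbitrary symmetric positive definite stand-in for L (eigenvalues 1, 3, 4).
L = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  3.0]])

# Left Riemann sum for int_0^n e^{-tL} dt.
n, steps = 40.0, 4000
dt = n / steps
riemann = sum(expm(-t * L) for t in np.arange(0.0, n, dt)) * dt

# Close to L^{-1}; the error shrinks as n grows and dt -> 0.
print(np.linalg.norm(riemann - inv(L)))
```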

When $L$ is singular, the assertion makes no sense, and I have no idea what the authors mean. Consider $ L=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}. $ Then $L^2=0$, so $ e^{-tL}=I-tL. $ Claiming that $ \int_0^\infty(I-tL)\,dt $ is a matrix is an interesting statement.

Added: after looking at the claim from Will that $L$ is positive definite, the argument becomes much simpler, because in this case we can write $ L=\sum_{k=1}^n\lambda_kP_k, $ where the $\lambda_k$ are the eigenvalues of $L$ and the $P_k$ are pairwise orthogonal projections of rank one. Then $ e^{-tL}=\sum_{k=1}^ne^{-t\lambda_k}P_k, $ and so $ \int_0^\infty e^{-tL}\,dt=\sum_{k=1}^n\left(\int_0^\infty e^{-t\lambda_k}\,dt\right)\;P_k =\sum_{k=1}^n\lambda_k^{-1}\;P_k=L^{-1}. $
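And a short sketch of this spectral computation (again with an arbitrary symmetric positive definite $L$, not the one from the paper):

```python
import numpy as np
from scipy.linalg import expm, inv

L = np.array([[2.0, -1.0],
              [-1.0, 3.0]])            # arbitrary symmetric positive definite

# Spectral decomposition L = sum_k lambda_k P_k with rank-one projections P_k.
lams, V = np.linalg.eigh(L)
P = [np.outer(V[:, k], V[:, k]) for k in range(len(lams))]

# e^{-tL} = sum_k e^{-t lambda_k} P_k
t = 0.7
approx = sum(np.exp(-t * lk) * Pk for lk, Pk in zip(lams, P))
print(np.linalg.norm(expm(-t * L) - approx))

# int_0^infty e^{-tL} dt = sum_k (1 / lambda_k) P_k = L^{-1}
print(np.linalg.norm(sum(Pk / lk for lk, Pk in zip(lams, P)) - inv(L)))
```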

  • I have just made an addition to the answer, showing that knowing that $L$ is positive definite makes life easier.