
Setup

Let $\mathcal B(\mathbb C^n)$ be the space of linear operators on $\mathbb C^n$.

Let $A, B\in \mathcal B(\mathbb C^n)$ be

  • self-adjoint, i.e., $A=A^*$ and $B=B^*$,
  • non-negative, i.e., $A\geq 0$ and $B\geq 0$ ($A\geq 0\, :\Leftrightarrow \, \big(\forall x\in \mathbb C^n\,,\quad\big\langle x,Ax\big\rangle\geq0\big)$).

Questions

  • Does the limit $\lim_{t\to \infty} \exp(-A-tB)$ exist?
  • If this limit exists, what is this limit in terms of $A$ and $B$?

What I have done so far

  • In the case $AB=BA$, $A$ and $B$ can be simultaneously diagonalized, and with $\mathcal H_2:=\operatorname{ran}(B)\,,\quad \mathcal H_1:=\mathcal H_2^\perp$ we get $\lim_{t\to +\infty} \exp(-A-tB)=\exp(-A\big|_{\mathcal H_1})\oplus 0$ with $A\big|_{\mathcal H_1}$ the endomorphism induced on $\mathcal H_1$ by $A$.
  • If I do not assume $AB=BA$, I expect that the limit still exists, but $\mathcal H_1$ is no longer invariant under $A$. With $\tilde A\in \mathcal B(\mathcal H_1)$ defined by $\forall x_1,y_1 \in \mathcal H_1\,,\quad \big\langle x_1,\tilde A y_1\big\rangle=\big\langle x_1,Ay_1\big\rangle$, I expect that $\lim_{t\to +\infty} \exp(-A-tB)=\exp(-\tilde A)\oplus 0 \in \mathcal B(\mathcal H_1 \oplus\mathcal H_2)=\mathcal B(\mathbb C^n)\,,$ but I have no idea how to prove it.
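The conjectured limit can be checked numerically. Below is a minimal sketch in Python with NumPy/SciPy; the random test matrices and the names `A_tilde` and `conjecture` are illustrative assumptions, not part of the question.

```python
import numpy as np
from scipy.linalg import expm, null_space

rng = np.random.default_rng(0)
n, r = 5, 2  # B will have rank r

# Random real PSD test matrices: A of full rank, B of rank r.
X = rng.standard_normal((n, n))
A = X @ X.T
Y = rng.standard_normal((n, r))
B = Y @ Y.T

Q = null_space(B)                      # orthonormal basis of H1 = ker(B) = ran(B)^perp
A_tilde = Q.T @ A @ Q                  # compression of A onto H1
conjecture = Q @ expm(-A_tilde) @ Q.T  # exp(-A_tilde) on H1, zero on H2

for t in [1e2, 1e4, 1e6]:
    err = np.linalg.norm(expm(-A - t * B) - conjecture)
    print(t, err)  # the error should shrink as t grows
```

For generic (noncommuting) choices of $A$ and $B$, the error decreases toward zero, consistent with the conjecture.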

1 Answer


This is not a complete answer, but here are some ideas that may be useful. Since positive semidefinite matrices are unitarily diagonalizable, we may suppose that $ B = U\begin{pmatrix}D&0\\0&0\end{pmatrix}U^\ast, \ A = U\begin{pmatrix}P&R\\R^\ast&S\end{pmatrix}U^\ast, $ where $U$ is an $n\times n$ unitary matrix, $D$ is an $r\times r$ positive diagonal matrix, $P$ is an $r\times r$ positive semidefinite matrix and $S$ is an $(n-r)\times(n-r)$ positive semidefinite matrix. We claim that \begin{equation} \lim_{t\to+\infty} \exp(-A-tB) = U\begin{pmatrix}0&0\\0&e^{-S}\end{pmatrix}U^\ast.\tag{1} \end{equation}

First, let us make a few observations. WLOG, suppose $U=I$ and $S$ is a nonnegative diagonal matrix. Observe that $ C_t = -A-tB = \begin{pmatrix}-P-tD&-R\\-R^\ast&-S\end{pmatrix} \sim \begin{pmatrix}-P-tD&-\sqrt{t}R\\-\frac{1}{\sqrt{t}}R^\ast&-S\end{pmatrix}, $ where $\sim$ denotes similarity. Hence, by the Gershgorin disc theorem, as $t\rightarrow\infty$, exactly $r$ eigenvalues of $C_t$ tend to $-\infty$ and the other $n-r$ eigenvalues approach the eigenvalues of $-S$. Since $C_t$ and $\exp(C_t)$ have identical eigenspaces and their eigenvalues are related by exponentiation, we see that $r$ eigenvalues of $\exp(C_t)$ tend to zero and the rest approach the eigenvalues of $e^{-S}$. Furthermore, the entries of $\exp(C_t)$ are uniformly bounded.
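The eigenvalue behaviour described above can be illustrated numerically. A sketch using random test matrices (an assumption for illustration; NumPy only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 2
X = rng.standard_normal((n, n)); A = X @ X.T   # PSD, full rank
Y = rng.standard_normal((n, r)); B = Y @ Y.T   # PSD, rank r

# Diagonalize B and reorder so the r positive eigenvalues come first.
w, U = np.linalg.eigh(B)        # ascending order: the n-r zero eigenvalues first
U = U[:, ::-1]
S = (U.T @ A @ U)[r:, r:]       # lower-right block of U^* A U

target = np.sort(np.linalg.eigvalsh(-S))
for t in [1e1, 1e3, 1e5]:
    ev = np.sort(np.linalg.eigvalsh(-A - t * B))
    # r eigenvalues run off to -infinity; the remaining n-r approach spec(-S)
    print(t, ev[:r], np.max(np.abs(ev[r:] - target)))
```

The printed deviation of the bounded eigenvalues from the spectrum of $-S$ shrinks as $t$ grows, while the other $r$ eigenvalues diverge.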

Before outlining a proof, let us first use a heuristic argument to justify $(1)$. Note that the eigenspaces of $C_t$ are also the eigenspaces of $ \frac1tC_t = \begin{pmatrix}-D-\frac1tP&-\frac1tR\\-\frac1tR^\ast&-\frac1tS\end{pmatrix}. $ As $t\rightarrow+\infty$, we see that $e_1,\ldots,e_r$, where $\{e_1,\ldots,e_n\}$ denotes the canonical basis of $\mathbb{C}^n$, are "almost" eigenvectors of $\frac1tC_t$, with eigenvalues close to those of $-D$. The corresponding eigenvalues of $C_t$ therefore tend to $-\infty$, so the corresponding eigenvalues of $\exp(C_t)$ are the ones that approach zero.

For the other $n-r$ eigenvectors of $C_t$, recall that their corresponding eigenvalues approach the eigenvalues of $-S$. Now suppose $\lambda$ and $(u^\ast, v^\ast)^\ast$ form such an eigenpair. Then $ C_t\begin{pmatrix}u\\ v\end{pmatrix} =\begin{pmatrix}-tDu-Pu-Rv\\-R^\ast u-Sv\end{pmatrix} =\begin{pmatrix}\lambda u\\ \lambda v\end{pmatrix}. $ As $t\rightarrow+\infty$, since $\lambda$ stays bounded, the first block equation forces $u$ to approach zero, and the second block equation then forces $v$ to approach an eigenvector of $-S$. In other words, $e_{r+1},\ldots,e_n$ are also approximate eigenvectors of $C_t$ and $\exp(C_t)$. Therefore it seems plausible that $\exp(C_t)$ converges to $\begin{pmatrix}0&0\\0&e^{-S}\end{pmatrix}$.
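This heuristic about the eigenvectors can also be observed numerically. A sketch with random test matrices (an illustrative assumption), taking $B=\operatorname{diag}(D,0)$ directly so that $U=I$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 5, 2
D = np.diag(rng.uniform(1.0, 2.0, size=r))
B = np.zeros((n, n)); B[:r, :r] = D    # B = diag(D, 0), i.e. WLOG U = I
X = rng.standard_normal((n, n)); A = X @ X.T

for t in [1e1, 1e3, 1e5]:
    w, V = np.linalg.eigh(-A - t * B)  # eigenvalues in ascending order
    # Columns V[:, r:] belong to the n-r bounded eigenvalues;
    # their top blocks (the "u" parts) should vanish as t grows.
    print(t, np.linalg.norm(V[:r, r:]))
```

The norm of the top blocks decays with $t$, matching the claim that $e_{r+1},\ldots,e_n$ become approximate eigenvectors.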

Now, for a rigorous proof of $(1)$, we need some results about eigenvector perturbation. The key point is to find an eigenbasis of $C_t$ that continuously approaches an eigenbasis of $B$. A MO post has mentioned some literature which I think is worth a look, but here I will employ Theorem 2.7 on p.236 of Stewart and Sun, Matrix Perturbation Theory, Academic Press. The theorem statement is one page long, so I will not state it here. Essentially, in our case, it says that if all eigenvalues of $D$ are distinct, then there exist a unitary matrix $M_t=I+O(\frac1t)$ and a block upper triangular matrix $T_t=\begin{pmatrix}-tD-P+O(\frac{1}{t})&\ast\\0&-S+O(\frac1t)\end{pmatrix}$ such that $C_t = M_tT_tM_t^\ast$. (As our $C_t$ is Hermitian, this block triangularization is actually a block diagonalization.) Therefore, as $t\rightarrow+\infty$, we have $ \exp(C_t) = \exp(M_tT_tM_t^\ast) = M_t\,\exp(T_t)\,M_t^\ast \rightarrow\begin{pmatrix}0&0\\0&e^{-S}\end{pmatrix}. $ If $D$ does not have distinct eigenvalues, we can perturb its diagonal to make them distinct. Since the entries of $\exp(C_t)$ are uniformly bounded, we can then use a continuity argument to finish the proof.

(Edit: The eigenvalues of $D$ are actually not required to be distinct in Stewart and Sun. I was confused by their use of the term "simple invariant subspace" in the theorem. It turns out that they mean the spectra of the two diagonal blocks of $B$ are disjoint. Since the two diagonal blocks in our case are $D>0$ and $0$, this requirement is always satisfied.)
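Since $M_t = I + O(\frac1t)$ and the perturbations of the blocks of $T_t$ are $O(\frac1t)$, one expects convergence in $(1)$ at rate $O(\frac1t)$. This can be checked numerically in the coordinates where $U=I$; a sketch with random test data (an illustrative assumption):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n, r = 5, 2
D = np.diag(rng.uniform(1.0, 2.0, size=r))
B = np.zeros((n, n)); B[:r, :r] = D    # B = diag(D, 0), i.e. U = I
X = rng.standard_normal((n, n)); A = X @ X.T
S = A[r:, r:]                          # lower-right block of A

limit = np.zeros((n, n)); limit[r:, r:] = expm(-S)
for t in [1e2, 1e3, 1e4]:
    err = np.linalg.norm(expm(-A - t * B) - limit)
    # t * err should stay bounded if the error is O(1/t)
    print(t, err, t * err)
```

The error decays with $t$, and $t\cdot\mathrm{err}$ remaining bounded is consistent with the $O(\frac1t)$ perturbation bounds above.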

  • I read the corresponding part in Stewart and Sun. It is exactly what I wanted. Thank you very much for your detailed explanations. – 2012-12-28