
Let $A$ be a $k\times k$ matrix and $\sigma(A)$ its spectrum, i.e. the collection of eigenvalues of $A$.

If we know $\lambda\notin\sigma(A)$, then $\lambda$ is at a positive distance from the spectrum, since the latter is compact.

I wonder whether there is a bound for the norm of the inverse of $\lambda I-A$, perhaps in terms of the distance from $\lambda$ to the spectrum. Any norm may be used for $\|(\lambda I-A)^{-1}\|$.

Thanks!

  • 0
    What do you want such a bound for? Have you tried the spectral radius formula? – 2012-06-15
  • 0
    Is there any interest in cases where $\lambda$ is "near" some point in $\sigma(A)$? – 2012-06-15
  • 0
    @QiaochuYuan I am considering the spectrum of $T$, a direct sum of countably many matrices, $A_n$, of bounded sizes. If $\lambda$ is not in the spectrum of any of the matrices, then each $\lambda-A_n$ would be invertible, but it remains to prove that the norms of their inverses are uniformly bounded. – 2012-06-15
  • 0
    @Hui: there's no reason to believe that the norms of their inverses are uniformly bounded. Take a direct sum of $1 \times 1$ matrices with entries $1 - \frac{1}{n}, n \in \mathbb{N}$ and $\lambda = 1$. – 2012-06-15
  • 0
    @QiaochuYuan of course you are right, since $1$ is a limit point of the union of the spectra of all the matrices. But I am looking at the case when $\lambda$ is at a positive distance from the closure of their union, which is a compact set. – 2012-06-15
  • 0
    Have you seen this paper: http://matwbn.icm.edu.pl/ksiazki/zm/zm22/zm2218.pdf ? The authors deal with estimates of resolvents in terms of the distance from the operator's spectrum. – 2012-06-15
  • 0
    @Norbert It seems relevant. Will check it tomorrow. Thanks, Norbert! – 2012-06-16
  • 1
    @Norbert Well, I checked that paper. The bad news is that it gives estimates only for self-adjoint operators and for operators that differ "not much" from self-adjoint ones, and so it is quite restrictive. I would guess there is some more general result when all the operators are assumed to be $k\times k$ matrices. – 2012-06-16
  • 0
    In general, I don't think you will get a uniform bound only in terms of the distance from the spectrum; it will also have to involve the size of the matrix. I am in a hurry right now, but you can see what I mean if you consider the matrix which has 1s above the leading diagonal and 0s everywhere else. – 2012-06-17
  • 0
    @YemonChoi I agree with you. I was first working on finding the spectrum of the direct sum of arbitrary operators, but after I found an example similar to the one you mentioned, I realized I had to first put a bound on the sizes of the blocks. I guess there is a better chance if we only consider matrices of size $k\times k$. – 2012-06-17
  • 0
    You also need to put a bound on the (operator) norm of the matrix. Otherwise I can take the matrix from my previous comment and multiply it by some huge constant; this does not change the spectrum, but it makes the norm of the resolvent arbitrarily large. – 2012-06-18
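The shift-matrix counterexample sketched in the comments above is easy to check numerically. The following is a minimal sketch (assuming NumPy; the sizes tested are arbitrary choices): for the $k\times k$ matrix with 1s on the superdiagonal the spectrum is $\{0\}$, so ${\rm dist}(1,\sigma(A))=1$ for every $k$, yet $\Vert(I-A)^{-1}\Vert$ grows with $k$.

```python
import numpy as np

def shift_matrix(k):
    # k-by-k matrix with 1s on the superdiagonal and 0s elsewhere;
    # it is nilpotent, so its spectrum is {0}.
    return np.diag(np.ones(k - 1), 1)

lam = 1.0  # dist(lam, spectrum) = 1 for every k
for k in [2, 5, 10, 20]:
    A = shift_matrix(k)
    resolvent_norm = np.linalg.norm(np.linalg.inv(lam * np.eye(k) - A), 2)
    print(k, resolvent_norm)  # grows without bound as k increases
```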

1 Answer


Here is a crude upper bound. It might not be good enough for what you are after, but without further constraints on your matrices, it is hard to see how one can do substantially better.

Throughout, I am using the operator norm. $\newcommand{\norm}[1]{\Vert#1\Vert} \newcommand{\Cplx}{{\bf C}}\newcommand{\lm}{\lambda}$

Fix $k$. Let $A$ be a $k\times k$ matrix with complex entries. Schur's theorem from linear algebra tells us that there is an upper triangular matrix $B$ and a unitary matrix $U$ such that $A=U^*BU$.

Let $d_1,\dots, d_k$ be the diagonal entries of $B$ (these form the spectrum of $B$, and hence of $A$). For $\lm\in\Cplx\setminus\{d_1,\dots,d_k\}$, let $D_\lm$ be the diagonal matrix with entries $\lm-d_1, \dots, \lm-d_k$, and put $$ C_\lm = D_\lm^{-1} (\lm I - B). $$ Then $C_\lm$ is an upper triangular matrix with each diagonal entry equal to $1$, so $C_\lm - I$ is strictly upper triangular and hence nilpotent, with $(C_\lm - I)^k=0$; in particular $C_\lm = I + {}$ a nilpotent matrix, and so is invertible. In fact, just by the usual formula for $(1+x)^{-1}$, we have $$ C_\lm^{-1} = \sum_{j=0}^{k-1} (-1)^j(C_\lm-I)^j $$ and thus $$ (\lm I- B)^{-1} =(D_\lm C_\lm)^{-1} = \sum_{j=0}^{k-1} (-1)^j(C_\lm-I)^j D_\lm^{-1}. $$
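The algebra above can be sanity-checked numerically. This is a small sketch (assuming NumPy), working directly with an upper triangular $B$, which by unitary invariance of the operator norm loses no generality; the matrix entries, size, and the point $\lambda$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5
# Upper triangular B (the Schur factor); its diagonal is its spectrum.
B = np.triu(rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)))
d = np.diag(B)            # spectrum of B
lam = 4.0 + 0.0j          # a point off the spectrum

# D_lambda: diagonal matrix with entries lam - d_j
D = np.diag(lam - d)
# C_lambda = D_lambda^{-1} (lam I - B): unit upper triangular
C = np.linalg.inv(D) @ (lam * np.eye(k) - B)

# C - I is strictly upper triangular, hence nilpotent: (C - I)^k = 0
N = C - np.eye(k)
assert np.allclose(np.linalg.matrix_power(N, k), 0)

# Finite Neumann series: (lam I - B)^{-1} = sum_{j<k} (-1)^j N^j D^{-1}
inv_series = sum((-1) ** j * np.linalg.matrix_power(N, j)
                 for j in range(k)) @ np.linalg.inv(D)
assert np.allclose(inv_series, np.linalg.inv(lam * np.eye(k) - B))
```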

Now we just use the triangle inequality and submultiplicativity of the norm. Let $d(\lm) = {\rm dist}(\lm, \sigma(B)) = \min_j \vert d_j-\lm\vert$. Then $\norm{D_\lm^{-1}} = d(\lm)^{-1}$, and $$ \norm{ (\lm I- B)^{-1} } \leq \sum_{j=0}^{k-1} d(\lm)^{-j-1} \norm{\lm I - B}^j = d(\lm)^{-1} \frac{d(\lm)^{-k}\norm{\lm I - B}^k - 1}{d(\lm)^{-1}\norm{\lm I-B} -1 } . $$ Since $B$ is unitarily equivalent to $A$, the same inequality holds with $B$ replaced by $A$.
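The final estimate is easy to evaluate. The following sketch (assuming NumPy; the seed, size, and choice of $\lambda$ are arbitrary) compares the geometric-series bound with the true resolvent norm for one random matrix:

```python
import numpy as np

def resolvent_bound(A, lam):
    # Geometric-series estimate on ||(lam I - A)^{-1}|| in the operator
    # (spectral) norm: sum_{j<k} d(lam)^{-j-1} ||lam I - A||^j.
    k = A.shape[0]
    d = np.min(np.abs(lam - np.linalg.eigvals(A)))  # dist(lam, spectrum)
    M = np.linalg.norm(lam * np.eye(k) - A, 2)
    return sum(d ** (-j - 1) * M ** j for j in range(k))

rng = np.random.default_rng(1)
k = 4
A = rng.standard_normal((k, k))
lam = 6.0  # well away from the spectrum for this seed
actual = np.linalg.norm(np.linalg.inv(lam * np.eye(k) - A), 2)
print(actual, resolvent_bound(A, lam))  # bound is crude but finite
```

As expected, the bound is far from tight here; its point is that it depends only on $k$, $d(\lambda)$, and $\norm{\lm I - A}$.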

  • 0
    Thanks, Yemon. Actually your answer suffices for my problem. As you said in your comment, we need a bound on the norms of the matrices in the direct sum, but there already is one: since their direct sum is a bounded operator, the matrices have a uniform bound on their norms. Then we can use this bound in your last inequality to bound the norms of their inverses. – 2012-06-18
  • 0
    On the other hand, I believe your bound is actually sharp; maybe we can find a matrix for which equality holds. – 2012-06-18