
I was working on the problem below, but I don't know a precise or clean way to write the proof. I had a few approaches, but the statements and operations were used rather loosely. The problem is as follows:

Show that ${\bf A}^{-1}$ exists if and only if the eigenvalues $\lambda_i$, $1 \leq i \leq n$, of ${\bf A}$ are all non-zero, and that in that case ${\bf A}^{-1}$ has the eigenvalues $\frac{1}{\lambda_i}$, $1 \leq i \leq n$.

Thanks.

6 Answers


(Assuming $\mathbf{A}$ is a square matrix, of course). Here's a solution that does not invoke determinants or diagonalizability, but only the definition of eigenvalue/eigenvector, and the characterization of invertibility in terms of the nullspace. (Added for clarity: $\mathbf{N}(\mathbf{A}) = \mathrm{ker}(\mathbf{A}) = \{\mathbf{x}\mid \mathbf{A}\mathbf{x}=\mathbf{0}\}$, the nullspace/kernel of $\mathbf{A}$.)

\begin{align*} \mbox{$\mathbf{A}$ is not invertible} &\Longleftrightarrow \mathbf{N}(\mathbf{A})\neq\{\mathbf{0}\}\\ &\Longleftrightarrow \mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}\mathbf{x}=\mathbf{0}$}\\ &\Longleftrightarrow \mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}\mathbf{x}=0\mathbf{x}$}\\ &\Longleftrightarrow \mbox{there exists an eigenvector of $\mathbf{A}$ with eigenvalue $\lambda=0$}\\ &\Longleftrightarrow \mbox{$\lambda=0$ is an eigenvalue of $\mathbf{A}$.} \end{align*}

Note that this argument holds even in the case where $\mathbf{A}$ has no eigenvalues (when working over a non-algebraically closed field, of course), where the condition "the eigenvalues of $\mathbf{A}$ are all nonzero" is vacuously true.

For $\mathbf{A}$ invertible: \begin{align*} \mbox{$\lambda\neq 0$ is an eigenvalue of $\mathbf{A}$} &\Longleftrightarrow \mbox{$\lambda\neq 0$ and there exists $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{A}\mathbf{x}=\lambda\mathbf{x}$}\\ &\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}({\textstyle\frac{1}{\lambda}}\mathbf{x}) = \mathbf{x}$}\\ &\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{A}^{-1}\mathbf{A}({\textstyle\frac{1}{\lambda}}\mathbf{x}) = \mathbf{A}^{-1}\mathbf{x}$}\\ &\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq \mathbf{0}$ such that $\frac{1}{\lambda}\mathbf{x} = \mathbf{A}^{-1}\mathbf{x}$}\\ &\Longleftrightarrow\mbox{$\frac{1}{\lambda}$ is an eigenvalue of $\mathbf{A}^{-1}$.} \end{align*}
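Not part of the proof, but the two chains of equivalences are easy to sanity-check numerically (an arbitrary invertible matrix I made up, using NumPy):

```python
import numpy as np

# An arbitrary invertible matrix (all eigenvalues nonzero).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eig_A = np.linalg.eigvals(A)                     # eigenvalues of A
eig_Ainv = np.linalg.eigvals(np.linalg.inv(A))   # eigenvalues of A^{-1}

# The eigenvalues of A^{-1} should be the reciprocals of those of A.
print(np.sort(eig_A))      # [2. 3.]
print(np.sort(eig_Ainv))   # [1/3, 1/2]
```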


Here is a short proof using the fact that the eigenvalues of ${\bf A}$ are precisely the solutions in $\lambda$ to the equation $\det ({\bf A}-\lambda {\bf I})=0$.

Suppose one of the eigenvalues is zero, say $\lambda_k=0$. Then $\det ({\bf A}-\lambda_k {\bf I})=\det ({\bf A})=0$, so ${\bf A}$ is not invertible.

On the other hand, suppose all eigenvalues are nonzero. Then zero is not a solution to the equation $\det ({\bf A}-\lambda {\bf I})=0$, so $\det({\bf A}) = \det({\bf A} - 0\,{\bf I}) \neq 0$ and ${\bf A}$ is invertible.

I'll leave the second question to you.
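If it helps to see the determinant criterion concretely, here is a quick numerical check (a made-up singular example, using NumPy; not part of the proof):

```python
import numpy as np

# A matrix with a zero eigenvalue (the second row is twice the first).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigs = np.linalg.eigvals(A)
# lambda = 0 solves det(A - lambda I) = 0, so det(A) = det(A - 0*I) = 0.
print(eigs)                # one eigenvalue is 0 (up to rounding)
print(np.linalg.det(A))    # 0 (up to rounding)
```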

  • Thanks for the response; it looks like one of the ways where I had to show that $\det(\mathbf{A})$ is not equal to zero via Cramer's inverse. (2011-03-10)

A different approach from Joseph's (this one also shows what the form of $A^{-1}$ is):

Let us assume for simplicity that $A$ is diagonalizable (otherwise one can extend the proof using the Jordan normal form). The matrix $A$ can be brought to the form $A = T D T^{-1}$, with $D = \text{diag}(\lambda_i)$ a diagonal matrix containing the eigenvalues and $T$ an invertible matrix. The inverse of $A$ therefore reads $A^{-1} = (T D T^{-1})^{-1} = T D^{-1} T^{-1}.$ This inverse exists if $D^{-1}$ exists. But $D^{-1}$ is easy to calculate: it is given by $D^{-1} = \text{diag}(\lambda_i^{-1})$, which exists as long as all the $\lambda_i$ are nonzero.
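A numerical sketch of this construction (an arbitrary diagonalizable matrix chosen for illustration, using NumPy):

```python
import numpy as np

# A diagonalizable matrix: A = T D T^{-1}.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, T = np.linalg.eig(A)      # columns of T are eigenvectors
D = np.diag(lam)

# Reconstruct A, and build its inverse from T diag(1/lambda_i) T^{-1}.
A_rebuilt   = T @ D @ np.linalg.inv(T)
A_inv_built = T @ np.diag(1.0 / lam) @ np.linalg.inv(T)

print(np.allclose(A, A_rebuilt))                    # True
print(np.allclose(A_inv_built, np.linalg.inv(A)))   # True
```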

  • One can always bring a matrix to Jordan normal form, and this form is also not very hard to invert. (2011-03-10)

I'd like to add a few things to complement Arturo's and Fabian's answers.

If you take the outer product of a unit vector, $\hat{e}$ ($\lvert e \rangle$, Dirac notation), with its dual, $\hat{e}^\ast$ ($\langle e \rvert$), you get a matrix that projects vectors onto the line spanned by the unit vector, i.e.

\begin{aligned}
\mathbf{P}_e &= \hat{e} \otimes \hat{e}^\ast \\
&= \lvert e \rangle \langle e \rvert \\
&= \left( \begin{array}{ccc} \lVert e_{1} \rVert^2 & e_{1} e_2^\ast & \ldots \\ e_2 e_1^\ast & \lVert e_{2} \rVert^2 & \ldots \\ \vdots & \vdots & \ddots \end{array} \right)
\end{aligned}

where

$ \mathbf{P}_e \vec{v} = (\vec{v} \cdot \hat{e}) \hat{e}. $

In other words, $\mathbf{P}_e$ is a projection operator. Using this, you can rewrite a normal (e.g. Hermitian) matrix, $\mathbf{A}$, in terms of its eigenvalues, $\lambda_i$, and orthonormal eigenvectors, $\lvert e_i \rangle$,

$ \mathbf{A} = \sum_i \lambda_i \lvert e_i \rangle \langle e_i \rvert$

which is called the spectral decomposition. From this, it is plain to see that any eigenvector, $\hat{e}_i$, with a zero eigenvalue does not contribute to the matrix, and for any vector component in one of those spaces, $\mathbf{P}_{e_i}\vec{v} = \vec{v}_i$, $\mathbf{A} \vec{v}_i = \mathbf{0}.$

This implies that the dimension of the image of $\mathbf{A}$ is smaller than the dimension of the space it acts on. In other words, $\mathbf{A}$ does not possess full rank, and is not invertible.
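A small numerical illustration of the projector and the spectral decomposition above (a sketch with a made-up $2\times 2$ real symmetric example, using NumPy; not part of the argument):

```python
import numpy as np

# Two orthonormal (real) eigenvectors and the projector onto the first.
e1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
e2 = np.array([1.0, -1.0]) / np.sqrt(2.0)
P1 = np.outer(e1, e1)          # P_{e1} = |e1><e1|

v = np.array([2.0, 0.0])
# P_{e1} v = (v . e1) e1, and applying P_{e1} twice equals applying it once.
assert np.allclose(P1 @ v, (v @ e1) * e1)
assert np.allclose(P1 @ P1, P1)

# Spectral decomposition A = sum_i lambda_i |e_i><e_i|, with lambda_2 = 0.
A = 3.0 * np.outer(e1, e1) + 0.0 * np.outer(e2, e2)

# The zero-eigenvalue direction is annihilated, so A is rank-deficient.
print(np.allclose(A @ e2, 0.0))     # True
print(np.linalg.matrix_rank(A))     # 1
```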


Here is a different approach that gives more information. It shows that the algebraic multiplicities match up as well. I'll defer to previous answers for the fact that a matrix is invertible iff $0$ is not an eigenvalue, which will be used implicitly below. I just focus on the eigenvalues of the inverse.

Suppose $A$ is an invertible $n \times n$ matrix. Assuming (WLOG) we're working in an algebraically closed field, the eigenvalues are the roots of the polynomial $\det(A-\lambda I) = \prod_{1 \leq i \leq n} (\lambda_i - \lambda)$. Using the properties of determinants, we get:

$ \det(A^{-1} - \lambda I) = \det(A^{-1}) \det(I - \lambda A) = \det(A^{-1}) ( -\lambda)^n \det(A - \lambda^{-1}I)$

(the factorization of $\det(I - \lambda A)$ assumes $\lambda \neq 0$; since the first and last expressions in this computation are polynomials in $\lambda$, agreement for all $\lambda \neq 0$ gives agreement everywhere)

$ = \det(A^{-1}) ( -\lambda)^n \prod_{1 \leq i \leq n} (\lambda_i - \lambda^{-1}) = \det(A^{-1}) \prod_{1 \leq i \leq n} (1 - \lambda \lambda_i) $

$ = \det(A)^{-1} \prod_{1 \leq i \leq n} \lambda_i \prod_{1 \leq i \leq n} (\lambda_i^{-1} - \lambda) = \det(A)^{-1} \det(A) \prod_{1 \leq i \leq n} (\lambda_i^{-1} - \lambda)$

$ = \prod_{1 \leq i \leq n} (\lambda_i^{-1} - \lambda).$
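This identity between characteristic polynomials can be checked numerically (a made-up symmetric matrix; `np.poly` returns the characteristic polynomial coefficients of a square matrix, and `np.roots` finds their roots):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Characteristic polynomial of A^{-1} (coefficients, highest degree first);
# its roots are the eigenvalues of A^{-1}.
p_inv = np.poly(np.linalg.inv(A))
roots_inv = np.roots(p_inv)

eig_A = np.linalg.eigvals(A)
# The roots should be exactly the reciprocals of the eigenvalues of A.
print(np.allclose(np.sort(roots_inv), np.sort(1.0 / eig_A)))  # True
```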


I like the characteristic polynomial way, similar to @Joseph's approach. The characteristic polynomial $f(\lambda) = \det(A-\lambda I)$ is a polynomial of degree $n$, so it has $n$ roots (real or complex, counted with multiplicity). Now if $\lambda_k$ is an eigenvalue of $A$ then $\det(A-\lambda_k I)=0$. But this means that $f(\lambda_k)=0$, so $f$ can be factored (by the factor theorem) as:

$f(\lambda)=(\lambda_k-\lambda)g(\lambda)$

where $g(\lambda)$ is a polynomial of degree $n-1$ (and may itself satisfy $g(\lambda_k)=0$ if $\lambda_k$ is a repeated root). Continuing in this fashion, we get:

$f(\lambda)=\prod_k(\lambda_k-\lambda)$

Now $\det(A)=\det(A-0I)=f(0)=\prod_k\lambda_k$, which is nonzero exactly when all the $\lambda_k$ are nonzero. And there you have your if and only if, as well as the answer to the second part, in one go.
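A one-line numerical check of $\det(A)=\prod_k\lambda_k$ (an arbitrary example matrix, using NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

eigs = np.linalg.eigvals(A)
# det(A) = f(0) = product of the eigenvalues.
print(np.isclose(np.linalg.det(A), np.prod(eigs)))  # True
```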