24

It's a simple exercise to show that two similar matrices have the same eigenvalues and eigenvectors (my favorite way is noting that they represent the same linear transformation in different bases).

However, to show that two matrices have the same characteristic polynomial, it does not suffice to show that they have the same eigenvalues and eigenvectors; one also needs to say something smart about the algebraic multiplicities of the eigenvalues. Moreover, we might be working over a field that is not algebraically closed and hence simply "doesn't have" all the eigenvalues. This can be overcome, of course, by working in the algebraic closure of the field, but it complicates the explanation.

I'm looking for a proof that is as simple and self-contained as possible (the goal is to write an expository article on the subject, so clarity is the most important thing, not efficiency).
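To make the point about missing eigenvalues concrete: the real rotation matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ has characteristic polynomial $x^2 + 1$, which has no real roots, yet the polynomial itself is perfectly well defined and unchanged by conjugation. A minimal sympy sketch of this (my own choice of example matrices, assuming sympy is available):

```python
# Exact symbolic check that conjugation preserves the characteristic polynomial,
# even when the matrix has no eigenvalues in the base field.
from sympy import Matrix, symbols, eye, simplify

x = symbols('x')

A = Matrix([[0, -1],
            [1,  0]])      # rotation by 90 degrees; char. poly. x**2 + 1, no real roots
M = Matrix([[1, 2],
            [0, 1]])       # an arbitrary invertible change-of-basis matrix
B = M.inv() * A * M        # a matrix similar to A

char_A = (x * eye(2) - A).det()
char_B = (x * eye(2) - B).det()

print(char_A)                            # x**2 + 1
print(simplify(char_A - char_B) == 0)    # True: the characteristic polynomials agree
```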

  • 2
How do you prove that two similar matrices have the same eigenvectors? (2011-12-02)
  • 0
Take the matrix $A=\operatorname{diag}(2,1)$. Then this is similar to $B=\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix} \operatorname{diag}(2,1) \begin{pmatrix} 1 & -1 \\ 0 & 1\end{pmatrix}=\begin{pmatrix} 2 & -1 \\ 0 & 1\end{pmatrix}$. Now, $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ is an eigenvector of $A$ but not of $B$. (2011-12-02)
  • 0
Sorry, I messed up the LaTeX typing... (2011-12-02)
  • 1
My point is that similar matrices do not, in general, have identical eigenvectors. (2011-12-02)
  • 0
$A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$, $T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, $B = T A T^{-1}$. Check that $e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ is an eigenvector of $A$ but not of $B$ (a symbolic check of this appears right after these comments). (2011-12-02)
  • 0
Indeed, this is just plain wrong; what is correct is that two similar matrices can be viewed as representing the same linear transformation in different bases, and then their eigenvectors are "the same" in the sense that they are two representations (in the different bases) of the coordinates of the eigenvectors of the transformation. (2011-12-03)
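For what it's worth, the counterexample in the comments can be checked symbolically; a minimal sympy sketch (assuming sympy is available):

```python
# Checking the matrices from the comments: A and B = T*A*T^(-1) share the
# characteristic polynomial, but e2 is an eigenvector of A only.
from sympy import Matrix, symbols, eye

x = symbols('x')

A = Matrix([[2, 0],
            [0, 1]])       # A = diag(2, 1)
T = Matrix([[1, 1],
            [0, 1]])
B = T * A * T.inv()        # B = [[2, -1], [0, 1]]

e2 = Matrix([0, 1])

print((x * eye(2) - A).det())   # (x - 2)*(x - 1)
print((x * eye(2) - B).det())   # (x - 2)*(x - 1), the same characteristic polynomial
print(A * e2)                   # Matrix([[0], [1]]): e2 is an eigenvector (eigenvalue 1)
print(B * e2)                   # Matrix([[-1], [1]]): not a multiple of e2
```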

1 Answer

47

If you define the characteristic polynomial of a matrix $A$ to be $\det(xI - A)$, then for $M$ invertible we have:

$\det(xI - M^{-1} A M)$

$= \det(M^{-1} xI M - M^{-1} A M)$

$= \det(M^{-1} (xI-A) M)$

$= \det (M^{-1}) \det(xI-A) \det(M)$

$=\det(xI - A)$
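As a sanity check, the same computation can be carried out symbolically, e.g. with sympy, for generic $2 \times 2$ matrices (a minimal sketch; the entries of $M$ are symbols and $M$ is assumed invertible, i.e. $ps - qr \neq 0$):

```python
# Symbolic check of det(xI - M^(-1) A M) = det(xI - A) for generic 2x2 matrices.
from sympy import Matrix, symbols, eye, cancel

x, a, b, c, d, p, q, r, s = symbols('x a b c d p q r s')

A = Matrix([[a, b],
            [c, d]])
M = Matrix([[p, q],
            [r, s]])                       # assumed invertible: p*s - q*r != 0

lhs = (x * eye(2) - M.inv() * A * M).det()
rhs = (x * eye(2) - A).det()

print(cancel(lhs - rhs))                   # 0
```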

  • 0
That was simple. Very simple. Thank you! (2011-12-02)
  • 4
This proof should be standard in any text, in order to even *define* the characteristic polynomial of a vector space endomorphism (as opposed to that of a matrix). Certainly you don't want to have to refer to eigenvalues and algebraic multiplicities in order to define the characteristic polynomial of an endomorphism. (2011-12-02)
  • 0
@Marc, how would you define determinants of general vector space endomorphisms? (2011-12-02)
  • 2
@Henning Makholm: Maybe my comment was not so clear. I would define the characteristic polynomial of a matrix in the usual way, then prove that it is invariant under similarity, which allows defining the characteristic polynomial of a vector space endomorphism as that of its matrix in any basis. One can define the determinant of general vector space endomorphisms without using bases, but I don't think that is very useful for characteristic polynomials, since there one needs a determinant over $K[X]$, not over a field. (2011-12-02)
  • 0
@Marc, then your remarks are implicitly limited to _finite-dimensional_ vector spaces, or am I missing something? (2011-12-02)
  • 0
@Henning Makholm: Yes, I was assuming finite-dimensional; I thought that was clear in the context of characteristic polynomials. The point of (my use of) the term endomorphism is just that it doesn't require a basis. It's the same as the linear transformation the OP mentions, except that it also indicates a map from a space to itself. Sorry if my terminology confused you. (2011-12-02)
  • 1
I don't understand where the $M^{-1}xIM$ comes from in the second step of the proof. (2012-06-24)
  • 2
@RobertS.Barnes, the scalar matrix $xI$ commutes with $M$, so $xI = M^{-1}(xI)M$. (2012-06-24)
  • 0
So cool @lhf -- thanks :-) (2015-11-20)
  • 0
@MarcvanLeeuwen Won't the simplest construction work? $K$ naturally embeds in $K[X]$, and any $K$-module can be enlarged to a $K[X]$-module (by taking the tensor product with $K[X]$, I'm not sure), and an endomorphism $A$ induces an endomorphism of this $K[X]$-module, and then the determinant of $xI - A$ will be the characteristic polynomial. (2016-10-21)