
How does $$\mathbf P^{-1}\mathbf A \cdot \mathbf I \cdot \mathbf P = \mathbf A\mathbf P^{-1}\cdot \mathbf P = \mathbf A\cdot \mathbf I\ ?$$ I thought $\mathbf P^{-1}$ couldn't be "moved around" within an equation.

It's from a theorem that similar matrices have the same eigenvalues. Let $A$ and $B$ be similar. Then $B = P^{-1}AP$ for some nonsingular matrix $P$. We prove that $A$ and $B$ have the same characteristic polynomials, $p_A(\lambda)$ and $p_B(\lambda)$, respectively. We have: $$p_B(\lambda) = \det(\lambda I_n - B) = \det(\lambda I_n - P^{-1}AP) = \det(P^{-1}\lambda I_n P - P^{-1}AP).$$

  • 3
    Could you provide some context? In general, you are correct that $P^{-1}AIP = P^{-1}AP$ would not equal $A$, but under some circumstances it may. (I'm also assuming $I$ is the identity, but perhaps this is not so?) (2012-06-24)
  • 2
    See? Context makes all the difference. It's not true for an arbitrary matrix, but it is definitely true for a scalar multiple of the identity, since $\lambda I C = C\lambda I$ for all matrices $C$. (2012-06-24)
  • 0
    Alright, thanks Arturo! (2012-06-24)

1 Answer


If $A = P^{-1}BP$, then

$\begin{align*} p_A(x) &= \det(xI - A) = \det(xI - P^{-1}BP) = \det(P^{-1}xIP - P^{-1}BP) = \det \big(P^{-1}(xI - B)P\big) \\ &= \det P^{-1} \cdot \det(xI - B) \cdot \det P = \det(xI - B) \cdot \det P^{-1} \cdot \det P \\ &= \det(xI - B) \cdot (\det P)^{-1} \cdot \det P = \det(xI - B) = p_B(x) \end{align*}$
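As a quick numerical sanity check (not part of the original argument), the conclusion of this chain, $p_A(x) = p_B(x)$, can be tested with NumPy on a randomly generated similar pair; the matrices, the dimension, and the sample value `x` below are all made-up illustration data:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))   # a random matrix is almost surely invertible
B = rng.standard_normal((4, 4))
A = np.linalg.inv(P) @ B @ P      # A is similar to B by construction
x = 2.5                           # an arbitrary sample value of the indeterminate

n = A.shape[0]
pA = np.linalg.det(x * np.eye(n) - A)  # p_A(x) = det(xI - A)
pB = np.linalg.det(x * np.eye(n) - B)  # p_B(x) = det(xI - B)
assert np.isclose(pA, pB)              # the two characteristic polynomials agree at x
```

Since two degree-$n$ polynomials agreeing everywhere is what the algebraic proof establishes, checking a single sample point is only a plausibility check, not a proof.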

Note that we don't really move $P^{-1}$ around; we apply the product property of determinants, $\det(AB) = \det A \cdot \det B$. Since each determinant is a real number, the factors can then be reordered freely, because multiplication of real numbers is commutative and associative.
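The product property itself is easy to check numerically (a sketch, not part of the original answer; the random matrices and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(AB) = det(A) * det(B): both sides are plain real numbers,
# which is why the det factors can be reordered afterwards.
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
assert np.isclose(lhs, rhs)
```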

Now, why does $xI = P^{-1}xIP$?

We call scalar matrices all matrices of the form $kI$, where $k$ is a scalar (note that these matrices have zeroes everywhere except on the diagonal, whose entries are all equal to $k$). It turns out that scalar matrices commute with every matrix; that is, for any matrix $A$, $(kI)A = A(kI)$. (In fact, scalar matrices are the only matrices that commute with all matrices.)

Let's prove this equality entrywise; $n$ stands for the appropriate dimension of the matrix. The important thing to note in both cases is that the matrix $kI$ has all zeroes except in the diagonal, so the sums reduce to only one term:

$\displaystyle \big((kI)A\big)_{ij} = \sum_{l=1}^n (kI)_{il}A_{lj} = kA_{ij}$ (if $l \neq i$, then $(kI)_{il} = 0$; the only remaining term is with $l = i$)

$\displaystyle \big(A(kI)\big)_{ij} = \sum_{l=1}^n A_{il}(kI)_{lj} = A_{ij}k$ (if $l \neq j$, then $(kI)_{lj} = 0$; the only remaining term is with $l = j$)

Then we have $\big((kI)A\big)_{ij} = \big(A(kI)\big)_{ij}$ for all $i$, $j$, so $(kI)A = A(kI)$. And in our particular case, since $xI$ is a scalar matrix, $P^{-1}(xI)P = (xI)P^{-1}P = (xI)I = xI$.
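The two facts just proved, that $kI$ commutes with any $A$ and that consequently $P^{-1}(kI)P = kI$, can also be confirmed numerically (a sketch, not part of the original answer; the dimension, scalar, and random seed are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 3.7
A = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))   # almost surely invertible
kI = k * np.eye(n)                # a scalar matrix

assert np.allclose(kI @ A, A @ kI)                 # (kI)A = A(kI)
assert np.allclose(np.linalg.inv(P) @ kI @ P, kI)  # P^{-1}(kI)P = kI
```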

  • 0
    Yes, but I don't know how $xI = P^{-1}xIP$... (2012-06-24)
  • 1
    In that case, we say that *scalar matrices* (scalar multiples of the identity matrix) *commute* with all other matrices. Since $xI$ is a scalar matrix, it commutes with $P^{-1}$ and $P$, so: $$xI = xIP^{-1}P = P^{-1}xIP$$ (2012-06-24)
  • 0
    Oh, I see. Thank you! (2012-06-24)
  • 0
    @Benson I've added a proof of that fact. (2012-06-24)