
If $A = \frac{1}{2} \times \begin{pmatrix} -1& -\sqrt3 & 0 \\ -\sqrt3& 1 &0 \\ 0 & 0 & 0 \end{pmatrix} \text{ and } E = \frac{1}{2} \times \begin{pmatrix} 1& \sqrt3 & 0 \\ -\sqrt3& 1 &0 \\ 0 & 0 & 1 \end{pmatrix}$ then show that $E^{-1}AE$ is a diagonal matrix.

Can we prove this without actually computing the inverse and the product? The actual problem consists of four options of matrices, and I think it is a bit tedious for an objective-type question to do the actual computation, so I am more interested in using some property instead. I know that $E^{T}AE$ is symmetric or skew-symmetric according to whether $A$ is symmetric or skew-symmetric, but is there anything like this for inverses?

  • The two matrices are very similar to rotation matrices ( https://en.wikipedia.org/wiki/Rotation_matrix ). This may give you a clue. (2014-09-02)

2 Answers


$E^{-1} A E$ will be diagonal if the columns of $E$ are eigenvectors of $A$: in that case $AE=ED$, where $D$ is the diagonal matrix whose entries are the corresponding eigenvalues, so $E^{-1}AE=D$.
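For the matrices in the question this is quick to check by hand. Writing $M=2A$ and letting $\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3$ be the columns of $2E$ (the factors of $\frac{1}{2}$ only rescale things), $$M\mathbf{v}_1=\begin{pmatrix} -1& -\sqrt3 & 0 \\ -\sqrt3& 1 &0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix}1\\ -\sqrt3\\ 0\end{pmatrix}=\begin{pmatrix}2\\ -2\sqrt3\\ 0\end{pmatrix}=2\mathbf{v}_1, \qquad M\mathbf{v}_2=-2\mathbf{v}_2, \qquad M\mathbf{v}_3=\mathbf{0},$$ so $A\mathbf{v}_1=\mathbf{v}_1$, $A\mathbf{v}_2=-\mathbf{v}_2$, $A\mathbf{v}_3=\mathbf{0}$, and therefore $E^{-1}AE=\operatorname{diag}(1,-1,0)$.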

  • @Debanjan: They are parallel if one of them is a scalar multiple of the other; that is, if $(w_1,w_2)=(\lambda v_1, \lambda v_2)$. Your example is a bit confusing, because what you call "eigenvector of this matrix" is a list consisting of *two* eigenvectors. If you take the matrix times the first of these vectors, then you will get $1+\sqrt{6}$ times that first vector, and if you take the matrix times the second of these vectors, then you will get $1-\sqrt{6}$ times that second vector. (It's easier to see if you take a matrix with integer eigenvalues instead, say `{{1,-2},{1,4}}`.) (2011-04-10)

This is essentially Hans Lundmark's answer, without explicit mention of eigenvectors.

I will ignore the factors of $\frac{1}{2}$: the factor on $E$ cancels against the one on $E^{-1}$, and the factor on $A$ simply multiplies the final result $E^{-1}AE$ by $\frac{1}{2}$.
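To spell that out, write $E=\frac{1}{2}F$ and $A=\frac{1}{2}M$, where $F$ and $M$ denote the matrices without their scalar factors (these names are used only in this remark); then $$E^{-1}AE=\left(\tfrac{1}{2}F\right)^{-1}\left(\tfrac{1}{2}M\right)\left(\tfrac{1}{2}F\right)=2F^{-1}\cdot\tfrac{1}{2}M\cdot\tfrac{1}{2}F=\tfrac{1}{2}\,F^{-1}MF,$$ so the answer computed without the factors is just twice the true $E^{-1}AE$.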

Since $E$ is invertible, its columns form a basis for $\mathbb{R}^3$; call this basis $\beta$.

Every vector in $\mathbb{R}^3$ can be written as a linear combination of elements of $\beta$. We can keep track of this by the use of "coordinate vectors": if $\mathbf{v}$ is a vector, and $\mathbf{v} = \alpha_1\left(\begin{array}{r}1\\ -\sqrt{3}\\0\end{array}\right) + \alpha_2\left(\begin{array}{r}\sqrt{3}\\1\\0\end{array}\right) + \alpha_3\left(\begin{array}{r}0\\0\\1\end{array}\right)$ then, since we know the vectors of $\beta$, in order to tell you which vector $\mathbf{v}$ is, I only need to tell you the values $\alpha_1$, $\alpha_2$, and $\alpha_3$: you know how to use them to get $\mathbf{v}$. This is called the "coordinate vector" of $\mathbf{v}$ with respect to $\beta$, $[\mathbf{v}]_{\beta} = \left(\begin{array}{c}\alpha_1\\ \alpha_2\\ \alpha_3\end{array}\right).$ The usual expression of vectors is in fact the coordinate vector with respect to the standard basis.
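For example (with a vector picked just for illustration), $$\mathbf{v}=\left(\begin{array}{c}1+\sqrt{3}\\ 1-\sqrt{3}\\ 2\end{array}\right) = 1\left(\begin{array}{r}1\\ -\sqrt{3}\\ 0\end{array}\right) + 1\left(\begin{array}{r}\sqrt{3}\\ 1\\ 0\end{array}\right) + 2\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right), \qquad\text{so}\qquad [\mathbf{v}]_{\beta}=\left(\begin{array}{c}1\\ 1\\ 2\end{array}\right).$$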

When you compute $E^{-1}AE$, you are computing a matrix that, when multiplied by a coordinate vector $[\mathbf{v}]_{\beta}$, will give you the coordinate vector of $A\mathbf{v}$; to see this, notice that if $[\mathbf{v}]_{\beta}=(\alpha_1,\alpha_2,\alpha_3)^T$, then $E[\mathbf{v}]_{\beta} = \mathbf{v}$: $\left(\begin{array}{rrr} 1 & \sqrt{3} & 0\\ -\sqrt{3} & 1 & 0\\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{c} \alpha_1\\ \alpha_2\\ \alpha_3 \end{array}\right) = \alpha_1\left(\begin{array}{r} 1 \\ -\sqrt{3} \\ 0\end{array}\right) + \alpha_2\left(\begin{array}{r} \sqrt{3}\\ 1\\ 0\end{array}\right) + \alpha_3\left(\begin{array}{r}0\\ 0\\ 1\end{array}\right) = \mathbf{v}.$

It is not hard to check that $E^{-1}\mathbf{v}$ will in fact give you the coordinate vector of $\mathbf{v}$ relative to $\beta$: since $E[\mathbf{v}]_{\beta} = \mathbf{v}$ and $E$ is invertible, multiplying both sides on the left by $E^{-1}$ gives $[\mathbf{v}]_{\beta} = E^{-1}\mathbf{v}$.

So one way to think about $E^{-1}AE$ is the following: you are given a coordinate vector $[\mathbf{v}]_{\beta}$; multiplying by $E$ gives you $\mathbf{v}$. Multiplying by $A$ gives you $A\mathbf{v}$; multiplying by $E^{-1}$ gives you the coordinate vector of $A\mathbf{v}$, that is $[A\mathbf{v}]_{\beta}$.
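In symbols, the three steps compose to $$\left(E^{-1}AE\right)[\mathbf{v}]_{\beta} = E^{-1}A\left(E[\mathbf{v}]_{\beta}\right) = E^{-1}\left(A\mathbf{v}\right) = [A\mathbf{v}]_{\beta}.$$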

When is a $3\times 3$ matrix $M$ a diagonal matrix? It is a diagonal matrix if and only if $M\mathbf{e}_1 = \lambda_1\mathbf{e}_1$, $M\mathbf{e}_2 = \lambda_2\mathbf{e}_2$, and $M\mathbf{e}_3 = \lambda_3\mathbf{e}_3$, where $\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3$ is the standard basis for $\mathbb{R}^3$ and $\lambda_1,\lambda_2,\lambda_3$ are scalars; this is because $M\mathbf{e}_i$ is just the $i$th column of $M$.

So, under what conditions will $E^{-1}AE$ be a diagonal matrix? Precisely when the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ whose coordinate vectors are $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$ are each mapped by $A$ to a scalar multiple of themselves.

But what vector $\mathbf{v}_1$ has coordinate vector $[\mathbf{v}_1]_{\beta}=\mathbf{e}_1$? The vector $\mathbf{v}_1 = 1\left(\begin{array}{r}1\\ -\sqrt{3}\\ 0\end{array}\right) + 0\left(\begin{array}{r} \sqrt{3}\\1\\0\end{array}\right) + 0\left(\begin{array}{c}0\\0\\1\end{array}\right) = \left(\begin{array}{r}1\\-\sqrt{3}\\0\end{array}\right).$ That is, the first column of $E$. Likewise, the vector $\mathbf{v}_2$ with $[\mathbf{v}_2]_{\beta} = \mathbf{e}_2$ is the second column of $E$, and $\mathbf{v}_3$ is the third column of $E$.

So, $E^{-1}AE$ is diagonal if and only if $A\mathbf{v}_1=\lambda_1\mathbf{v}_1$, $A\mathbf{v}_2=\lambda_2\mathbf{v}_2$, and $A\mathbf{v}_3 = \lambda_3\mathbf{v}_3$, where $\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3$ are the columns of $E$, and $\lambda_1,\lambda_2,\lambda_3$ are some scalars.

To check this, simply compute the product $AE$ (you are given both matrices in any case), and compare the columns of the result to the columns of $E$. If the $i$th column of $AE$ is a scalar multiple of the $i$th column of $E$ for $i=1,2,3$, then $E^{-1}AE$ will be a diagonal matrix. Note: this all assumes that $E$ is in fact invertible.
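Carrying this out for the matrices in the question (again writing $F=2E$ and $M=2A$ to suppress the scalar factors), $$MF=\begin{pmatrix} -1& -\sqrt3 & 0 \\ -\sqrt3& 1 &0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 1& \sqrt3 & 0 \\ -\sqrt3& 1 &0 \\ 0 & 0 & 1 \end{pmatrix}=\begin{pmatrix} 2& -2\sqrt3 & 0 \\ -2\sqrt3& -2 &0 \\ 0 & 0 & 0 \end{pmatrix},$$ and the columns of the product are $2$, $-2$, and $0$ times the corresponding columns of $F$. Hence $E^{-1}AE=\frac{1}{2}F^{-1}MF=\frac{1}{2}\operatorname{diag}(2,-2,0)=\operatorname{diag}(1,-1,0)$.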

This leads to the definition:

Definition. Let $A$ be an $n\times n$ matrix with coefficients in $F$. A vector $\mathbf{v}\in F^n$ is an eigenvector of $A$ if and only if $\mathbf{v}\neq\mathbf{0}$, and there exists a scalar $\lambda$ such that $A\mathbf{v}=\lambda\mathbf{v}$.

(We ask $\mathbf{v}\neq\mathbf{0}$ because $\mathbf{0}$ always satisfies the equation with every scalar, and it will never show up as a column of an invertible matrix).

So, given an $n\times n$ matrix $A$ and an invertible $n\times n$ matrix $E$, $E^{-1}AE$ is a diagonal matrix if and only if each column of $E$ is an eigenvector of $A$.

Checking whether a given vector is an eigenvector is just a matter of multiplying by $A$. So this really comes down to computing $AE$: if the $i$th column of $AE$ is a scalar multiple of the $i$th column of $E$ for $i=1,2,3$, then $E^{-1}AE$ is diagonal. The factors of $\frac{1}{2}$ don't matter for this; they only change the scalars that appear.
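For anyone who wants a quick numerical sanity check, here is a small NumPy sketch (my own illustration, assuming a standard Python/NumPy setup; the `inv` call at the end is only there to confirm the result that the argument above obtains without inverses):

```python
import numpy as np

s3 = np.sqrt(3.0)
A = 0.5 * np.array([[-1.0,  -s3, 0.0],
                    [ -s3,  1.0, 0.0],
                    [ 0.0,  0.0, 0.0]])
E = 0.5 * np.array([[ 1.0,   s3, 0.0],
                    [ -s3,  1.0, 0.0],
                    [ 0.0,  0.0, 1.0]])

# Eigenvector test: the i-th column of A @ E should be a scalar
# multiple of the i-th column of E (here the scalars are 1, -1, 0).
AE = A @ E
print(AE)
print(E)

# Confirmation via the inverse (which the argument above avoids):
D = np.linalg.inv(E) @ A @ E
print(np.round(D, 12))  # expected: diag(1, -1, 0)
```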