
$1)$ For each distinct real eigenvalue $\lambda$ of a $3 \times 3$ matrix $A$, it turns out that the cross product of the transposes of any two linearly independent rows of $A-\lambda I$ gives a corresponding eigenvector (and thus easily the corresponding eigenspace, since in this case the eigenspace is an eigenline). But why does this method work?

$2)$ I think the above may be generalisable to any $3\times 3$ matrix with only real eigenvalues: substitute an eigenvalue $\lambda$ of $A$ into $A-\lambda I$, then take the cross products of the transposes of two distinct pairs of rows of $A-\lambda I$. Only two possibilities exist:

$(a)$ If at least one is nonzero, then that cross product is a corresponding eigenvector, and the eigenspace (an eigenline) follows easily.

$(b)$ If both are zero, then the eigenspace is the plane orthogonal to any nonzero row of $A-\lambda I$. Is this generalisation valid, and if so, why does the method work?

$3)$ What about the final case, in which two complex eigenvalues exist?
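(For concreteness, here is a quick numerical sketch of the recipe in parts $1)$ and $2)$; the triangular test matrix and the tolerance are my own choices, not part of the question.)

```python
import numpy as np

# Sketch of the recipe: for each eigenvalue lam, cross two linearly
# independent rows of A - lam*I to get an eigenvector.
# The matrix below is an assumed example with distinct real eigenvalues.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 5.0]])   # triangular, eigenvalues 2, 3, 5

for lam in np.linalg.eigvals(A).real:
    B = A - lam * np.eye(3)
    # try each pair of rows until the cross product is nonzero,
    # i.e. until the pair is linearly independent
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        v = np.cross(B[i], B[j])
        if np.linalg.norm(v) > 1e-8:
            break
    print(lam, np.allclose(A @ v, lam * v))   # True: A v = lam v
```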

  • I was hinting that you should look at the case of a general triangular matrix, since a matrix with distinct real eigenvalues is similar to a triangular matrix... (2012-08-04)

4 Answers

6

If $x$ is an eigenvector with corresponding eigenvalue $\lambda$, then $(A - \lambda I)x = 0$, and so $x$ lies in the null space of $A - \lambda I$. Since the null space is perpendicular to the subspace spanned by any two linearly independent rows of $A - \lambda I$, the cross product of those two rows will give you this vector (up to a scalar multiple).
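A minimal numerical sketch of this argument, assuming a made-up triangular matrix (the example and tolerance are mine, not the answerer's):

```python
import numpy as np

# An eigenvector x for eigenvalue lam satisfies (A - lam*I) x = 0, so x is
# orthogonal to every row of B = A - lam*I; crossing two independent rows
# therefore lands in the null space.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])    # triangular, eigenvalues 4, 2, 1
lam = 2.0
B = A - lam * np.eye(3)
x = np.cross(B[0], B[2])           # rows 0 and 2 are linearly independent here
print(B @ x)                       # ~ [0, 0, 0]: x is in the null space of B
print(np.allclose(A @ x, lam * x)) # True: x is an eigenvector for lam = 2
```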

  • So, to answer my own Question 1 (guided by your hint): the two essential points as to why the technique works are (i) $\lambda$ has geometric multiplicity $1$, and (ii) $\operatorname{Nul}(A-\lambda I)$ is orthogonal to $\operatorname{Row}(A-\lambda I)$. (2012-08-06)
2

It doesn't seem correct to me:

Take $ A= \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & 5 & 2 \end{bmatrix} = \begin{bmatrix} a_1^T \\ a_2^T \\ a_3^T \end{bmatrix}.$ $A$ has real eigenvalues $\{-2,1,3\}$. $a_1 \times a_2 = (1, 0, 0)^T$, but $A (a_1 \times a_2) = (0,0,-6)^T$, which is clearly not an eigenvector.

It is not a left eigenvector either: $(a_1 \times a_2)^T A = (0,1,0)$, which is not proportional to $(a_1 \times a_2)^T$.
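For what it's worth, this counterexample is easy to reproduce numerically; a NumPy sketch:

```python
import numpy as np

# Crossing rows of A itself (as in the original, uncorrected question)
# does not give an eigenvector.
A = np.array([[ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [-6.0, 5.0, 2.0]])
v = np.cross(A[0], A[1])   # = (1, 0, 0)
print(A @ v)               # (0, 0, -6): not a multiple of v
print(v @ A)               # (0, 1, 0): not a left eigenvector either
```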

Here is the answer to the modified question:

Let $B = A-\lambda I = \begin{bmatrix} b_1^T \\ b_2^T \\ b_3^T \end{bmatrix}$, where $\lambda$ is an eigenvalue of $A$, and suppose $b_1,b_2$ are linearly independent. Since $\det B =0$, we have $b_3 \in \operatorname{span} \{b_1,b_2\}$. The vector $b_1 \times b_2 $ is orthogonal to $b_1$ and $b_2$, and hence to $b_3$, since $b_3 \in \operatorname{span} \{b_1,b_2\}$. Consequently $B (b_1 \times b_2) = 0$, or equivalently, $(A-\lambda I)(b_1 \times b_2) = 0$, from which the answer follows.
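A sketch checking the corrected recipe on the same matrix, with the eigenvalues $\{-2,1,3\}$ hard-coded:

```python
import numpy as np

# The corrected recipe: cross two linearly independent rows of B = A - lam*I.
A = np.array([[ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [-6.0, 5.0, 2.0]])
for lam in (-2.0, 1.0, 3.0):                     # the eigenvalues found above
    B = A - lam * np.eye(3)
    v = np.cross(B[0], B[1])                     # b1 x b2 (independent for each lam)
    print(lam, v, np.allclose(A @ v, lam * v))   # True for every eigenvalue
```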

  • Hi, your answer was enlightening, but Lieven's comment tipped me off to read up on the fundamental subspaces and their orthogonality, and from there I found the intuition that I needed. Although now I think it may be bad etiquette to unselect an answer that was perfectly fine, especially since I think your answer was deliberately non-technical for my benefit. My sincere apologies! (2012-08-05)
1

Now that you have corrected your question, it is a special case of the standard relation for the adjugate matrix, sometimes called the classical adjoint, among other names. Anyway, beginning with some $n$ by $n$ matrix $B,$ we calculate certain $n-1$ by $n-1$ subdeterminants called cofactors and throw in a transpose and some judicious $\pm 1$ factors to create a matrix $\mbox{adj} \; B$ with the property $ B \;\cdot \mbox{adj} \; B = \mbox{adj} \; B \cdot B = (\det B) \cdot I. $ If we insert your $ B = A - \lambda I, $ where $\lambda$ is an eigenvalue of $A,$ we have $\det B = 0.$ Your construction with the cross product amounts to taking a nonzero column of $ \mbox{adj} \, (A - \lambda I), $ since $ (A - \lambda I) \cdot \mbox{adj} \, (A - \lambda I) = 0. $

EDIT: note that the field containing the entries of $A$ and/or the eigenvalues does not matter much. Furthermore, the relation to the cross product is the well-known description of the cross product via cofactors, that is, three $2$ by $2$ determinants.
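As a sketch, one can also build $\mbox{adj}\,(A-\lambda I)$ directly from cofactors and check that a nonzero column is an eigenvector; the `adjugate` helper below is my own illustrative implementation, not a library routine (the test matrix is the one from the previous answer):

```python
import numpy as np

def adjugate(M):
    """Classical adjoint of a 3x3 matrix: transpose of the cofactor matrix."""
    C = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [-6.0, 5.0, 2.0]])
lam = 3.0                             # an eigenvalue of A
B = A - lam * np.eye(3)
adjB = adjugate(B)
print(np.allclose(B @ adjB, 0.0))     # True, since det(B) = 0
v = adjB[:, 0]                        # a column of adj(B); nonzero in this case
print(np.allclose(A @ v, lam * v))    # True: v is an eigenvector
```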

0

This is wrong. The cross product $x$ of two rows of $A$ is orthogonal to those two rows, so the corresponding two components of $Ax$ are zero; if $Ax = \lambda x$ with $\lambda \neq 0$, the corresponding two components of $x$ must then be zero as well, and they need not be.

  • Sorry, I made a silly mistake. I meant to say to take the two linearly independent rows from $A-\lambda I$ instead. Please refer to the edited question. (2012-08-04)