$1)$ For each distinct real eigenvalue $\lambda$ of a $3 \times 3$ matrix $A$, it turns out that the cross product of the transposes of any two linearly independent rows of $A-\lambda I$ gives a corresponding eigenvector (and hence the corresponding eigenspace, since in this case each eigenspace is an eigenline). But why does this method work?
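To make question $1)$ concrete, here is a minimal numerical sketch (using NumPy; the matrix $A$ below is a made-up example, not from the question). The key observation is that $(A-\lambda I)v=0$ says $v$ is orthogonal to every row of $A-\lambda I$, and the cross product of two independent rows is orthogonal to both, hence parallel to $v$:

```python
import numpy as np

# Hypothetical example: a triangular matrix with distinct real eigenvalues 2, 3, 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

lam = 3.0                      # one eigenvalue (A is triangular, so read off the diagonal)
M = A - lam * np.eye(3)

# Each row r of M satisfies r . v = 0 for an eigenvector v, so v is orthogonal
# to every row; the cross product of two independent rows is then parallel to v.
v = np.cross(M[0], M[1])

print(np.allclose(A @ v, lam * v))   # the eigenvector equation holds
print(np.linalg.norm(v) > 0)         # and v is nonzero
```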
$2)$ I think the above may be generalisable to any $3\times 3$ matrix with only real eigenvalues: substitute an eigenvalue of $A$ into $A-\lambda I$, then take the cross products of the transposes of any two pairs of rows of $A-\lambda I$. Only two possibilities exist:
$(a)$ If at least one is nonzero, then any nonzero cross product gives a corresponding eigenvector and hence the eigenspace.
$(b)$ If both are zero, then the eigenspace is the plane orthogonal to any nonzero row of $A-\lambda I$. Is this generalisation valid, and if so, why does the method work?
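Case $(b)$ can also be checked numerically. A sketch (again with a made-up example, a diagonal matrix whose repeated eigenvalue $1$ has a plane as eigenspace): all pairwise cross products of rows vanishing means the rows are pairwise parallel, so $A-\lambda I$ has rank $1$ and its kernel is the plane orthogonal to any nonzero row.

```python
import numpy as np

# Hypothetical example: repeated eigenvalue 1 whose eigenspace is the plane z = 0.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
lam = 1.0
M = A - lam * np.eye(3)

# Case (b): every pairwise cross product of rows of M vanishes.
crosses = [np.cross(M[i], M[j]) for i, j in [(0, 1), (0, 2), (1, 2)]]
print(all(np.allclose(c, np.zeros(3)) for c in crosses))  # all zero

# The eigenplane is the orthogonal complement of any nonzero row of M.
n = M[2]                       # a nonzero row; normal to the eigenplane
for v in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
    # v lies in the plane n . v = 0 and is indeed an eigenvector for lam.
    assert np.isclose(n @ v, 0) and np.allclose(A @ v, lam * v)
```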
$3)$ What about the final case, in which two complex (conjugate) eigenvalues exist?
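One quick numerical experiment for question $3)$, assuming the *formal* (bilinear, non-conjugating) cross product is used over $\mathbb{C}$. The example matrix is a made-up rotation matrix with eigenvalues $i$, $-i$, $1$; note the caveat that the two chosen rows must be independent over $\mathbb{C}$ (here rows $0$ and $1$ of $A-iI$ are complex multiples of each other, so their cross product vanishes):

```python
import numpy as np

# Hypothetical example: rotation about the z-axis, eigenvalues i, -i, 1.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
lam = 1j
M = A - lam * np.eye(3)

# Caveat: rows 0 and 1 of M are complex multiples of each other, so their
# cross product is zero; pick two rows independent over the complex numbers.
v = np.cross(M[0], M[2])       # formal bilinear cross product, no conjugation

print(np.allclose(A @ v, lam * v))  # the complex eigenvector equation holds
print(np.linalg.norm(v) > 0)        # and v is nonzero
```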