  1. For a complex square matrix $M$, a maximal set of linearly independent eigenvectors for an eigenvalue $\lambda$ is determined by solving $$ (M - \lambda I) x = 0 $$ directly as a homogeneous linear system, for a basis of the solution subspace.
  2. For a complex square matrix $M$, a generalized eigenvector for an eigenvalue $\lambda$ with algebraic multiplicity $c$ is defined as a vector $u$ s.t. $$ (M - \lambda I)^c u = 0. $$ I wonder: is a generalized eigenbasis in the Jordan decomposition also determined by finding a basis of the solution subspace of $(M - \lambda I)^c u = 0$ directly, in the same way as for an eigenbasis? Or is it more difficult to solve directly as a homogeneous linear system, so that some tricks are helpful?

Thanks!
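For concreteness, point 1 can be carried out mechanically; a minimal sketch using sympy's `nullspace` (the matrix and eigenvalue here are illustrative assumptions, not from the question):

```python
import sympy as sp

# Illustrative matrix (an assumption for this sketch): eigenvalue lambda = 2.
M = sp.Matrix([[2, 1], [0, 2]])
lam = 2

# A maximal independent set of eigenvectors for lam is a basis of the
# nullspace of (M - lam I), found by solving the homogeneous system.
eigvecs = (M - lam * sp.eye(2)).nullspace()
print(eigvecs)  # a single eigenvector [1, 0]^T: geometric multiplicity 1
```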

1 Answer


Look at the matrix $$M=\pmatrix{1&1\cr0&1\cr}$$ Taking $\lambda=1$ and $c=2$, $(M-\lambda I)^c$ is the zero matrix, so any two linearly independent vectors will do as a basis for the solution space of $(M-\lambda I)^cu=0$. But that's not what you want: first you want as many linearly independent eigenvectors as you can find; then you can go hunting for generalized eigenvectors.
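A quick check of this claim with sympy (exact arithmetic assumed):

```python
import sympy as sp

M = sp.Matrix([[1, 1], [0, 1]])
N = M - sp.eye(2)          # M - lambda I with lambda = 1

# (M - I)^2 is the zero matrix, so *every* vector solves (M - I)^2 u = 0 ...
print(N**2)

# ... but a generic basis of that nullspace, say {[1, 1]^T, [1, -1]^T},
# contains no eigenvector: N v is nonzero for both basis vectors.
for v in (sp.Matrix([1, 1]), sp.Matrix([1, -1])):
    print(N * v)
```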

  • +1. Thanks! So the first method is to calculate $(M-\lambda I)^c$ first and then find a basis of its nullspace. The second method is to find as many linearly independent eigenvectors of $M$ as we can, and then go from there to find generalized eigenvectors in some way (still unknown to me). I wonder why the first method is "not what you want"? Why would you use the second method? (2012-11-26)
  • No. Look at the example. If you find a basis for the nullspace of $(M-\lambda I)^c$, the chances are it won't include any eigenvectors. So first find a basis for the nullspace of $M-\lambda I$. Then for each $v$ in that basis, find solutions of $(M-\lambda I)w=v$. Those will be generalized eigenvectors. You may have to go further and find solutions of $(M-\lambda I)x=w$ for each generalized eigenvector $w$, and so on; continue until you have enough linearly independent vectors to form a basis for the nullspace of $(M-\lambda I)^c$. Please look at the example. (2012-11-26)
  • Thanks! (1) Is the reason for avoiding the first method ("if you find a basis for the nullspace of $(M-\lambda I)^c$, the chances are it won't include any eigenvectors") that we want a generalized eigenbasis under which the matrix changes to a Jordan form? (2) What will the matrix change to under an arbitrary basis of the nullspace of $(M-\lambda I)^c$, one which does not necessarily include any eigenvectors? (2012-11-27)
  • Tim, why don't you try it and see. Take my matrix $M$, take any two linearly independent vectors, making sure neither one is an eigenvector, and see what happens to the matrix using that basis. I don't think you'll ever understand this stuff until you take an example apart, on your own, to see what makes it tick. (2012-11-27)
  • Gerry, thanks. Your $M$ has only one eigenvalue, $1$, with algebraic multiplicity $2$ and geometric multiplicity $1$; its eigenvectors are $c[1, 0]^T$ for $c \neq 0$. Let $P = [1, 0; 1, 1]$; then $P$'s columns are not eigenvectors of $M$, and $PMP^{-1} = [0, 1; -1, 2]$, which is not a Jordan form. What form is it? In general, under a generalized eigenbasis, what kind of special matrices can a matrix be similar to? (2012-11-27)
  • In general, I don't know. In this example, since $P$ can be any invertible matrix, $P^{-1}MP$ can be any matrix similar to $M$. And that means it can be any matrix with trace $2$ and determinant $1$, other than the identity matrix. (2012-11-27)
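The step-by-step procedure from the comments (eigenvectors first, then solve $(M-\lambda I)w = v$ for chain vectors) can be sketched in sympy. The particular solution $w$ below was read off by hand, and since this $M$ is already a Jordan block, the resulting basis happens to be the standard one:

```python
import sympy as sp

M = sp.Matrix([[1, 1], [0, 1]])
lam = 1
N = M - lam * sp.eye(2)

# Step 1: genuine eigenvectors = basis of the nullspace of (M - lam I).
v = N.nullspace()[0]                 # [1, 0]^T

# Step 2: extend the chain by solving (M - lam I) w = v.
w = sp.Matrix([0, 1])                # one particular solution, found by hand
assert N * w == v

# The chain (v, w) is a Jordan basis: P^{-1} M P is the Jordan form.
P = sp.Matrix.hstack(v, w)
J = P.inv() * M * P
print(J)                             # [[1, 1], [0, 1]] -- M was already Jordan
```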