
I'm curious if there's any numerical way of directly finding the eigenvectors with eigenvalue 0.

If I didn't have to do it directly, I would probably do it like this in pseudocode:

A // a rank-deficient n*n square matrix

Q R := qr(A') // QR factorization of the transpose of A, with column pivoting so that small diagonal entries of R come last

r := number of diagonal entries of R above a numerical threshold // the numerical rank of A

return Q(r+1) ... Q(n) // where Q(i) is the i-th column

The returned columns of Q (those whose corresponding diagonal entries of R fall below the threshold) would be taken as a basis for the nullspace of $A$, which is exactly the span of the eigenvectors with eigenvalue 0.
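
In NumPy/SciPy terms, what I have in mind would look roughly like the sketch below (the function name, the tolerance, and the use of scipy.linalg.qr with column pivoting are just my choices for illustration, not anything standard):

    import numpy as np
    from scipy.linalg import qr

    def nullspace_qr(A, tol=1e-12):
        """Orthonormal basis for null(A) from a column-pivoted QR of A^T."""
        A = np.asarray(A, dtype=float)
        # A^T (with columns permuted) = Q R, with |R[0,0]| >= |R[1,1]| >= ...,
        # so the numerically zero diagonal entries of R come last.
        Q, R, _ = qr(A.T, mode='full', pivoting=True)
        diag = np.abs(np.diag(R))
        r = int(np.sum(diag > tol * max(diag[0], 1.0)))  # numerical rank of A
        # A @ Q[:, i] is (a permutation of) the i-th row of R, so the trailing
        # n - r columns of Q are mapped to numerically zero rows, i.e. null(A).
        return Q[:, r:]

For a reasonably conditioned problem I'd expect A @ nullspace_qr(A) to come out on the order of machine precision.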

I've thought of the inverse power method, but it won't work as far as I can tell.

So, is there a way to find such eigenvectors "directly"? (Directly in the numerical sense that I don't have to compute vectors outside the span of those eigenvectors to get there.)

ETA: My original question is more general, but I was mostly thinking of graph Laplacian matrices when I asked this. So given a symmetric adjacency matrix $W$ and a degree matrix $D = \textrm{diag}(W \mathbf{1})$, where $\mathbf{1}$ is a column vector of ones, the graph Laplacian is defined as $\mathcal{L} = D - W$. It doesn't take much effort to show that $\mathbf{1}$ is in the nullspace of this matrix. Furthermore, any other vectors in its nullspace (equivalently, eigenvectors with eigenvalue 0) reveal the connected components of the corresponding graph. So I was wondering what a decent numerical way of finding these eigenvectors would be.
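
For concreteness, here is a toy example of the kind of matrix I mean (the 5-node adjacency matrix with two components and the 1e-10 threshold are made up for illustration), using a plain dense symmetric eigensolver:

    import numpy as np

    # Toy symmetric adjacency matrix with two connected components,
    # {0, 1, 2} and {3, 4}.
    W = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 0, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)

    D = np.diag(W @ np.ones(W.shape[0]))  # degree matrix D = diag(W 1)
    L = D - W                             # graph Laplacian

    # L is symmetric positive semidefinite, so eigh returns eigenvalues in
    # ascending order; the numerically zero ones correspond to components.
    vals, vecs = np.linalg.eigh(L)
    zero = vals < 1e-10
    print(int(zero.sum()))   # 2, the number of connected components
    print(vecs[:, zero])     # columns spanning the nullspace of L

But for a large sparse Laplacian I'd rather not compute the whole spectrum like this, which is really what the question is about.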

  • @lhf I'm mostly self-taught when it comes to this stuff, so there are gaps in what I know. And I did not know that Gaussian elimination could be used in that way. (2011-04-27)

0 Answers