
Most of the time I have come across authors who, given a matrix, proceed to find its eigenvalues by setting up $\left| A - \lambda I \right| = 0$ and then solving the resulting characteristic polynomial.

While playing around with the concept, I reasoned that $A-\lambda I$ must have rank less than $n$ (the number of variables, i.e. the order of $A$) in order for $(A-\lambda I)\mathbf{X} = \mathbf{0}$ to have a non-trivial solution.

So I went about reducing the coefficient matrix $A-\lambda I$ to echelon form: for the rank condition to hold, one of the rows must vanish, so equating the entry in the last row and last column to $0$ almost always gave me the characteristic polynomial.
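
For a concrete example of what I mean (a small illustration of my own), take $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Then
$$A - \lambda I = \begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix} \;\xrightarrow{\,R_2 \to R_2 - \frac{1}{2-\lambda}R_1\,}\; \begin{pmatrix} 2-\lambda & 1 \\ 0 & (2-\lambda) - \frac{1}{2-\lambda} \end{pmatrix},$$
and setting the last entry to zero (equivalently, its numerator $(2-\lambda)^2 - 1$) gives $\lambda^2 - 4\lambda + 3 = 0$, which is exactly the characteristic polynomial.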

My question is:

  • has anyone tried solving it this way?

  • does this way offer any computational advantage over direct calculation of the determinant as $n$ increases?

Any thoughts on this would be appreciated.

2 Answers


Reducing a matrix to upper triangular form by repeatedly adding multiples of one row to another is one of the methods for computing determinants, so it should work here. Note that you don't actually need a reduced echelon form, or even an echelon form, to obtain the determinant in this fashion.

Swapping rows flips the sign of the determinant. Multiplying a row by a scalar multiplies the determinant by that scalar. These are easy to undo at the end, since you were going to divide out the leading coefficient to make the characteristic polynomial monic anyway.

But you need to be careful about multiplying or dividing rows by polynomials: that multiplies or divides the determinant by a non-constant factor in $\lambda$, and you'll get the wrong polynomial unless you luck out with compensating errors. Really, the best way to be careful about it is simply not to do it.
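
For what it's worth, here is a minimal sketch of this approach in SymPy (the $3\times 3$ matrix is just a made-up example): reduce $A - \lambda I$ to upper triangular form using only "add a multiple of one row to another", the one row operation that leaves the determinant unchanged, and read the characteristic polynomial off the diagonal.

```python
# Minimal sketch (assuming SymPy is available); symbolic, not a numerical recipe.
import sympy as sp

lam = sp.symbols('lam')

A = sp.Matrix([[2, 1, 0],
               [1, 3, 1],
               [0, 1, 2]])
M = A - lam * sp.eye(3)
n = M.rows

for i in range(n):
    if M[i, i] == 0:
        continue  # an identically zero pivot would need a row swap; not handled in this sketch
    for j in range(i + 1, n):
        factor = sp.cancel(M[j, i] / M[i, i])                 # a rational function in lam, kept exactly
        M[j, :] = (M[j, :] - factor * M[i, :]).applyfunc(sp.simplify)

det = sp.Integer(1)
for i in range(n):
    det *= M[i, i]                                            # product of the diagonal entries
char_poly = sp.expand(sp.cancel(det))

print(char_poly)                                              # -lam**3 + 7*lam**2 - 14*lam + 8
print(sp.expand((A - lam * sp.eye(3)).det()))                 # same polynomial, via the determinant
```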


Your reasoning is correct.

Numerically, however, computing the row-echelon form via Gaussian elimination can be an unstable process, especially without pivoting.
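
To see what "unstable" can mean here, a minimal sketch (a textbook-style $2\times 2$ example with made-up numbers, not tied to the question): eliminating with a tiny pivot destroys the answer in floating point, while swapping rows first (partial pivoting) keeps it.

```python
eps = 1e-20

# Solve [[eps, 1], [1, 1]] x = [1, 2]; the exact solution is very close to (1, 1).

# Naive elimination, using eps as the pivot.
m = 1.0 / eps                 # huge multiplier
a22 = 1.0 - m * 1.0           # the original "1" is completely lost to rounding
b2 = 2.0 - m * 1.0
x2 = b2 / a22
x1 = (1.0 - x2) / eps
print("no pivoting:  ", x1, x2)   # prints 0.0 1.0 -- x1 is badly wrong

# Same system with the rows swapped, so the pivot is 1 instead of eps.
m = eps / 1.0
a22 = 1.0 - m * 1.0
b2 = 1.0 - m * 2.0
x2 = b2 / a22
x1 = (2.0 - x2) / 1.0
print("with pivoting:", x1, x2)   # both very close to 1
```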

  • You are mixing things: just because you have some closed-form formula, it does not mean that you don't need numerical methods, e.g. matrix inversion. (2012-05-27)