
I'm learning Linear Algebra using MIT's Open Courseware Course 18.06

Quite often, the professor says "... assuming that the matrix is invertible ...".

Somewhere in the lecture he says that computing the determinant of an $n \times n$ matrix by cofactor expansion takes on the order of $n!$ operations, where an operation is a multiplication and a subtraction.

Is there a more efficient way? If the aim is to get the inverse, rather than just determine invertibility, what is the most efficient way to do this?

  • 2
    Note that if your matrix is fully symbolic, then the size of the 'answer' for the determinant is indeed $O(n!)$. – 2010-08-04

3 Answers

15

Gauss-Jordan elimination can be used to determine whether a matrix is invertible, and it runs in polynomial (in fact, cubic) time. The same method (applying the same row operations to the identity matrix) computes the inverse in polynomial time as well.

  • 5
    That assumes that the matrix coefficients are of uniformly bounded size and do not grow at all when doing GJ. This is false for most rings! In other words, you are assuming that coefficient operations are O(1), which is almost always false in bit-complexity [exception: $Z_q$]. – 2010-08-04
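As a rough illustration of this answer, here is a minimal Gauss-Jordan sketch in Python/NumPy (the function name and the tolerance `eps` are my own choices, not from the course). It augments $A$ with the identity, applies the same row operations to both halves, and reports singularity when no usable pivot remains:

```python
import numpy as np

def invert_gauss_jordan(a, eps=1e-12):
    """Gauss-Jordan elimination with partial pivoting: O(n^3) operations.

    Returns the inverse of a, or None if some pivot is (numerically)
    zero, i.e. the matrix is singular to within the tolerance eps.
    """
    a = np.array(a, dtype=float)
    n = a.shape[0]
    aug = np.hstack([a, np.eye(n)])          # the augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: use the largest remaining entry in this column.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if abs(aug[pivot, col]) < eps:
            return None                      # no usable pivot: singular
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]            # scale the pivot row to 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]  # clear the column
    return aug[:, n:]                        # [I | A^{-1}]
```

For example, `invert_gauss_jordan([[2.0, 1.0], [1.0, 1.0]])` returns `[[1, -1], [-1, 2]]`, while a rank-deficient input like `[[1, 2], [2, 4]]` returns `None`.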
13

A matrix is invertible iff its determinant is non-zero.

There are algorithms which find the determinant in slightly worse than $O(n^2)$.

  • 3
    See my comment on Akhil Matthew's answer -- these are not bit-complexity results, these are results assuming O(1) coefficient operations. For example, this does not work for random integer matrices (because of coefficient growth). – 2010-08-04
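In floating point, the usual way to realize this determinant test is through an LU factorization, e.g. NumPy's `slogdet` (the wrapper below is my own illustrative helper, not something from the answer). The LU route costs $O(n^3)$ coefficient operations rather than the $O(n!)$ of cofactor expansion, subject to the caveat in the comment above about coefficient growth in exact arithmetic:

```python
import numpy as np

def is_invertible_det(a):
    """Determinant-based invertibility test via LU factorization.

    np.linalg.slogdet returns (sign, log|det|); it avoids the
    overflow/underflow that plain det() can hit for larger matrices.
    A sign of 0 means the determinant is exactly zero in floating point.
    """
    sign, logdet = np.linalg.slogdet(np.asarray(a, dtype=float))
    return bool(sign != 0)
```

Note that in floating point the determinant rarely comes out exactly zero for a nearly singular matrix, which is one reason the third answer below prefers other diagnostics.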
6

Computing the determinant and Gaussian elimination are both fine if you are using exact computations, for instance if the entries of your matrix are rational numbers and you are using only rational numbers during the computations. The disadvantage is that the numerators and denominators can get very large indeed. So the number of operations may indeed be $O(n^{2.376})$ or $O(n^3)$, but the cost of every addition and multiplication gets bigger as $n$ grows because the numbers get bigger.

This is not an issue if you are using floating point numbers, but then you have the problem that floating point computations are not exact. Some methods are more sensitive to this than others. In particular, checking invertibility by computing the determinant is a bad idea in this setting. Gaussian elimination is better. Even better is to use the singular value decomposition, which will be treated towards the end of the MIT course.
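A small sketch of the SVD diagnostic this answer recommends (the function name is illustrative). The smallest singular value measures how close $A$ is to a singular matrix, and the ratio of largest to smallest singular value is the condition number; a huge condition number means the matrix is invertible in exact arithmetic but numerically unreliable:

```python
import numpy as np

def invertibility_svd(a):
    """Report (smallest singular value, condition number) of a.

    The singular values are the diagonal of Sigma in A = U Sigma V^T.
    smin == 0 means exactly singular; a very large smax/smin means
    ill-conditioned, i.e. effectively singular in floating point.
    """
    s = np.linalg.svd(np.asarray(a, dtype=float), compute_uv=False)
    smin, smax = s.min(), s.max()
    cond = np.inf if smin == 0 else smax / smin
    return smin, cond
```

For the identity matrix this returns `(1.0, 1.0)`, the best-conditioned case, while a rank-deficient matrix yields a smallest singular value at (or within rounding error of) zero.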