
If the row echelon form of a square matrix has no zero row, it is invertible. Otherwise, it is singular.

Why?

If the row echelon form has a zero row, then a linear system with that coefficient matrix has either no solution or infinitely many solutions. So, is invertibility linked to having exactly one solution?

Is there a geometrical interpretation for my question?

  • A zero row in the row echelon form means that the (square) matrix does not have full rank. A square matrix of size $n\times n$ is invertible if and only if its rank is full (rank $= n$). – 2017-02-22
  • @Peter Hi Peter. I have not learned ranks yet. I am looking for an answer that involves geometry or solution sets, if possible. – 2017-02-22
  • Geometrical interpretation: if a square matrix of size $n\times n$ has full rank, then the row vectors as well as the column vectors form a basis. They span all of $\mathbb R^n$. For $n=2$ and $n=3$ this can be visualized. – 2017-02-22
  • Hi, I don't know if I can link videos, but this one gives a pretty good intuitive understanding: https://www.youtube.com/watch?v=uQhTuRlWMxw – 2017-02-22

2 Answers


Excellent. Your request for a geometric interpretation shows me that you are on the right track in learning linear algebra! (Well, at least for visualizing the standard 1 to 3 dimensions.)

Consider the reduced row echelon form (RREF) of a matrix $A$; it concisely describes some of the subspace information associated with $A$.

The RREF tells us:

  • rank: the number of basis vectors in the column space/range
  • nullity: the number of basis vectors in the null space/kernel
  • invertibility/linear independence: whether the null space is trivial or not

The null space being trivial (i.e., consisting of only the zero vector) implies that the column space of $A$ fills its entire ambient space (of dimension equal to the column count of $A$), and that no nontrivial linear combination of its columns reduces to the zero vector.
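This relationship can be checked numerically. Below is a small sketch (not part of the original answer) using numpy's `matrix_rank` on two hypothetical example matrices, one invertible and one singular; the nullity follows from the rank–nullity theorem:

```python
import numpy as np

# Hypothetical example matrices: A is invertible, B is singular
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 4.0],
              [1.0, 2.0]])    # second row is half the first: singular

for name, M in [("A", A), ("B", B)]:
    rank = np.linalg.matrix_rank(M)
    nullity = M.shape[1] - rank      # rank-nullity theorem
    print(name, "rank =", rank, "nullity =", nullity,
          "invertible:", nullity == 0)
```

A nullity of zero (trivial null space) coincides exactly with the matrix being invertible.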

The process of matrix inversion is supposed to find a matrix which, when multiplied with $A$, yields the identity matrix.

If some nonzero linear combination of the columns of $A$ reduces to the zero vector, then that mapping cannot be reversed: the vector is nullified (mapped to the zero vector), and no candidate inverse can recover the original combination. This is exactly what the rank of a matrix succinctly describes, with mathematical beauty.

So, such vectors are consumed by the null space/kernel of a non-invertible matrix!
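To make "consumed by the kernel" concrete, here is a small illustration (my own hypothetical example, not from the answer). For the singular matrix $B$ below, the combination $2\cdot(\text{column 1}) - 1\cdot(\text{column 2})$ is the zero vector, so $B$ maps the nonzero vector $v = (2, -1)$ to zero, and no matrix multiplied against that result can recover $v$:

```python
import numpy as np

B = np.array([[2.0, 4.0],
              [1.0, 2.0]])    # singular: column 2 = 2 * column 1
v = np.array([2.0, -1.0])     # 2*(col 1) - 1*(col 2) = 0

print(B @ v)                  # [0. 0.] -- v is consumed by the kernel
# Any candidate inverse C satisfies C @ (B @ v) = C @ 0 = 0 != v,
# so no inverse of B can exist.
```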

The exact same behavior can be witnessed and verified on the column space and null space of the transpose $A^T$ (which are, incidentally but not coincidentally, the row space and left null space of $A$).


Just think about the Gauss-Jordan method of finding the inverse. To find the inverse $A^{-1}$ of a matrix $A$, we write the augmented matrix $[A \mid I]$, where $I$ is the identity matrix of the same order as $A$.

Now think about what happens when $\det A = 0$. The elementary row operations produce one or more rows of zeros at the bottom, so $A$ cannot be reduced to $I_n$. In fact, in this case $A$ reduces to its normal form $\begin{bmatrix} I_r & 0\\ 0 & 0\end{bmatrix}$, since $\operatorname{rank} A = r < n$.
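This can be watched happening in code. The sketch below (my own illustration, assuming a singular example matrix) runs forward Gaussian elimination on the augmented matrix $[A \mid I]$ and shows that the left block ends with a zero row, so it can never be driven to the identity:

```python
import numpy as np

def row_echelon(M, tol=1e-12):
    """Forward Gaussian elimination with partial pivoting (row echelon form)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        p = r + np.argmax(np.abs(M[r:, c]))   # partial pivot
        if abs(M[p, c]) < tol:
            continue                          # no pivot in this column
        M[[r, p]] = M[[p, r]]                 # swap pivot row into place
        M[r+1:] -= np.outer(M[r+1:, c] / M[r, c], M[r])
        r += 1
    return M

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular: second row = 2 * first
aug = np.hstack([A, np.eye(2)])     # the augmented matrix [A | I]
R = row_echelon(aug)

# The left 2x2 block of the last row is all zeros, so A cannot reach I.
print(np.allclose(R[-1, :2], 0))    # True
```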

Hope it helps.