
This is not an assignment or anything from my book, just pure interest. If I wanted to show this, wouldn't I have to say that the square matrix can be reduced to the identity matrix, and then say that the determinant is not zero? After saying that, and using the fact that the only element in the kernel is the zero vector, could I conclude that the matrix is invertible?

3 Answers


I would do it using the rank-nullity theorem and properties of linear maps. If $\ker(A) = \{0\}$, then the linear map which the matrix represents is injective and has nullity zero. Hence the rank of the matrix is $n$ (if the matrix is $n \times n$), which means that it is a surjective map. Since the map is linear and a bijection, it is an isomorphism, which means it has an inverse. Hence the matrix must be invertible.

(Of course, to state this rigorously you'd have to say explicitly where the map is from and to, and all the rest of it, but if you know a little bit about it it's not hard to do.)
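
For concreteness, here is the implication chain spelled out as a sketch (writing $T_A : \mathbb{R}^n \to \mathbb{R}^n$, $T_A(x) = Ax$, for the map the matrix represents; the middle step is rank-nullity):

$ \ker(A) = \{0\} \;\Rightarrow\; \operatorname{nullity}(T_A) = 0 \;\Rightarrow\; \operatorname{rank}(T_A) = n - 0 = n \;\Rightarrow\; \operatorname{im}(T_A) = \mathbb{R}^n $

so $T_A$ is injective and surjective, hence an isomorphism, and the matrix representing $T_A^{-1}$ is the inverse of $A$.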


You don't need determinants for this.

Imagine doing Gauss-Jordan elimination on the matrix $A$. The result is a matrix $GA$ in reduced row echelon form, where $G$ is invertible. If $GA=I$, then $A$ is invertible (and $G$ is its inverse). Otherwise there is a non-pivot column, and you can use it to construct a nonzero column vector $X$ such that $(GA)X=0$. But since $G$ is invertible, $G(AX)=0$ implies $AX=0$, so $X$ is in the kernel of $A$, and the kernel is therefore nontrivial.
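
Here is a small computational sketch of that argument (my own illustration, not part of the original answer), using SymPy, where `Matrix.rref()` returns the reduced row echelon form together with the pivot columns:

```python
from sympy import Matrix

def invert_or_kernel_vector(A):
    """Either invert the square matrix A, or exhibit a nonzero
    vector in its kernel, following the Gauss-Jordan argument."""
    n = A.rows
    R, pivots = A.rref()           # R = GA in reduced row echelon form
    if len(pivots) == n:           # every column is a pivot column, so GA = I
        return ("invertible", A.inv())
    # Pick a non-pivot column j and build X with X_j = 1 and
    # X_i = -R[row, j] for each pivot column i; then (GA)X = 0, hence AX = 0.
    j = next(c for c in range(n) if c not in pivots)
    X = Matrix.zeros(n, 1)
    X[j] = 1
    for row, i in enumerate(pivots):
        X[i] = -R[row, j]
    assert A * X == Matrix.zeros(n, 1)
    return ("kernel vector", X)

# Example: the second column is twice the first, so the kernel is nontrivial.
A = Matrix([[1, 2], [3, 6]])
print(invert_or_kernel_vector(A))   # ('kernel vector', Matrix([[-2], [1]]))
```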


I was wondering the same thing (about square matrices). Essentially, I wanted to show:

$ N(A) = \{0\} \implies A^{-1} \text{ exists} $

This is not as trivial to prove as the converse, i.e. that any invertible (square) matrix has a null space containing only the zero vector. For the converse there is almost nothing to do: if $Ax = 0$ then $x = A^{-1}0 = 0$, which proves that $x = 0$ is the only solution.

For the original question, the only way I thought of doing it was as follows. Since you know the null space contains only the zero element, the only vector $x$ satisfying

$ Ax = 0$

is the zero vector; in other words, only the all-zero combination of the columns gives $0$. Thus the columns are independent, and hence so are the rows, since the matrix is square. Gaussian elimination will therefore produce $n$ pivots. We know from determinants that the inverse is expressed as follows:

$ A^{-1} = \frac{1}{\det A} C^T $

where $C$ is the cofactor matrix. What matters here, though, is the determinant. We know that the determinant is:

$ \det A = \pm (\text{product of pivots}) $

This is the key. If the columns were not independent, then some pivot would be zero, which would make the determinant zero and the matrix not invertible. Thus it was crucial that the square matrix had independent columns: all its pivots are then non-zero, so its determinant is non-zero and it has an inverse.
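
As a quick numerical illustration of the pivot-determinant relationship (a sketch of my own, using SciPy's LU factorization with partial pivoting rather than hand elimination): in the factorization $A = PLU$ the pivots sit on the diagonal of $U$, and $\det P = \pm 1$ supplies the sign.

```python
import numpy as np
from scipy.linalg import lu

def det_via_pivots(A):
    """det A = +/-(product of pivots): factor A = P L U with partial
    pivoting; the pivots are the diagonal of U and det P = +/-1."""
    P, L, U = lu(A)                  # A = P @ L @ U
    pivots = np.diag(U)
    sign = round(np.linalg.det(P))   # permutation matrix: exactly +/-1
    return sign * np.prod(pivots)

A = np.array([[2.0, 1.0], [4.0, 3.0]])      # independent columns
print(det_via_pivots(A), np.linalg.det(A))  # both 2.0 (up to floating point)

B = np.array([[1.0, 2.0], [3.0, 6.0]])      # dependent columns => a zero pivot
print(det_via_pivots(B))                    # ~0, so B is not invertible
```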