Now I have no problem getting the inverse of a square matrix: you calculate the matrix of minors, apply the cofactor signs, transpose the result, and divide what you get by the determinant of the original matrix.
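(For concreteness, here is a minimal numpy sketch of that cofactor/adjugate procedure on a small $2\times 2$ example; the function name `adjugate_inverse` is just illustrative.)

```python
import numpy as np

def adjugate_inverse(A):
    # Matrix of minors -> cofactor signs -> transpose (adjugate) -> divide by det.
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / np.linalg.det(A)

A = np.array([[2., 3.], [1., 4.]])
print(np.allclose(adjugate_inverse(A), np.linalg.inv(A)))  # True
```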

Now on the last test we didn't get a square matrix; we had to find an inverse and determinant of a $3\times 4$ matrix, and I was lost. $\begin{pmatrix} 2 & 3 & 4 & 5\\ 2 & 3 & 5 & 6\\ 1 & 2 & 0 & 8 \end{pmatrix}$

How would you calculate the inverse of such a matrix? The determinant is obviously $0$, since you need a square matrix to calculate a determinant.

  • 5
    The determinant is _not_ "obviously 0". As you correctly observe, non-square matrices do not have determinants. That doesn't mean the determinant is 0 -- it means that _there is no determinant at all_. (2012-02-01)

2 Answers

7

Actually, there is no such thing as a determinant of a non-square matrix, so it is false that "the determinant is obviously $0$" (it is undefined, not equal to zero, just like a limit that does not exist is not "equal to $0$"; it's just undefined).

Now, when the coefficients lie in a field (the rationals, the reals, the complex numbers, for instance), non-square matrices do not have two-sided inverses. This is an easy application of the Rank-Nullity Theorem: if $A$ is $n\times m$, $n\neq m$, and $B$ is $m\times n$, then either $\mathrm{Null}(A)\gt 0$ or $\mathrm{Null}(B)\gt 0$ (whichever of the two has more columns than rows). But if, say, $\mathrm{Null}(B)\gt 0$, then there is a nonzero vector $\mathbf{x}$ for which $B\mathbf{x}=\mathbf{0}$, and then $(AB)\mathbf{x}=A\mathbf{0}=\mathbf{0}$, so $AB$ cannot be the $n\times n$ identity. A symmetric argument holds when $\mathrm{Null}(A)\gt 0$.
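To see this argument numerically on the $3\times 4$ matrix from the question, here is a sketch in numpy (extracting a null vector from the SVD is my own choice of method, not part of the argument itself):

```python
import numpy as np

A = np.array([[2., 3., 4., 5.],
              [2., 3., 5., 6.],
              [1., 2., 0., 8.]])   # 3x4: more columns than rows

# By Rank-Nullity, nullity(A) >= 4 - 3 = 1, so A kills some nonzero vector.
# Since rank(A) = 3, the last right-singular vector spans the null space.
x = np.linalg.svd(A)[2][-1]
print(np.allclose(A @ x, 0))        # True: A x = 0 with x != 0

# Hence for ANY 4x3 matrix B, (B A) x = B 0 = 0, so B A is never I_4,
# because the identity would send x to x != 0.
B = np.random.rand(4, 3)
print(np.allclose(B @ A @ x, 0))    # True
```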

On the other hand, if $A$ is $n\times m$ and the matrix has "full rank" (that is, $\mathrm{rank}(A) = \min(n,m)$), then either there is (at least one) $m\times n$ matrix $B$ such that $BA=I$, or there is (at least one) $m\times n$ matrix $C$ such that $AC=I$. These matrices are "one-sided inverses" of $A$, and if $n\neq m$, then there will only be inverses on one side, and there will be many of them.

The simplest way to see this is to think of $A$ as a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$. Suppose first that $n\lt m$ (as in your case), and the rank is $n$. That means that as a linear transformation, $A$ is onto. In particular, there are vectors $\mathbf{v}_1,\ldots,\mathbf{v}_n\in\mathbb{R}^m$ such that $A\mathbf{v}_i = \mathbf{e}_i$, where $\mathbf{e}_i$ is the $i$th standard vector (a $1$ in the $i$th coordinate, zeros everywhere else). Now let $C$ be the $m\times n$ matrix that has $\mathbf{v}_i$ in the $i$th column. Then $C\mathbf{e}_i = \mathbf{v}_i$, so $AC(\mathbf{e}_i) = A\mathbf{v}_i = \mathbf{e}_i$. Since this holds for $\mathbf{e}_1,\ldots,\mathbf{e}_n$, it follows that $AC$ is the $n\times n$ identity.

However, there are many possible choices of $\mathbf{v}_i$: since $n\lt m$, there are many vectors that map to $\mathbf{e}_i$, and each different choice of vectors will give you a different $C$. Of course, as in the argument above, there does not exist any matrix $B$ such that $BA=I$.
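Here is a numpy sketch of one concrete right inverse of the question's matrix, and of the non-uniqueness. The formula $C = A^{\mathsf T}(AA^{\mathsf T})^{-1}$ is one standard choice (it assumes $AA^{\mathsf T}$ is invertible, which holds here since $A$ has full row rank); it is not the construction in the answer, just a convenient way to produce one valid $C$:

```python
import numpy as np

A = np.array([[2., 3., 4., 5.],
              [2., 3., 5., 6.],
              [1., 2., 0., 8.]])    # 3x4 with rank 3 (full row rank)

# One right inverse: C = A^T (A A^T)^{-1}.
C = A.T @ np.linalg.inv(A @ A.T)
print(np.allclose(A @ C, np.eye(3)))    # True: A C = I_3

# Non-uniqueness: shifting the columns of C by null-space vectors of A
# gives a different right inverse, since A annihilates the shift.
x = np.linalg.svd(A)[2][-1]             # nonzero vector with A x = 0
C2 = C + np.outer(x, [1., 2., 3.])      # each column shifted by a multiple of x
print(np.allclose(A @ C2, np.eye(3)))   # still True
```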

Now suppose that $A$ is $n\times m$ and $m\lt n$, with $\mathrm{rank}(A)=m$. We view $A$ as a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$. Since $\mathrm{rank}(A)=m$, then $\mathrm{nullity}(A)=0$, so $A$ is one-to-one. That means that, since the vectors $\mathbf{e}_1,\ldots,\mathbf{e}_m$ of $\mathbb{R}^m$ are linearly independent, their images $A\mathbf{e}_1,\ldots,A\mathbf{e}_m$ are linearly independent in $\mathbb{R}^n$. So we can complete this list to a basis of $\mathbb{R}^n$: $A\mathbf{e}_1,\ldots,A\mathbf{e}_m,\mathbf{w}_{m+1},\ldots,\mathbf{w}_n$. Now let $B$ be the matrix that sends $A\mathbf{e}_i$ to $\mathbf{e}_i$ and maps $\mathbf{w}_j$ to any vector in $\mathbb{R}^m$ that you want (such a matrix always exists, for any choice of $\mathbf{w}_j$ and for any choice of images). Then $(BA)\mathbf{e}_i = B(A\mathbf{e}_i) = \mathbf{e}_i$ for $i=1,\ldots,m$, so $BA$ is the $m\times m$ identity matrix. Again, different choices of $\mathbf{w}_j$ and/or different choices of their images give different matrices $B$, so there are lots of matrices $B$ that work. $B$ is a left inverse of $A$.

In this situation, there does not exist any matrix $C$ such that $AC=I$.
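The mirror-image sketch, again in numpy: for a tall full-column-rank matrix, the formula $B = (A^{\mathsf T}A)^{-1}A^{\mathsf T}$ produces one left inverse (assuming $A^{\mathsf T}A$ is invertible; this particular formula is my choice, not the answer's basis-extension construction):

```python
import numpy as np

# A tall 4x3 matrix of full column rank: the transpose of the question's matrix.
A = np.array([[2., 3., 4., 5.],
              [2., 3., 5., 6.],
              [1., 2., 0., 8.]]).T

# One left inverse: B = (A^T A)^{-1} A^T.
B = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(B @ A, np.eye(3)))    # True: B A = I_3

# But no right inverse exists: A C = I_4 would require rank(A) = 4 > 3.
```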

So if $A$ (with coefficients in, say, $\mathbb{R}$, $\mathbb{Q}$, or $\mathbb{C}$) is not a square matrix, then it does not have a two-sided inverse; if it has full rank, it will have "one-sided inverses", but on one side only.


(What follows is likely to be far beyond what you've studied, so you may ignore it if you want)

However, if the entries of the matrix lie in a noncommutative structure (coefficients in a non-commutative ring, coefficients in a division ring), then it is possible to find non-square matrices that have inverses. For example, let $R$ be a ring. We can view a map $R\to R\oplus R$ as a $2\times 1$ matrix with coefficients in $\mathrm{Hom}(R,R)$, and we can view maps $R\oplus R\to R$ as $1\times 2$ matrices with coefficients in $\mathrm{Hom}(R,R)$. Since there are rings $R$ with unity $1_R\neq 0$ such that $R\cong R\oplus R$, such an isomorphism gives us a pair of matrices, one $1\times 2$ and one $2\times 1$, with coefficients in the noncommutative ring $\mathrm{Hom}(R,R)$, which are two-sided inverses of each other.

0

In your case, the (Moore-Penrose) pseudoinverse is $ \frac{1}{711}\left( \begin{matrix} 598 & -474 & -21\\ 885 & -711 & -18\\ -415 & 474 & -96\\ -296 & 237 & 96\\ \end{matrix} \right), $ from here.
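A quick way to reproduce and sanity-check this value (a numpy sketch; `np.linalg.pinv` computes the Moore-Penrose pseudoinverse):

```python
import numpy as np

A = np.array([[2., 3., 4., 5.],
              [2., 3., 5., 6.],
              [1., 2., 0., 8.]])

P = np.linalg.pinv(A)                    # Moore-Penrose pseudoinverse
print(np.round(P * 711).astype(int))     # reproduces the integer matrix above
print(np.allclose(A @ P, np.eye(3)))     # True: since A has full row rank,
                                         # the pseudoinverse is a right inverse
```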

  • 0
    @ChristianRau No, it's not, but it is still worthwhile as a cross-check, AND I put Wolfram links there to make people aware of it, because I think it's a great online tool. I just try to help people help themselves... If you think you have to downvote that, you're welcome to. (2012-02-01)