
This is Exercise 7, page 21, from Hoffman and Kunze's book.

Let $A$ and $B$ be $2\times 2$ matrices such that $AB=I$. Prove that $BA=I.$

I wrote $BA=C$ and I tried to prove that $C=I$, but I got stuck on that. I am supposed to use only elementary matrices to solve this question.

I know that there is this question, but in those answers they use more than I am allowed to use here.

I would appreciate your help.

  • I don't know what you have learned so far. If you have learned the rank–nullity theorem, then it can be done with that: the nullity of $B$ is $0$, so $B$ has full rank. Then use $(BA-I)B=0$. (2013-05-15)

6 Answers

2

I will give a sketch of a proof. Let $A= \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) $ and $B= \left( \begin{array}{cc} x & y \\ z & w \end{array} \right) $ be such that $AB=I.$ Then we get $\left\{\begin{array}{c} ax + bz = 1 \\ cx + dz = 0 \\ \end{array}\right.$ and $\left\{\begin{array}{c} ay + bw = 0 \\ cy + dw = 1 \\ \end{array}\right.$

I will assume first that $a\neq 0$ (the case $a=0$ is treated at the end; note that $A\neq 0$, since there is no $B$ with $0\cdot B=I$). From $ax+bz=1$ we get $x=\frac{1}{a}-\frac{bz}{a}$, and substituting into $cx+dz=0$ gives $(ad-bc)z=-c$. Suppose that $ad=bc$. If $b=0$ or $c=0$, then $d=0$, so $A$ would be one of $\left( \begin{array}{cc} a & b \\ 0 & 0 \end{array} \right)$, $\left( \begin{array}{cc} a & 0 \\ c & 0 \end{array} \right)$, or $\left( \begin{array}{cc} a & 0 \\ 0 & 0 \end{array} \right)$; in each case it is easy to check that $AB$ cannot equal $I$. So we may assume $b$, $c$, and $d$ are all nonzero, and then $a=\frac{bc}{d}$; but in that case the systems above have no solution, since $ax+bz=\frac{b}{d}(cx+dz)=0\neq 1$. Hence $ad-bc\neq 0$, and we get $z=\frac{-c}{ad-bc}$. Solving the second system in the same way, we find that $B= \frac{1}{ad-bc}\left( \begin{array}{cc} d & -b \\ -c & a \end{array} \right).$ It is easy to check that $BA=I.$ If instead $a=0$, then we have $b\neq0$ and $\dots$
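As a sanity check of the sketch, the candidate $B$ it produces can be verified symbolically. This is only an illustration with the sympy library (not something the book uses), and it assumes $ad-bc\neq 0$:

```python
from sympy import Matrix, eye, simplify, symbols

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])

# The candidate B found in the sketch, valid when ad - bc != 0.
B = Matrix([[d, -b], [-c, a]]) / (a*d - b*c)

# Both products simplify to the identity matrix.
assert simplify(A * B - eye(2)) == Matrix.zeros(2, 2)  # AB = I
assert simplify(B * A - eye(2)) == Matrix.zeros(2, 2)  # BA = I
```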

I don't know how to solve the exercise in a different way. This is my best effort.

2

I know this is old, but I think I have found the answer that was intended. I also struggled with this one for a while because, as spohreis mentioned, you don't have much to go on at the time this is asked (no determinants, no transposes, no inverses even).

That being said, in problem 3 of section 1.4 you prove that all $2\times 2$ row-reduced echelon matrices are of the following form:

$ \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right]\quad,\quad \left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right]\quad,\quad \left[ \begin{array}{cc} 1 & c \\ 0 & 0 \end{array} \right]\quad,\quad \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \,.$

Now assume that $A$ and $B$ are $2 \times 2$ matrices such that $AB=I$. By theorem 5 (pg 12) we have that $B$ is row-equivalent to a row-reduced echelon matrix $R$, and by the corollary to theorem 9 (pg 20) this implies that $B=PR$ where $P$ is a product of elementary matrices. Similarly we have that $A=QT$ (where $Q$ is a product of elementary matrices and $T$ is in row-reduced echelon form).

Now we have that $AB=I \implies QTPR=I$, but clearly $T=I$: if the bottom row of $T$ were all zeros, then the bottom row of $TPR$ would be zero, so the product $QTPR$ would have the form $\left[ \begin{array}{cc} aQ_{11} & bQ_{11} \\ aQ_{21} & bQ_{21} \end{array} \right]$ for some $a,b\in F$, which clearly cannot be $I$. A similar argument shows that $R=I$. Thus $A$ and $B$ are both products of elementary matrices. By theorem 2 (pg 7) each elementary row operation has an inverse, and by theorem 9 (pg 20) each elementary matrix therefore has an inverse. Now we can write

$\begin{align} AB=QP=E_{q_1}E_{q_2} \cdots E_{q_t}E_{p_1} \cdots E_{p_s}&=I\\ E_{q_1}^{-1}E_{q_1}E_{q_2} \cdots E_{q_t}E_{p_1} \cdots E_{p_s}E_{q_1}&=E_{q_1}^{-1}E_{q_1}=I\\ &\vdots\\ E_{p_1} \cdots E_{p_s}E_{q_1} \cdots E_{q_t}&=I\\ PQ&=I\\ BA&=I\,. \end{align}$

(Note that at the end here, although I chose to use the standard inverse notation, it really is enough that such a matrix exists, which follows from theorem 2 and theorem 9 alone - no need to really "know" about inverse matrices yet. You could, if you so chose, just use theorem 9 to rewrite $QP=I$ in terms of elementary row operations, and then just use theorem 2 directly without ever mentioning inverse matrices.)
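The cycling step in the display above can be illustrated numerically. This is just a small sketch with the numpy library; the particular elementary matrices are my own examples, not from the book:

```python
import numpy as np

# Two example 2x2 elementary matrices: a row swap and a row scaling.
E1 = np.array([[0., 1.], [1., 0.]])
E2 = np.array([[3., 0.], [0., 1.]])

A = E1 @ E2                                # plays the role of Q
B = np.linalg.inv(E2) @ np.linalg.inv(E1)  # plays the role of P, so AB = I

factors = [E1, E2, np.linalg.inv(E2), np.linalg.inv(E1)]
assert np.allclose(np.linalg.multi_dot(factors), np.eye(2))  # AB = I

# Move the leading factor to the end, one step at a time, as in the
# derivation above; the product stays equal to I at every step.
for _ in range(2):  # two steps carry both factors of A past B
    factors = factors[1:] + factors[:1]
    assert np.allclose(np.linalg.multi_dot(factors), np.eye(2))

# After cycling, the product is P times Q, i.e. BA, so BA = I as well.
assert np.allclose(B @ A, np.eye(2))
```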

0

$AB= I$, so $\det(AB) = \det(A)\det(B) = \det(I) = 1$. Hence $\det(B)\neq 0$, and $B$ is invertible.

Now let $BA= C$. Then $BAB= CB$, and since $AB=I$ this gives $B= CB$. Multiplying on the right by $B^{-1}$ yields $I = BB^{-1} = CBB^{-1} = C$, so $C= I$.
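This chain of identities is easy to check on a concrete example. A quick numeric sketch with the numpy library, using an example matrix of my own choosing:

```python
import numpy as np

A = np.array([[2., 1.], [5., 3.]])  # an arbitrary invertible example
B = np.linalg.inv(A)                # chosen so that AB = I

# det(AB) = det(A) det(B) = 1, so det(B) != 0 and B is invertible.
assert np.isclose(np.linalg.det(A) * np.linalg.det(B), 1.0)

C = B @ A                             # C = BA
assert np.allclose(B @ A @ B, C @ B)  # BAB = CB, i.e. B = CB
assert np.allclose(C, np.eye(2))      # right-cancelling B gives C = I
```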

0

Here we explain how to derive the entries of $B$ from the entries of $A$. As the book explains at the beginning of page 19 (section 1.5), if we let $B_1$ and $B_2$ denote the first and second columns of the matrix $B$, then one can write $AB = [AB_1,AB_2]$. Hence $AB = I$ if and only if $AB_1=\left[\begin{array}{c} 1\\0\\\end{array}\right]$ and $AB_2=\left[\begin{array}{c} 0\\1\\\end{array}\right]$. So we know that the two systems of two linear equations in two unknowns $AX=\left[\begin{array}{c} 1\\0\\\end{array}\right]$ and $AY=\left[\begin{array}{c} 0\\1\\\end{array}\right]$ both have solutions. We would like to prove that the only solutions are $B_1$ and $B_2$, and to find these solutions in terms of the entries of $A$.

If we let $A= \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]$, then the two systems have augmented matrices $\left[ \begin{array}{ccc} a & b & 1\\ c & d & 0 \end{array} \right]$ and $\left[ \begin{array}{ccc} a & b & 0\\ c & d & 1 \end{array} \right]$. Since both systems have solutions ($B_1$ and $B_2$), the systems $\left[\begin{array}{ccc} ad-bc & 0 & d\\ 0 & ad-bc & -c \end{array}\right]$ and $\left[ \begin{array}{ccc} ad-bc & 0 & -b\\ 0 & ad-bc & a \end{array} \right]$, whose equations are linear combinations of the original ones, also have solutions (section 1.2 of the book). Now, based on the explanation on page 14 of the conditions under which non-homogeneous systems of equations have solutions, if $ad-bc=0$ then $a=b=c=d=0$. This is a contradiction, since if $A=0$ then $AB=0B=0\neq I$. So we conclude that $ad-bc \neq 0$. This immediately proves that the two systems have unique solutions, so $B$ can only be of the form $\left[ \begin{array}{cc} \frac{d}{ad-bc} & \frac{-b}{ad-bc} \\ \frac{-c}{ad-bc} & \frac{a}{ad-bc} \end{array} \right].$

Now using this form for $B$, one can verify that $BA = I$.
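The column-by-column argument can be checked symbolically: solving the two systems $AX=e_1$ and $AY=e_2$ recovers exactly the matrix displayed above. A sketch with the sympy library (not part of the original answer; sympy implicitly assumes the symbolic determinant $ad-bc$ is nonzero, matching the case proved above):

```python
from sympy import Matrix, eye, simplify, symbols

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])

# Solve A X = e1 and A Y = e2; the solutions are the columns of B.
B1 = A.solve(Matrix([1, 0]))
B2 = A.solve(Matrix([0, 1]))
B = Matrix.hstack(B1, B2)

# B matches the displayed formula, and BA = I.
det = a*d - b*c
expected = Matrix([[d, -b], [-c, a]]) / det
assert simplify(B - expected) == Matrix.zeros(2, 2)
assert simplify(B * A - eye(2)) == Matrix.zeros(2, 2)
```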

-1

$AB = I$ implies that $ABAB = I$, and $AABB = A(AB)B = A(I)B = AB = I$, hence $ABAB = AABB$. Since $\det(AB) = \det(A)\det(B) = 1$, the determinants of $A$ and $B$ are units, so $A$ and $B$ have inverses (via the classical adjoint). Cancelling $A$ on the left and $B$ on the right in $AABB = ABAB$ gives $AB = BA$, hence $BA = AB = I$.

Hope that helps,

Note: this proof works when $\mathbb F$ is an arbitrary commutative ring with unity. You didn't specify what $\mathbb F$ was, so I'm stating the generality in which this proof holds.

  • Thanks for the comment. You should learn, though, to comment *before* downvoting and to explain why you're downvoting. If you don't, it just gets people angry and your downvote is pointless. (2012-02-23)