18

I need tricks or shortcuts to find the inverse of $2 \times 2$ and $3 \times 3$ matrices. I have to take a time-based exam, in which I have to find the inverse of square matrices.

  • 38
    Tangentially: how sad that people are being timed computing matrix inverses.... I can think of very few less useful abilities than being able to compute the inverse of a $3\times3$ matrix fast!2011-02-11
  • 2
    Will it be a multiple choice test? In that case you can just multiply by the provided answers and look for which one works.2011-02-11
  • 2
    @Ross: ssh! That's so much faster and easier as to defeat the intended purpose of the question. (Compare: what is the factorization of $N$? Here, let me give you some choices...) :)2011-02-11
  • 2
    @Pete: I agree, but that is often what is asked on tests. And one should take advantage of the situation.2011-02-11
  • 2
    @Ross: Sure, I was not being entirely serious. If my warning was to anyone, it was to those (hopeless souls?) who write multiple choice linear algebra tests.2011-02-11
  • 0
@Ross What should I get after multiplying, the identity matrix? I am one of those hopeless souls :(2011-02-12
  • 0
    Yes, you should get the identity. It has 1 along the main diagonal (upper left to bottom right) and 0 elsewhere. Do you know why this is the identity?2011-02-12
  • 0
@Ross No, I don't know why it's called the identity; maybe because all the diagonal elements are 1.2011-02-12
  • 0
Because if you multiply it by another 3x3 matrix, you get that matrix back again, just like multiplying a single number by 1.2011-02-12
  • 0
    @Ross Thanks for info :)2011-02-13

3 Answers

20

For a 2x2 matrix, the inverse is: $$ \left(\begin{array}{cc} a&b\\ c&d \end{array}\right)^{-1} = {1 \over a d - b c} \left(\begin{array}{rr} d&-b\\ -c&a \end{array}\right)~,~~\text{ where } ad-bc \ne 0. $$

Just swap $a$ and $d$, negate $b$ and $c$, then divide everything by the determinant $ad - bc$.

That's really the most straightforward 'trick': just memorize that pattern.
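As a minimal Python sketch of that pattern (the helper name `inverse_2x2` is mine, just for illustration):

```python
def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]]: swap a and d, negate b and c,
    divide everything by the determinant ad - bc."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular, no inverse exists")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# Example: [[4, 7], [2, 6]] has determinant 4*6 - 7*2 = 10.
print(inverse_2x2(4, 7, 2, 6))  # [[0.6, -0.7], [-0.2, 0.4]]
```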

For 3x3, it's a lot more complicated, but there is a pattern. As usual, compute the determinant first (kind of a pain, but surely you already know the pattern to compute that quickly).

$$ \left(\begin{array}{ccc} a&b&c\\ d&e&f\\ g&h&i \end{array}\right)^{-1} = {1 \over {\rm{det}}} \left(\begin{array}{rrr} e i - f h&-(b i - c h)&b f - c e\\ -(d i - f g)&a i - c g&-(a f -c d)\\ d h - e g&-(a h - b g)&a e - b d \end{array}\right). $$ The pattern is that each entry is

  • the determinant of the submatrix gotten by removing that row and column. I.e. for row 2 column 3 (from $f$'s position), the determinant is $a h - b g$: $$ \det\left(\begin{array}{cc} a&b\\ g&h \end{array}\right) = a h - b g $$

  • then multiply by the checkerboard pattern of signs (i.e. position 1,1 is positive, 1,2 is negative, and so on; mathematically, multiply by $(-1)^{r+c}$).

  • Then transpose.

See? There's a pattern, but I feel it's about the same symbolic complexity as just doing brute-force Gaussian elimination.
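The three bullet steps above (minor, checkerboard sign, transpose) can be sketched in Python; the function names are mine, not from any library:

```python
def inverse_3x3(m):
    """Invert a 3x3 matrix (list of lists) by the cofactor/adjugate recipe:
    2x2 minors, checkerboard signs, transpose, divide by the determinant."""
    # Step 1: determinant of the 2x2 submatrix left after deleting row r, column c.
    def minor_det(r, c):
        rows = [i for i in range(3) if i != r]
        cols = [j for j in range(3) if j != c]
        return (m[rows[0]][cols[0]] * m[rows[1]][cols[1]]
                - m[rows[0]][cols[1]] * m[rows[1]][cols[0]])

    # Step 2: checkerboard of signs (-1)^(r+c) gives the cofactor matrix.
    cof = [[(-1) ** (r + c) * minor_det(r, c) for c in range(3)]
           for r in range(3)]

    # Determinant via cofactor expansion along the first row.
    det = sum(m[0][c] * cof[0][c] for c in range(3))
    if det == 0:
        raise ValueError("matrix is singular, no inverse exists")

    # Step 3: transpose the cofactor matrix (the adjugate), divide by det.
    return [[cof[c][r] / det for c in range(3)] for r in range(3)]

# Example: a diagonal matrix inverts entrywise.
print(inverse_3x3([[2, 0, 0], [0, 4, 0], [0, 0, 5]]))
# [[0.5, 0.0, 0.0], [0.0, 0.25, 0.0], [0.0, 0.0, 0.2]]
```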

  • 0
Is there any formula for 3x3, like the one above?2011-02-11
  • 0
    Yes...but...it involves the determinant of the 3x3 and all the 2x2 submatrices. I thought that that isn't much of a trick or shortcut; it seems about the same complexity as just plodding through row/column operations to convert the 3x3 into an identity matrix and applying those operations to an identity matrix at the same time. Of course, if there's an expectation that the determinant is 1, then maybe it's appropriate. Also, be warned that the row/column operations are 'meaningful' (you see that they are computing the inverse) but the 'trick' is just blind application of a formula.2011-02-11
  • 0
Yes there is, but it is not as nice. There is a formula like this for square matrices of any size. If $A$ is your matrix then the $ij$ entry of $A^{-1}$ is $(-1)^{i+j}/\det(A) \cdot \det(minor(A(j,i)))$ where $minor(A(j,i))$ is my nonstandard and poor notation for the $(n-1)\times (n-1)$ minor of $A$ obtained by removing the $j$th row and the $i$th column. In the $3\times 3$ case this says the entry in the first column and first row of $A^{-1}$ is $+1/\det(A)\cdot (a_{22}a_{33} - a_{23}a_{32})$. For sparse matrices Gauss-Jordan is ok. If all entries of $A$ are nonzero this formula is faster.2011-02-11
  • 0
@Mitch The above-mentioned method, does it work for all non-singular 2x2 matrices?2011-02-11
  • 0
@abcdefghijklmnopqrstuvwxyz: Well, sorta: if it's singular, the determinant is 0, so the formula fails exactly when the matrix fails to have an inverse (when it is singular).2011-02-11
  • 0
    Assuming the exam is not purely an exercise in lightning fast computation of determinants -- i.e., assuming the OP has other things to learn for it -- I would counsel against rote memorization on the scale of the formula given above for the inverse of a 3x3 matrix. A lot of people (including me) would be in danger of misremembering the formula, and how can any partial credit be given if you screw up the formula you have blindly memorized? Row reduction (of the augmented matrix [A|I_n]), as suggested below, seems fast enough and is a much more broadly useful skill.2011-02-11
  • 0
@Pete: I totally agree, hence my (weak) warnings. But I think it is a useful thing to lay out explicitly anyway, like 'lightning' calculations for multiplication (in decimal, what's n5 times n5? It's n*(n+1) followed by 25; 65*65 = 4225). It's not deep math, if you mess up 1 operation the whole thing could fall apart, and there's no meaning in the trick (figuring out the trick has some meaning), but it -is- useful.2011-02-11
  • 0
    In my previous comment, I meant to say "inverses" instead of "determinants".2011-02-11
  • 0
    @Mitch: sure, it's useful. It depends upon how much space you have in your brain (and also how much you enjoy such tricks). I certainly know the trick about squaring numbers ending in $5$ that you mention, as probably does almost every math person. I also know the fast test for divisibility by $7$, although some number theorists do not. When I was young I regarded it as a challenge to work out other divisibility tests, so I did things like $13$ and $17$...but I no longer have space in my brain for them. (And the computers all around me are much better at this sort of thing than I...)2011-02-11
  • 0
    @Pete: yes, I'm there with you. But in some sense, isn't all mathematics about shortcuts? Thinking real hard (finding a proof), so that you don't have to think (apply the theorem, without worrying about the details)?2011-02-11
11

Your best bet is the Gauss-Jordan method.
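A minimal sketch of Gauss-Jordan in Python: row-reduce the augmented matrix $[A \mid I]$ until the left half is the identity, and the right half is then $A^{-1}$. The function name and the pivot tolerance are my own choices, not from any standard library:

```python
def gauss_jordan_inverse(a):
    """Invert a square matrix via Gauss-Jordan elimination on [A | I]."""
    n = len(a)
    # Build the augmented matrix [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular, no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half of the reduced matrix is the inverse.
    return [row[n:] for row in aug]
```

Unlike the cofactor formula, this works unchanged for any $n \times n$ matrix.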

  • 0
    any other method ?2011-02-11
  • 2
    @SpongeBob: By the way, Gauss-Jordan elimination is *much* faster than the Laplace expansion (the method Mitch mentioned). It also skips over having to find the determinant, and it works for any square matrix.2011-04-17
5

So we want a way to solve $2 \times 2$ or $3 \times 3$ linear systems efficiently. I think the route to take is Cramer's Rule for the $2 \times 2$ or $3 \times 3$ case. For the $2 \times 2$ case, we state it as follows:

For some coefficient matrix A= $\left[ \begin{array}{rr} a & b \\ c & d \end{array} \right]$

$A^{-1}=\dfrac{1}{ad-bc} \cdot \left[ \begin{array}{rr} d & -b \\ -c & a \end{array} \right]~ \iff ad-bc \ne 0$ $~~~~~~~~~\Big($i.e., Det(A)$~\ne ~ 0\Big)$

For the $3 \times 3$ case, we will denote that as the following:

$x_{1} = \dfrac{|\,\vec{b}~~\vec{a}_{2}~~\vec{a}_{3}\,|}{|\bf{A}|}~~,$

$x_{2} = \dfrac{|\,\vec{a}_{1}~~\vec{b}~~\vec{a}_{3}\,|}{|\bf{A}|}~~,$

$x_{3} = \dfrac{|\,\vec{a}_{1}~~\vec{a}_{2}~~\vec{b}\,|}{|\bf{A}|},$

where $\vec{a}_{1}, \vec{a}_{2}, \vec{a}_{3}$ are the columns of $\bf{A}$, so the $i$th column is replaced by $\vec{b}$. This comes from the matrix equation ${\bf{A}}\vec{x}=\vec{b},~~$ where $\vec{x}=[x_{1}~~x_{2}~~x_{3}]^{T}$.

Writing out the entries of $A = \left[\begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right]$ and assuming ${\bf|{A}| =} ~ |a_{ij}| ~ \not= ~ 0,$ the solutions $x_{1},~x_{2},~x_{3}$ are:

$x_{1} = \dfrac{1}{|{\bf{A}}|} \left|\begin{array}{rrr} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{array} \right|$,

$x_{2} = \dfrac{1}{|{\bf{A}}|} \left|\begin{array}{rrr} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{array} \right|$,

$x_{3} = \dfrac{1}{|{\bf{A}}|} \left|\begin{array}{rrr} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{array} \right|$.
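The three formulas above can be sketched in Python (the helper names `det3` and `cramer_3x3` are mine, just for illustration):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer_3x3(a, b):
    """Solve A x = b by Cramer's rule: x_i = det(A with column i
    replaced by b) / det(A), valid only when det(A) != 0."""
    d = det3(a)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    xs = []
    for i in range(3):
        ai = [row[:] for row in a]       # copy A...
        for r in range(3):
            ai[r][i] = b[r]              # ...and replace column i by b
        xs.append(det3(ai) / d)
    return xs

# Example: a diagonal system, solved entry by entry.
print(cramer_3x3([[2, 0, 0], [0, 3, 0], [0, 0, 4]], [2, 6, 8]))
# [1.0, 2.0, 2.0]
```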

An alternate way of doing this would be using row-reduction methods, known as Gaussian elimination (producing row echelon form, ref) or Gauss-Jordan elimination (producing reduced row echelon form, rref).

I hope this helped out. Let me know if there is anything you do not understand.

Thanks.

Good Luck.