
Edit: I'm genuinely not sure why this has gotten so little activity. If somebody knows, please tell me so I can rework it.


As a note: I am a purist and really want to see a proof of this, but I've also got course obligations to fulfill, and so not much time. I understand that the answer involves group theory, where my background is weak, so more complete answers would really help me.

Let $A$ be an $n\times n$ matrix of real numbers. Define the $(i, j)$th minor $M_{(i, j)}$ of $A$ to be the determinant of the $(n-1) \times (n-1)$ matrix obtained by removal of the $i$th row and $j$th column from $A$ - e.g.,

$A = \begin{bmatrix}1 & 4 & 2 \\9 & 4 & 6 \\8 & 3 & 1 \end{bmatrix} \implies M_{(1, 1)} = \begin{vmatrix}4 & 6 \\3 & 1 \end{vmatrix}, \text{ }M_{(2,1)} = \begin{vmatrix} 4 & 2 \\ 3 & 1 \end{vmatrix}, \mathrm{etc.}$

Define the $(i,j)$th cofactor $C_{(i, j)}$ by $C_{(i, j)} = A_{(i,j)}(-1)^{i + j}M_{(i,j)}$, where $A_{(i,j)}$ denotes the $(i,j)$th entry of $A$; i.e., multiply $A_{(i,j)}M_{(i,j)}$ by the sign in the $(i,j)$th position of the matrix

$\mathrm{sgn} = \begin{bmatrix} + & - & + & \cdots \\- & + & - & \cdots \\ + & - & + &\cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix};$

thus,

$A = \begin{bmatrix}1 & 4 & 2 \\9 & 4 & 6 \\8 & 3 & 1 \end{bmatrix} \implies C_{(1, 1)} = 1 \times \begin{vmatrix}4 & 6 \\3 & 1 \end{vmatrix}, \text{ }C_{(2,1)} = -9 \times \begin{vmatrix} 4 & 2 \\ 3 & 1 \end{vmatrix}.$

Finally, define the cofactor expansion $D_i(A)$ across the $i$th row of $A$ by $D_i(A) = \sum_{k = 1}^n C_{(i, k)}$. For example, we get

\begin{eqnarray} D_1 \Bigg( \begin{bmatrix}1 & 4 & 2 \\9 & 4 & 6 \\8 & 3 & 1 \end{bmatrix} \Bigg) = C_{(1, 1)} + C_{(1, 2)} + C_{(1, 3)} = \begin{vmatrix}4 & 6 \\3 & 1 \end{vmatrix} - 4 \begin{vmatrix} 9 & 6 \\ 8 & 1 \end{vmatrix} + 2\begin{vmatrix}9 & 4 \\8 & 3 \end{vmatrix} = 132 \\ D_2 \Bigg( \begin{bmatrix}1 & 4 & 2 \\9 & 4 & 6 \\8 & 3 & 1 \end{bmatrix} \Bigg) = C_{(2, 1)} + C_{(2, 2)} + C_{(2, 3)} = -9 \begin{vmatrix}4 & 2 \\3 & 1 \end{vmatrix} + 4 \begin{vmatrix} 1 & 2 \\ 8 & 1 \end{vmatrix} - 6\begin{vmatrix}1 & 4 \\8 & 3 \end{vmatrix} = 132 \\ D_3 \Bigg( \begin{bmatrix}1 & 4 & 2 \\9 & 4 & 6 \\8 & 3 & 1 \end{bmatrix} \Bigg) = C_{(3, 1)} + C_{(3, 2)} + C_{(3, 3)} = 8 \begin{vmatrix}4 & 2 \\4 & 6 \end{vmatrix} - 3 \begin{vmatrix} 1 & 2 \\ 9 & 6 \end{vmatrix} +\begin{vmatrix}1 & 4 \\9 & 4 \end{vmatrix} = 132. \end{eqnarray}

The equality of each of these expansions serves as an example of the theorem I want to see proven:

Theorem: For a given matrix $A$, the cofactor expansion $D_i(A)$ has the same numerical value for every row $\rho_i$ (and likewise for columns, with a definition analogous to that for rows).
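For what it's worth, the claim is easy to check numerically. Below is a minimal sketch in Python using the definitions above; the helper names (`minor`, `det`, `D`) are mine, and `det` simply expands along the first row to give the recursion a base:

```python
def minor(A, i, j):
    """The matrix A with row i and column j removed (0-indexed)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def D(A, i):
    """Cofactor expansion D_i(A) across row i, as defined above."""
    return sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j))
               for j in range(len(A)))

A = [[1, 4, 2], [9, 4, 6], [8, 3, 1]]
print([D(A, i) for i in range(3)])  # -> [132, 132, 132]
```

(Indices are 0-based here, while the post's are 1-based; the sign $(-1)^{i+j}$ is unaffected, since shifting both indices by one preserves parity.)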


Some notes:

(1) (Yes, I do realize that all the $\TeX$ belies my claim to not have much time. What I mean is that I could use some instruction - as a way of saving quite a bit of valuable time - since I've never taken a course in algebra before.)

(2) I admit that I don't really understand the Leibniz formula, so I don't actually have an immediate definition of the determinant to fall back on inductively. One way around this is to prove the cofactor theorem by induction on $n$. Since every property of the determinant follows (easily) from the cofactor theorem, the above theorem is all I need a proof of at the moment.
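Since note (2) brings up the Leibniz formula, here is a minimal sketch of it in Python, with the sign of a permutation computed from its inversion count (the function names are mine); on the example matrix above it also returns 132:

```python
import math
from itertools import permutations

def sign(p):
    """Parity of a permutation: -1 raised to the number of inversions."""
    inversions = sum(p[i] > p[j]
                     for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    """Leibniz formula: sum of signed products over all permutations."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(leibniz_det([[1, 4, 2], [9, 4, 6], [8, 3, 1]]))  # -> 132
```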

Comment: @anon Yeah, let me be clear: I don't understand the Leibniz formula. I'm looking at permutation parity right now. (2012-12-05)

1 Answer


Perhaps a geometrical view can help:

Consider a matrix $A$ with column vectors $a_1,\ldots,a_n\in\Bbb R^n$. Then $\det A$ is just the signed $n$-dimensional volume of the parallelepiped spanned by the vectors $a_i$: the identity matrix $I$ determines the $n$-dimensional unit cube, which has signed volume $+1$, and any reflection through a hyperplane reverses the sign.
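As a tiny sanity check of the sign convention (a sketch using `numpy.linalg.det` as the signed volume; the particular reflection is my choice):

```python
import numpy as np

I = np.eye(3)                    # the unit cube: signed volume +1
R = np.diag([-1.0, 1.0, 1.0])    # reflection through the y-z hyperplane
print(np.linalg.det(I))          # 1.0
print(np.linalg.det(R))          # -1.0: the reflection flips the sign
```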

Now, using the Gaussian elimination procedure, one can prove that this signed volume coincides with the Leibniz formula for the determinant (the sum of signed products over all permutations of indices):

  1. The signed volume and the Leibniz formula are both multilinear (linear in each variable while the others are fixed) as maps $(\Bbb R^n)^n\to\Bbb R$; they agree on $I$, and (by multilinearity) they also agree whenever one column is $0$. To see multilinearity for the signed volume, fix $a_2,\ldots,a_n$ and the hyperplane spanned by them: the volume then depends only on the height of $a_1$ over this hyperplane, and this height is easily seen to be linear.
  2. Any matrix $A\in\Bbb R^{n\times n}$ can be transformed either to the identity matrix (if it has full rank) or to a matrix with a $0$ column, using Gaussian elimination, whose possible steps are I.) $a_k':=a_k+\lambda a_j$, II.) $a_k':=\vartheta\cdot a_k$, and III.) exchange ($a_k':= a_j,\ a_j':=a_k$), where $\vartheta\ne 0$ and $j\ne k$.
  3. Each step has the same effect on the signed volume as on the Leibniz formula: I.) leaves the value unchanged, II.) multiplies it by $\vartheta$, and III.) changes its sign. For I.), consider the $n=2$ case, a parallelogram spanned by $a_1$ and $a_2$: its height with respect to $a_2$ stays the same if $a_1$ is moved along the parallel line, i.e., if $a_1$ is replaced by $a_1+\lambda a_2$. For III.), try to express the exchange by three applications of I.) together with one application of II.) with $\vartheta=-1$. (The bookkeeping is sketched in code after this list.)
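Here is a short sketch of that bookkeeping, assuming column operations and plain `numpy`; the function name and the pivot tolerance are my own choices:

```python
import numpy as np

def det_by_elimination(A):
    """Column-reduce A to the identity (or detect singularity), tracking
    how each of the three step types scales the signed volume."""
    A = np.array(A, dtype=float)
    n = len(A)
    scale = 1.0                       # accumulated effect of II.) and III.)
    for k in range(n):
        # III.) exchange: bring a nonzero pivot into column k of row k
        pivots = [j for j in range(k, n) if abs(A[k, j]) > 1e-12]
        if not pivots:
            return 0.0                # no pivot: A is singular, volume 0
        if pivots[0] != k:
            A[:, [k, pivots[0]]] = A[:, [pivots[0], k]]
            scale = -scale            # an exchange changes the sign
        # II.) rescale column k so the pivot becomes 1
        theta = A[k, k]
        A[:, k] /= theta
        scale *= theta                # the volume gets multiplied by theta
        # I.) shear the other columns to zero out row k; volume unchanged
        for j in range(n):
            if j != k:
                A[:, j] -= A[k, j] * A[:, k]
    return scale                      # A is now I, with signed volume +1

print(det_by_elimination([[1, 4, 2], [9, 4, 6], [8, 3, 1]]))  # ~ 132.0
```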

Now, turning to the question, use multilinearity in the second column:

$\left|\matrix{1&4&2\\9&4&6\\8&3&1}\right| = \left|\matrix{1&4&2\\9&0&6\\8&0&1}\right|+\left|\matrix{1&0&2\\9&4&6\\8&0&1}\right|+\left|\matrix{1&0&2\\9&0&6\\8&3&1}\right|.$

By step I.) we get

$\left|\matrix{0&4&0\\9&0&6\\8&0&1}\right|+\left|\matrix{1&0&2\\0&4&0\\8&0&1}\right|+\left|\matrix{1&0&2\\9&0&6\\0&3&0}\right|,$

which is basically just the expansion formula along the second column. It can also be read as coordinatizing the vector $a_2$ as $a_2=4e_1+4e_2+3e_3$ and then computing separately the volumes of the parallelepipeds over the parallelogram spanned by $a_1$ and $a_3$.
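Finally, a quick numerical check of this decomposition (a sketch with `numpy`; the three matrices are copied from the displays above):

```python
import numpy as np

A  = np.array([[1, 4, 2], [9, 4, 6], [8, 3, 1]])
# Split the second column as a_2 = 4e_1 + 4e_2 + 3e_3, as above.
B1 = np.array([[1, 4, 2], [9, 0, 6], [8, 0, 1]])
B2 = np.array([[1, 0, 2], [9, 4, 6], [8, 0, 1]])
B3 = np.array([[1, 0, 2], [9, 0, 6], [8, 3, 1]])

parts = [np.linalg.det(B) for B in (B1, B2, B3)]
print(parts)              # ~ [156.0, -60.0, 36.0]
print(sum(parts))         # ~ 132.0
print(np.linalg.det(A))   # ~ 132.0, matching the cofactor expansions
```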