
Proposition

Let $A\in\mathfrak{M}_{(m+n)\times(m+n)}(\mathbb{K})$, $B\in\mathfrak{M}_{n}(\mathbb{K})$, $C\in\mathfrak{M}_{n\times m}(\mathbb{K})$, $D\in\mathfrak{M}_{m}(\mathbb{K})$. If $A=\left(\begin{array}{cc}B& C\\0 & D \end{array}\right)$ and $B$ is invertible, then $\det A = \det B \cdot \det D$.
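For instance (with $n=2$, $m=1$ and entries chosen purely for illustration):

$$A=\left(\begin{array}{cc|c}1&2&6\\3&4&7\\\hline 0&0&5\end{array}\right),\qquad \det A = 5\cdot\det\left(\begin{array}{cc}1&2\\3&4\end{array}\right)=5\cdot(-2)=-10=\det B\cdot\det D.$$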

My proof. Since $B$ is invertible, applying row transformations to the first $n$ rows gives $ \underbrace{\left(\begin{array}{cc}E_k & 0 \\ 0 & I\end{array}\right) \cdots \left(\begin{array}{cc}E_1 & 0 \\ 0 & I\end{array}\right)}_{Y} \left(\begin{array}{cc}B & C \\ 0 & D\end{array}\right) = \left(\begin{array}{cc}H_B & C' \\ 0 & D\end{array}\right) = \left(\begin{array}{cc}I_n & C' \\ 0 & D\end{array}\right),$ where $H_B$ is the Hermite (reduced row echelon) form of $B$, which equals $I_n$ because $B$ is invertible, and $\det Y = \det(E_k\cdots E_1) = \det(B^{-1}) = (\det B)^{-1}$.

Now, applying row transformations to the last $m$ rows: $ \underbrace{\left(\begin{array}{cc}I & 0 \\ 0 & F_l\end{array}\right) \cdots \left(\begin{array}{cc}I & 0 \\ 0 & F_1\end{array}\right)}_{Z} \left(\begin{array}{cc}H_B & C' \\ 0 & D\end{array}\right) = \underbrace{\left(\begin{array}{cc}H_B & C' \\ 0 & H_D\end{array}\right)}_{W}. $ Suppose first that $D$ is invertible. Then $H_D = I_m$ and, since $W$ is upper triangular with ones on the diagonal, $\det W = 1$. Moreover $\det Z = \det(F_l\cdots F_1) = (\det D)^{-1}$, so $1 = \det W = \det(ZYA) = (\det D)^{-1}(\det B)^{-1}\det A$, hence $\det A = \det B \cdot \det D$.

If $D$ is not invertible, then $\det D = 0$; in this case $H_D$ has a null row, so $\det W = 0$, and since $Y$ and $Z$ are invertible this forces $\det A = 0 = \det B \cdot \det D$.
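Carrying this out on the illustrative matrix above (same made-up numbers), the two reductions give

$$W = ZYA = \left(\begin{array}{ccc}1&0&-5\\0&1&\tfrac{11}{2}\\0&0&1\end{array}\right),\qquad \det Y = (\det B)^{-1} = -\tfrac12,\qquad \det Z = (\det D)^{-1} = \tfrac15,$$

so $\det A = \dfrac{\det W}{\det Y\,\det Z} = \dfrac{1}{\left(-\frac12\right)\left(\frac15\right)} = -10$, as expected.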

Is there anything incomplete/wrong? Is there a faster/more elegant proof? Thanks in advance.

1 Answer


I really dislike doing anything with elementary matrices, since I tend to find them quite messy. I prefer to prove statements like this using the definition of the determinant, as I do below. This approach also works even if $B$ is not invertible, and there's just something rather nice about it!

If we have an $n\times n$ square matrix $M$ with $M_{ij} = m_{ij}$, then $\det M = \displaystyle \sum_{\sigma \in S_n} \epsilon(\sigma)\prod_{i=1}^n m_{i \sigma(i)},$

where $\epsilon(\sigma)$ is the sign of $\sigma$.

So now we have $\det A = \displaystyle \sum_{\sigma \in S_{n+m}} \epsilon(\sigma)\prod_{i=1}^{n+m} a_{i \sigma(i)}.$

But if $\sigma(i) > n$ for some $i \leq n$, then, since $\sigma$ is a bijection, there must also be some $j > n$ with $\sigma(j) \leq n$; that entry $a_{j\sigma(j)}$ lies in the zero block, so $\displaystyle \prod_{i=1}^{n+m} a_{i \sigma(i)} = 0.$
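For example (taking $n=2$, $m=1$ purely for illustration), the cycle $\sigma = (1\,3\,2)$ has $\sigma(1)=3>n$ and consequently $\sigma(3)=2\leq n$, so its term vanishes:

$$\epsilon(\sigma)\,a_{1\sigma(1)}\,a_{2\sigma(2)}\,a_{3\sigma(3)} = \epsilon(\sigma)\,a_{13}\,a_{21}\,a_{32} = \epsilon(\sigma)\,c_{11}\,b_{21}\cdot 0 = 0.$$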

This means that the only $\sigma \in S_{n+m}$ that contribute to the determinant are those with $\sigma(i) \leq n$ for $i \leq n$ and $\sigma(i) > n$ for $i > n$. (This essentially means that we can "split up" each such $\sigma$ into two permutations, $\tau \in S_n$ and $\rho \in S_m$.) With a little thought we can see that

$\det A = \displaystyle \sum_{\tau \in S_{n}}\sum_{\rho \in S_m}\epsilon(\tau) \epsilon(\rho) \prod_{i=1}^{n} a_{i \tau(i)} \prod_{i=n+1}^{n+m}a_{i\rho(i)}$

(where we understand that $\rho(n+i) = n + \rho(i)$ for $1 \leq i \leq m$), so we get

$\det A = \displaystyle \sum_{\tau \in S_{n}} \epsilon(\tau) \prod_{i=1}^{n} b_{i \tau(i)}\left( \sum_{\rho \in S_m}\epsilon(\rho)\prod_{i=1}^{m}d_{i\rho(i)}\right)$

$\det A = \det B\det D$
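To see the splitting concretely in the smallest non-trivial case (again, sizes chosen just for illustration), take $n=2$, $m=1$: the only permutations of $S_3$ that survive are the identity and the transposition $(1\,2)$, so

$$\det A = a_{11}a_{22}a_{33} - a_{12}a_{21}a_{33} = (b_{11}b_{22}-b_{12}b_{21})\,d_{11} = \det B\cdot\det D.$$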

It is a little hard to write it out completely rigorously, and I think that writing out explicitly what we are doing when splitting $S_{n+m}$ into $S_n$ and $S_m$ is both unnecessary and detracts from the elegance of the proof. I hope seeing this encourages you to think in terms of the definition of determinants!