Problem. I am given a symmetric matrix \begin{equation} B_t \doteq \begin{bmatrix}2t & -t & -3t & 0 \\ -t & 2t & 2t & 0 \\ -3t & 2t & 10t & 1 \\ 0 & 0 & 1 & 1-t \end{bmatrix} \end{equation} and my task is to discuss its signature $\operatorname{sign}B_t=(s,k)$ as $t$ varies in the real numbers.
One option would be outright diagonalization, but ain't nobody got time for that. Another is the method of principal minors, which rests on the following proposition:
Proposition. Let $V$ be a real vector space and $q : V \to \mathbb R$ a quadratic form associated with the symmetric matrix $A$. Call $A_i$ the $i$-th leading principal submatrix of $A$ (starting from the upper-left or lower-right corner, ad libitum), and let $\alpha_i = \det A_i$, with $\alpha_0 \doteq 1$. Finally, call $k$ the number of sign changes in the sequence $\alpha_0, \alpha_1, \dots, \alpha_n$. Then, if $\alpha_i \neq 0$ for all $i$, the signature of $A$ is $$\operatorname{sign}(A) = (n-k,k) $$ where $n$ is the order of the matrix (equal to its rank, since $\alpha_n = \det A \neq 0$).
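For concreteness, here is the proposition as a short pure-Python sketch (the helper names `det`, `signature_by_minors`, and `B` are mine; I use `Fraction` so the sign tests are exact rather than floating-point):

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices in this problem)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def signature_by_minors(A):
    """Signature (n - k, k) counted from the signs of the leading principal
    minors; valid only when none of them vanishes (the proposition's hypothesis)."""
    n = len(A)
    alphas = [Fraction(1)] + [det([row[:i] for row in A[:i]]) for i in range(1, n + 1)]
    if any(a == 0 for a in alphas[1:]):
        raise ValueError("a leading principal minor vanishes; method inapplicable")
    k = sum(1 for a, b in zip(alphas, alphas[1:]) if a * b < 0)
    return (n - k, k)

def B(t):
    """The matrix B_t of the problem, with exact rational entries."""
    t = Fraction(t)
    return [[2 * t, -t, -3 * t, 0],
            [-t, 2 * t, 2 * t, 0],
            [-3 * t, 2 * t, 10 * t, 1],
            [0, 0, 1, 1 - t]]
```

For instance, `signature_by_minors(B(1))` sees the minors $1, 2, 3, 16, -3$, counts one sign change, and returns $(3,1)$.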
My doubts concern how lengthy and error-prone this method can be in a parameter-dependent situation, and whether there exists another criterion (besides the not-always-practical Babylonian method) for determining the signature of a matrix when all else fails. I've gone through it anyway:
List the determinants. We compute the determinants of the principal minors of the matrix, starting from the top left, generating the following list of numbers: \begin{equation} \begin{split} \alpha_0(t) &\doteq \color{red}1 \\ \alpha_1(t) &= |2t| = 2t \\ \alpha_2(t) &= \begin{vmatrix} 2t & -t \\ -t & 2t \end{vmatrix} = 4t^2 - t^2 = \color{red}{3t^2} \\ \alpha_3(t) &= (10t)\alpha_2(t) -(2t)\begin{vmatrix} 2t & -t \\ -3t & 2t \end{vmatrix} + (-3t)\begin{vmatrix} -t & -3t \\ 2t & 2t \end{vmatrix} \\ &= (10t)(3t^2) - (2t)(4t^2-3t^2)+(-3t)(-2t^2+6t^2)=30t^3-2t^3-12t^3= \color{red}{16t^3} \\ \alpha_4(t) &=\det B_t = (1-t)\alpha_3(t)-(1)\begin{vmatrix}2t & -t & 0 \\ -t & 2t & 0 \\ -3t & 2t & 1\end{vmatrix} \\ &= (1-t)(16t^3)-(1)\left[(1)\alpha_2(t)-0+0\right]=16t^3-16t^4-3t^2=\color{red}{(-t^2)(16t^2-16t+3)} \end{split} \end{equation}
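These closed forms are easy to mistype, so I double-checked them against brute-force determinants at a few sample values of $t$ (a throwaway script; `det` and `B` are my own helpers, not part of the method):

```python
from fractions import Fraction

def det(M):
    # Cofactor expansion along the first row; exact with Fraction entries.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def B(t):
    # The matrix B_t of the problem.
    t = Fraction(t)
    return [[2 * t, -t, -3 * t, 0],
            [-t, 2 * t, 2 * t, 0],
            [-3 * t, 2 * t, 10 * t, 1],
            [0, 0, 1, 1 - t]]

# Compare each closed-form alpha_i(t) with the actual leading minor.
for t in (Fraction(-2), Fraction(1, 3), Fraction(1), Fraction(5)):
    lead = lambda i: [row[:i] for row in B(t)[:i]]
    assert det(lead(1)) == 2 * t
    assert det(lead(2)) == 3 * t ** 2
    assert det(lead(3)) == 16 * t ** 3
    assert det(lead(4)) == -t ** 2 * (16 * t ** 2 - 16 * t + 3)
```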
Discuss the sign of determinants. The method of principal minors consists in counting how many sign changes and how many sign permanences occur as we scroll through the list of determinants. Of course, in our example things are complicated by the fact that the determinants depend on $t$, so we need to discuss their sign beforehand and then we can evaluate the signature on a case-by-case basis.
- $\alpha_0(t)$ is positive $\forall t$.
- $\alpha_1(t)$ is negative when $t<0$, vanishes when $t=0$, and is positive when $t>0$.
- $\alpha_2(t)$ is positive when $t\neq 0$, and vanishes when $t=0$.
- $\alpha_3(t)$ is negative when $t<0$, vanishes when $t=0$, and is positive when $t>0$.
- $\alpha_4(t)$ is negative when $t<0$, vanishes when $t=0$, is negative when $0<t<0.25$, vanishes when $t=0.25$, is positive when $0.25<t<0.75$, vanishes when $t=0.75$, and is negative when $t>0.75$.
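This case analysis can be double-checked by sampling one point in each interval (another throwaway sketch; the sample points are my choice):

```python
from fractions import Fraction

def sign(x):
    # Returns -1, 0, or 1.
    return (x > 0) - (x < 0)

def alphas(t):
    # The leading principal minors computed above, as functions of t.
    return [Fraction(1), 2 * t, 3 * t ** 2, 16 * t ** 3,
            -t ** 2 * (16 * t ** 2 - 16 * t + 3)]

expected = {
    Fraction(-1):   [1, -1, 1, -1, -1],  # t < 0
    Fraction(1, 8): [1,  1, 1,  1, -1],  # 0 < t < 1/4
    Fraction(1, 2): [1,  1, 1,  1,  1],  # 1/4 < t < 3/4
    Fraction(2):    [1,  1, 1,  1, -1],  # t > 3/4
}
for t, signs in expected.items():
    assert [sign(a) for a in alphas(t)] == signs
```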
Calculate the signature (almost everywhere). Then I get:
- When $t < 0$, the signature is $(1,3)$ (the signs $+,-,+,-,-$ give three changes).
- When $0 < t < 0.25$, the signature is $(3,1)$.
- When $0.25 < t < 0.75$, the signature is $(4,0)$.
- When $t > 0.75$, the signature is $(3,1)$ (the signs $+,+,+,+,-$ give a single change).
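Recounting the sign changes programmatically, with the same $(n-k,k)$ convention as the proposition, reproduces this table (helper names are mine):

```python
from fractions import Fraction

def signature(t):
    # Sign changes along alpha_0, ..., alpha_4; only valid when no
    # alpha vanishes, i.e. away from t = 0, 1/4, 3/4.
    a = [Fraction(1), 2 * t, 3 * t ** 2, 16 * t ** 3,
         -t ** 2 * (16 * t ** 2 - 16 * t + 3)]
    assert all(x != 0 for x in a)
    k = sum(1 for x, y in zip(a, a[1:]) if x * y < 0)
    return (4 - k, k)

assert signature(Fraction(-1)) == (1, 3)    # t < 0
assert signature(Fraction(1, 8)) == (3, 1)  # 0 < t < 1/4
assert signature(Fraction(1, 2)) == (4, 0)  # 1/4 < t < 3/4
assert signature(Fraction(2)) == (3, 1)     # t > 3/4
```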
Singular values of $t$. Here comes the problematic part: we need to discuss the signature of the matrix at the values of $t$ that make at least one of the determinants vanish, where the method of principal minors cannot be applied. In my example, when $t=0$ the matrix simplifies to $$B_0 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix} $$ and I can easily use the Babylonian method: if $q_t$ is the quadratic form associated with $B_t$,
$$ q_{0}(x) = x_4^2 + 2x_3x_4 = (x_4 + x_3)^2 - x_3^2 = - y_3^2 + y_4^2 $$
using the change of variables (change of basis) $y_3 = x_3$, $y_4 = x_3 + x_4$. So $B_0$ is congruent to the matrix
$$\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$
which means its signature is $(1,1)$. This is consistent with the fact that the rank and nullity of $B_0$ are both $2$.
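The completing-the-square step itself is easy to verify numerically (a throwaway check; `q` just evaluates $x^{\mathsf T} B_0\, x$):

```python
import random

B0 = [[0, 0, 0, 0],
      [0, 0, 0, 0],
      [0, 0, 0, 1],
      [0, 0, 1, 1]]

def q(M, x):
    # Evaluate the quadratic form x^T M x.
    return sum(M[i][j] * x[i] * x[j] for i in range(4) for j in range(4))

random.seed(0)
for _ in range(200):
    x = [random.randint(-9, 9) for _ in range(4)]
    y3, y4 = x[2], x[2] + x[3]          # the change of variables above
    assert q(B0, x) == -y3 ** 2 + y4 ** 2
```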
However, the matrices $B_{0.25}$ and $B_{0.75}$ are an utter mess. The Babylonian method is impractical, and my notes do not provide any other means of calculating the signature of a matrix. I have thought of applying the following lemma:
Lemma. Let $V$ be an $(n+1)$-dimensional real vector space and $q : V \to \mathbb R$ be the quadratic form associated with the symmetric matrix $A$, having signature $(s,k)$. Let $W$ be an $n$-dimensional subspace of $V$. Denote $q|_W$ the restriction of $q$ to $W$, $B$ the symmetric matrix associated with it, and $(\sigma,\kappa)$ its signature. Then
- The determinant of $A$ is zero (i.e. $q$ is degenerate) iff $s=\sigma$ and $k = \kappa$;
- $\det A \det B > 0$ iff $s =\sigma + 1$ and $k = \kappa$;
- $\det A \det B < 0$ iff $s =\sigma$ and $k = \kappa + 1$.
My intuition is that it could be of use, though I am not exactly sure how to use this information in practice.
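I did at least verify that the lemma holds at a nonsingular value, taking $t=1$ and $W$ the span of the first three basis vectors (throwaway helpers again, with exact rational arithmetic; this does not yet treat the problematic values $t = 0.25, 0.75$):

```python
from fractions import Fraction

def det(M):
    # Cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def signature_by_minors(A):
    # Principal-minor method; assumes no leading minor vanishes.
    n = len(A)
    a = [Fraction(1)] + [det([row[:i] for row in A[:i]]) for i in range(1, n + 1)]
    k = sum(1 for x, y in zip(a, a[1:]) if x * y < 0)
    return (n - k, k)

t = Fraction(1)
A = [[2 * t, -t, -3 * t, 0],
     [-t, 2 * t, 2 * t, 0],
     [-3 * t, 2 * t, 10 * t, 1],
     [0, 0, 1, 1 - t]]
W = [row[:3] for row in A[:3]]          # matrix of q restricted to W

s, k = signature_by_minors(A)           # (3, 1)
sigma, kappa = signature_by_minors(W)   # (3, 0)
assert det(A) * det(W) < 0              # lemma, item 3 ...
assert (s, k) == (sigma, kappa + 1)     # ... predicts exactly this
```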
The fact that there is no unique, a priori way of determining the signature of a matrix, and that most methods can be very lengthy and error-prone, makes me wonder:
1. Have I been making wrong assumptions in the procedure above? Are there any calculation or conceptual mistakes?
2. Is there a faster and/or safer way to do any one of the steps in the usual methods?
3. Is there a miraculous lemma or theorem concerning matrix signature that could be of help, especially in a parameter-dependent situation?