
While I was toying around with matrices, I chanced upon a family of $n\times n$ tridiagonal matrices $M_n$ of the following form: the superdiagonal entries are all $1$'s, the diagonal entries are $m_{j,j}=4 j (2 n + 3) - 8 j^2 - 6 n - 5$ for $j=1,\dots,n$, and the subdiagonal entries are $m_{j+1,j}=4 j (2 j - 1) (2 n - 2 j + 1) (n - j)$ for $j=1,\dots,n-1$.

For example, the $4\times 4$ member of this family looks like this:

$M_4=\begin{pmatrix} 7 & 1 & 0 & 0 \\ 84 & 27 & 1 & 0 \\ 0 & 240 & 31 & 1 \\ 0 & 0 & 180 & 19\end{pmatrix}$

I checked the eigenvalues of members of this family and found that each $M_n$ has the squares of the first $n$ odd integers, $1, 9, \dots, (2n-1)^2$, as its eigenvalues. (For example, the eigenvalues of $M_4$ are $1,9,25,49$.) I couldn't find a way to prove this, though.
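For reference, here is a minimal NumPy sketch of the observation (the helper name `build_M` is just for illustration); it builds $M_n$ directly from the formulas above and prints the eigenvalues:

```python
import numpy as np

def build_M(n):
    """Tridiagonal M_n: 1's on the superdiagonal, with the diagonal/subdiagonal formulas above."""
    j = np.arange(1, n + 1)
    diag = 4*j*(2*n + 3) - 8*j**2 - 6*n - 5
    k = np.arange(1, n)
    sub = 4*k*(2*k - 1)*(2*n - 2*k + 1)*(n - k)
    return np.diag(diag) + np.diag(np.ones(n - 1), 1) + np.diag(sub, -1)

for n in range(2, 7):
    eigs = np.sort(np.linalg.eigvals(build_M(n)).real)
    print(n, np.round(eigs, 6))   # squares of the first n odd integers, e.g. [1, 9, 25, 49] for n = 4
```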

I hope someone can help me with this. Thanks!

  • Not sure if this is relevant, but the diagonal and subdiagonal terms of $M$ can be written as $m_{j,j} = 2(2j - 3/2)[2(n-j)+3/2] - 1/2$ and $m_{j+1,j} = 4 j (n-j)(2j - 1)[2(n-j) + 1]$, which contain a lot of terms of the form $(Aj - B)(A(n-j) + B)$. So there's some structure to the madness ;-) (2011-09-09)

3 Answers

7

I think I have something. My solution's a bit convoluted, and I'd be glad to see a shorter path to do this.

Since symmetric matrices are "easier" to handle, we apply a symmetrizing diagonal similarity transformation $\mathbf D^{-1}\mathbf M\mathbf D$ to the matrix $\mathbf M$, where $\mathbf D$ has the diagonal elements

$d_{k,k}=\left(\prod_{j=1}^{k-1} \left(4j(2j-1)(2n-2j+1)(n-j)\right)\right)^\frac12$

(This transformation of an unsymmetric tridiagonal matrix to a symmetric one is due to Jim Wilkinson.) The new matrix, call it $\mathbf W$, has diagonal entries that are identical to that of $\mathbf M$, while the sub- and superdiagonal entries are the square roots of the subdiagonal entries of $\mathbf M$. For instance, here is $\mathbf W_4$:

$\begin{pmatrix} 7 & 2 \sqrt{21} & 0 & 0 \\ 2 \sqrt{21} & 27 & 4 \sqrt{15} & 0 \\ 0 & 4 \sqrt{15} & 31 & 6 \sqrt{5} \\ 0 & 0 & 6 \sqrt{5} & 19 \end{pmatrix}$
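Here is a small sketch of this symmetrization, assuming the `build_M` transcription of the question's formulas (repeated so the snippet runs on its own); `symmetrize` is an ad hoc helper name:

```python
import numpy as np

def build_M(n):
    j = np.arange(1, n + 1)
    diag = 4*j*(2*n + 3) - 8*j**2 - 6*n - 5
    k = np.arange(1, n)
    sub = 4*k*(2*k - 1)*(2*n - 2*k + 1)*(n - k)
    return np.diag(diag) + np.diag(np.ones(n - 1), 1) + np.diag(sub, -1)

def symmetrize(M):
    """Return D^{-1} M D, where d_k is the square root of the product of M's first k-1 subdiagonal entries."""
    sub = np.diag(M, -1)                                   # subdiagonal of M
    d = np.sqrt(np.concatenate(([1.0], np.cumprod(sub))))  # d_1 = 1 (empty product)
    return np.diag(1 / d) @ M @ np.diag(d)

W = symmetrize(build_M(4))
print(np.round(W, 6))        # matches the W_4 displayed above
print(np.allclose(W, W.T))   # True: the transformed matrix is symmetric
```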

I've found that $\mathbf W$ is symmetric positive definite; it should thus have a Cholesky decomposition $\mathbf W=\mathbf C^\top\mathbf C$, where the Cholesky triangle $\mathbf C$ is an upper bidiagonal matrix. Luckily, the entries of $\mathbf C$ take a (somewhat) simple(r) form:

$\begin{align*}c_{k,k}&=\sqrt{(2k-1)(2n-2k+1)}\\c_{k,k+1}&=2\sqrt{k(n-k)}\end{align*}$

Here's $\mathbf C_4$ for instance:

$\begin{pmatrix} \sqrt{7} & 2 \sqrt{3} & 0 & 0 \\ 0 & \sqrt{15} & 4 & 0 \\ 0 & 0 & \sqrt{15} & 2 \sqrt{3} \\ 0 & 0 & 0 & \sqrt{7} \end{pmatrix}$

Why look at the Cholesky decomposition when it's the eigenvalues that we're interested in? It is sometimes expedient to compute the eigenvalues of a symmetric positive definite matrix by considering the singular values of its Cholesky triangle. More precisely, if $\sigma_1,\dots,\sigma_n$ are the singular values of $\mathbf C$, then the eigenvalues of $\mathbf W$ are $\sigma_1^2,\dots,\sigma_n^2$.
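Here is a sketch checking both facts numerically; `build_W` forms $\mathbf W$ directly from its entries (same diagonal as $\mathbf M$, off-diagonals the square roots of $\mathbf M$'s subdiagonal) and `build_C` is the closed form above (both helper names are just for illustration):

```python
import numpy as np

def build_W(n):
    """Symmetrized matrix W: diagonal of M, off-diagonals sqrt of M's subdiagonal."""
    j = np.arange(1, n + 1)
    diag = 4*j*(2*n + 3) - 8*j**2 - 6*n - 5
    k = np.arange(1, n)
    off = np.sqrt(4*k*(2*k - 1)*(2*n - 2*k + 1)*(n - k))
    return np.diag(diag.astype(float)) + np.diag(off, 1) + np.diag(off, -1)

def build_C(n):
    """Upper bidiagonal Cholesky triangle from the closed form above."""
    k = np.arange(1, n + 1)
    d = np.sqrt((2*k - 1) * (2*n - 2*k + 1))
    s = 2 * np.sqrt(k[:-1] * (n - k[:-1]))
    return np.diag(d) + np.diag(s, 1)

n = 4
W, C = build_W(n), build_C(n)
print(np.allclose(C.T @ C, W))            # True: W = C^T C
sigma = np.linalg.svd(C, compute_uv=False)
print(np.sort(sigma**2))                  # [1, 9, 25, 49] = eigenvalues of W
print(np.sort(np.linalg.eigvalsh(W)))     # same values
```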

Here's where it clicked for me. On a hunch, I decided to consider the Golub-Kahan tridiagonal matrix corresponding to $\mathbf C$.

It is part of the theory of the singular value decomposition that if $\mathbf C$ has the singular values $\sigma_1,\dots,\sigma_n$, then the $2n\times 2n$ block matrix

$\mathbf K=\left(\begin{array}{c|c}\mathbf 0&\mathbf C^\top \\\hline \mathbf C&\mathbf 0\end{array}\right)$

has the eigenvalues $\pm\sigma_1,\dots,\pm\sigma_n$. It is also known that there exists a permutation matrix $\mathbf P$ that realizes a similarity transformation between $\mathbf K$ and a special tridiagonal matrix $\mathbf T$:

$\mathbf T=\mathbf P\mathbf K\mathbf P^\top$

and $\mathbf T$ looks like ($n=4$):

$\begin{pmatrix} 0 & \sqrt{7} & 0 & 0 & 0 & 0 & 0 & 0 \\ \sqrt{7} & 0 & 2 \sqrt{3} & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 \sqrt{3} & 0 & \sqrt{15} & 0 & 0 & 0 & 0 \\ 0 & 0 & \sqrt{15} & 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & \sqrt{15} & 0 & 0 \\ 0 & 0 & 0 & 0 & \sqrt{15} & 0 & 2 \sqrt{3} & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 \sqrt{3} & 0 & \sqrt{7} \\ 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{7} & 0 \end{pmatrix}$

Note the structure of $\mathbf T$: the diagonal entries are zero, and the off-diagonal entries are the diagonal and superdiagonal entries of $\mathbf C$ "riffled" together. $\mathbf T$ is what is referred to as a Golub-Kahan tridiagonal matrix.
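Here is a sketch of this step. The answer only asserts that such a $\mathbf P$ exists; the snippet below uses one explicit choice, the permutation that interleaves the rows of the two blocks (`build_C` repeated from the sketch above):

```python
import numpy as np

def build_C(n):
    k = np.arange(1, n + 1)
    d = np.sqrt((2*k - 1) * (2*n - 2*k + 1))
    s = 2 * np.sqrt(k[:-1] * (n - k[:-1]))
    return np.diag(d) + np.diag(s, 1)

n = 4
C = build_C(n)
K = np.block([[np.zeros((n, n)), C.T],
              [C, np.zeros((n, n))]])

# Interleave the two blocks: new row/column order 0, n, 1, n+1, ..., n-1, 2n-1
order = np.ravel(np.column_stack([np.arange(n), np.arange(n, 2*n)]))
T = K[np.ix_(order, order)]    # equals P K P^T for the corresponding permutation matrix P

print(np.round(T, 6))          # tridiagonal: zero diagonal, off-diagonals riffled from C
sigma = np.linalg.svd(C, compute_uv=False)
print(np.allclose(np.sort(np.linalg.eigvalsh(T)),
                  np.sort(np.r_[-sigma, sigma])))   # True: eigenvalues of T are the ±sigma_i
```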

As it turns out, a further diagonal similarity transformation $\mathbf T^\prime=\mathbf F\mathbf T\mathbf F^{-1}$, where $\mathbf F$ has the diagonal entries $f_{k,k}=\sqrt{\binom{2n-1}{k-1}}$, turns $\mathbf T$ into a member of a rather famous family of matrices. Here's the $2n\times 2n$ matrix $\mathbf T^\prime$ for $n=4$:

$\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 7 & 0 & 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 5 & 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 & 0 & 6 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 & 7 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}$

$\mathbf T^\prime$ is what is known as the Clement-Kac(-Sylvester) matrix. It is well known that the $2n\times 2n$ Clement-Kac(-Sylvester) matrix has the eigenvalues $\pm1,\pm3,\dots,\pm(2n-1)$ (and thus these are the eigenvalues of $\mathbf T$ and $\mathbf K$ as well). From this, we find that the singular values of $\mathbf C$ are $1,3,\dots,2n-1$ (the first $n$ odd numbers), and thus the eigenvalues of $\mathbf W=\mathbf C^\top\mathbf C$ and of the original matrix $\mathbf M$ are $1,9,\dots,(2n-1)^2$.
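For what it's worth, here is an end-to-end numerical sketch of the whole chain for $n=4$; the interleaving permutation and the Clement-Kac construction below are explicit choices consistent with the matrices displayed above:

```python
import numpy as np
from math import comb

n = 4

# Cholesky triangle C from the closed form earlier
k = np.arange(1, n + 1)
C = (np.diag(np.sqrt((2*k - 1) * (2*n - 2*k + 1)))
     + np.diag(2 * np.sqrt(k[:-1] * (n - k[:-1])), 1))

# Golub-Kahan tridiagonal T, via the block matrix K and the interleaving permutation
K = np.block([[np.zeros((n, n)), C.T], [C, np.zeros((n, n))]])
order = np.ravel(np.column_stack([np.arange(n), np.arange(n, 2*n)]))
T = K[np.ix_(order, order)]

# Diagonal scaling F with f_k = sqrt(binom(2n-1, k-1)) turns T into the Clement-Kac matrix
f = np.sqrt([comb(2*n - 1, i) for i in range(2*n)])
T_prime = np.diag(f) @ T @ np.diag(1 / f)
kac = np.diag(np.arange(1, 2*n), 1) + np.diag(np.arange(2*n - 1, 0, -1), -1)

print(np.allclose(T_prime, kac))               # True
print(np.sort(np.linalg.eigvals(kac).real))    # ±1, ±3, ..., ±(2n-1)
print(np.sort(np.linalg.eigvalsh(C.T @ C)))    # 1, 9, ..., (2n-1)^2: the eigenvalues of W (and M)
```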

Whew!

  • I'm still hoping somebody can outdo my answer, though. :) (2011-11-10)
4

Here is the eigendecomposition $M_4= VDV^{-1}$ confirming @Craig's observation:

$\begin{pmatrix} 1 &1 &1 &1\\ 42 &18 &2 &-6\\ 840 &-120 &-120 &72\\ 5040 &-3600 &2160 &-720 \end{pmatrix} \begin{pmatrix}49\\&25\\&&9\\&&&1\end{pmatrix}\begin{pmatrix} 1 &1 &1 &1\\ 42 &18 &2 &-6\\ 840 &-120 &-120 &72\\ 5040 &-3600 &2160 &-720 \end{pmatrix}^{-1}$

If you can provide a little hint on how you came up with this, then we can probably understand why $V$ has this special structure. Also, you can distribute the squares to the left and right matrices and obtain an even weirder situation.

2

If I'm not mistaken, the eigenvector for the largest eigenvalue, $(2n-1)^2$, is of the form

$\begin{pmatrix} 1 \\ (2n-1)!/(2n-3)! \\ (2n-1)!/(2n-5)! \\ \vdots \\ (2n-1)!/1! \end{pmatrix}$.

This shouldn't be too difficult to prove, and I think the eigenvector for the smallest eigenvalue has a similarly compact form. My suspicion is that there is a simple rotation or pair of rotations you can do to see the eigenvalues explicitly. I would look for something of the form $ULDL^{-1}U^{-1}$ or $LUDU^{-1}L^{-1}$, where $L$ is zero except on the diagonal and subdiagonal, and $U$ is zero except on the diagonal and superdiagonal.
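Here is a quick numerical check of the claimed eigenvector, assuming the `build_M` transcription of the question's formulas (helper names are just for illustration):

```python
import numpy as np
from math import factorial

def build_M(n):
    j = np.arange(1, n + 1)
    diag = 4*j*(2*n + 3) - 8*j**2 - 6*n - 5
    k = np.arange(1, n)
    sub = 4*k*(2*k - 1)*(2*n - 2*k + 1)*(n - k)
    return np.diag(diag) + np.diag(np.ones(n - 1), 1) + np.diag(sub, -1)

for n in range(2, 8):
    # v = (1, (2n-1)!/(2n-3)!, (2n-1)!/(2n-5)!, ..., (2n-1)!/1!)
    v = np.array([factorial(2*n - 1) / factorial(2*n - 1 - 2*i) for i in range(n)])
    print(n, np.allclose(build_M(n) @ v, (2*n - 1)**2 * v))   # True for each tested n: v is an eigenvector for (2n-1)^2
```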