
Suppose you are given this $2 \times 2$ matrix of trig functions:

$$\begin{vmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{vmatrix}$$

Evaluating at $\theta = 0$ gives the identity matrix:

$$\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix}$$

Noticing that the original matrix is an example of a Wronskian, I extended it to the $3 \times 3$ case:

$$\begin{vmatrix} \cos\theta & \sin\theta & -\cos\theta \\ -\sin\theta & \cos\theta & \sin\theta \\ -\cos\theta & -\sin\theta & \cos\theta \end{vmatrix}$$

Evaluating at $\theta = 0$ yields:

$$\begin{vmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{vmatrix}$$

which is symmetric about the diagonal, leading me to believe that all the odd orders $n$ have determinant equal to zero. The next even case, e.g. $4 \times 4$, evaluated at $\theta = 0$:

$$\begin{vmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \end{vmatrix}$$

which is also symmetric about the diagonal, leading me to believe that the even orders greater than two also have determinant zero.
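As a quick sanity check, here is a minimal sympy sketch (not a proof, just the construction above: the first row is $\cos\theta$ followed by its successive antiderivatives, and each later row is the derivative of the row above it) that computes the determinants:

```python
# Minimal sketch of the construction: first row cycles through the
# antiderivatives of cos (cos, sin, -cos, -sin, ...), and each subsequent
# row is the derivative of the previous one.
import sympy as sp

theta = sp.symbols('theta')
cycle = [sp.cos(theta), sp.sin(theta), -sp.cos(theta), -sp.sin(theta)]

def trig_wronskian(n):
    rows = [[cycle[j % 4] for j in range(n)]]                # first row
    for _ in range(n - 1):
        rows.append([sp.diff(f, theta) for f in rows[-1]])   # differentiate row-wise
    return sp.Matrix(rows)

for n in range(2, 7):
    M = trig_wronskian(n)
    print(n, sp.simplify(M.det()), M.subs(theta, 0).det())
```

This reproduces the pattern: the determinant is $1$ for $n = 2$ and $0$ for every $n \ge 3$, already symbolically and not only at $\theta = 0$ (for $n \ge 3$ the third column is the negative of the first).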

Originally, when I attempted the case $n = 5$, I wrote the determinant as a sum of products of trig functions without evaluating at zero. I noticed that the number of terms grows with $n$ at the same rate as the order of the symmetric group $S_n$.
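For concreteness, the sum-of-products expansion I have in mind is the Leibniz formula, which has exactly one term for each permutation $\sigma \in S_n$ (hence $n!$ terms):

$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}.$$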

So, I have two questions:

  1. Was I right? Is the determinant equal to $0$ for $n > 2$ because of the symmetry of the matrix about the diagonal?

  2. Since the terms of the sum-of-products representation of the determinant are in bijection with the elements of $S_n$, is there a group-theoretic proof to be had?

  • For the purposes of this question, the elements of the matrix are cosine, whose derivative is $-\sin$, whose derivative again is $-\cos$, whose derivative is $\sin$, whose derivative is cosine again. This forms a four-element cycle. The "anti-derivative" is equivalent to stepping to the next element of this cycle, but in the opposite direction from the derivative. Now, is THAT clearer? (2011-12-23)

1 Answer


There's a simpler explanation to be had. Remember that the determinant of a matrix will be zero whenever the rows or columns are linearly dependent. When you iterate by your scheme, you end up with rows that are scalar multiples of each other, which explains why the determinant vanishes.
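To make this concrete in the $3 \times 3$ case above: the third row is the second derivative of the first, and since $\frac{d^2}{d\theta^2}\cos\theta = -\cos\theta$ and $\frac{d^2}{d\theta^2}\sin\theta = -\sin\theta$,

$$\begin{pmatrix} -\cos\theta & -\sin\theta & \cos\theta \end{pmatrix} = -\begin{pmatrix} \cos\theta & \sin\theta & -\cos\theta \end{pmatrix},$$

so row $3 = -\,$row $1$ and the determinant vanishes.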

I should also point out that symmetric matrices need not have determinant zero; you may be misremembering the fact that $\textit{anti}$-symmetric $n \times n$ matrices with $n$ odd (i.e. those $A$ with $A^t = -A$) have zero determinant.

  • ...this explains why it takes $n = 3$ to get a zero determinant. If you consider the plot of, e.g., cosine, the second derivative shifts the function by $\pi$, inverting the original. This is equivalent to making row $n - 2$ the opposite sign of row $n$! (2011-12-24)