
Suppose $F$ is a field where $1 \neq -1$ and $V$ is a $2n$ dimensional $F$-vector space. Also suppose that $M,N$ are involutions, i.e. $M^2 = I$ and $N^2 = I$, and that $M$ and $N$ anti-commute, i.e. $MN = -NM$.

I would like to show that $$ M = \left[\begin{array}{cc}A & 0\\ 0 & -A \end{array} \right],~~ N =\left[\begin{array}{cc}0 & B\\ B & 0 \end{array} \right] $$ (these are given in block matrix notation, so that $A,B$ are matrices, not scalars).

I found this website which makes the same claim (the website handles a slightly more general case, and notes that $M,N$ being involutions implies they are invertible), but I am not able to follow the argument.

First off, could anyone verify that this is true for an arbitrary $F$-vector space as described? Also some help with the proof would be much appreciated, thank you.

  • Are you sure about this result? Are you not missing "similar to"? Take $M=\mathrm{diag}(1,-1,1,-1)$ and $N=\begin{pmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$. (2012-02-07)
  • I guess the result is not true, then. So what do the solutions look like? (2012-02-07)
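The pair in the first comment can be checked directly. A minimal NumPy sketch (illustrative only): both matrices are involutions and they anti-commute, yet $M$'s diagonal is not $\mathrm{Id}_n$ followed by $-\mathrm{Id}_n$, so the claim in the question can only hold up to a change of basis.

```python
import numpy as np

# The comment's counterexample: M = diag(1, -1, 1, -1) and N the
# anti-diagonal identity matrix.
M = np.diag([1, -1, 1, -1])
N = np.fliplr(np.eye(4, dtype=int))

assert np.array_equal(M @ M, np.eye(4, dtype=int))  # M^2 = I
assert np.array_equal(N @ N, np.eye(4, dtype=int))  # N^2 = I
assert np.array_equal(M @ N, -(N @ M))              # MN = -NM
```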

1 Answer


The right statement is that the matrices are similar to $$\begin{pmatrix} \mathrm{Id}_n & 0 \\ 0 & - \mathrm{Id}_n \end{pmatrix} \quad \begin{pmatrix} 0 & \mathrm{Id}_n \\ \mathrm{Id}_n & 0 \end{pmatrix}.$$ "Similar to" means that we can choose a basis of $V$ in which the matrices have this form. In coordinates, your matrices look like $$S \begin{pmatrix} \mathrm{Id}_n & 0 \\ 0 & - \mathrm{Id}_n \end{pmatrix} S^{-1} \quad S \begin{pmatrix} 0 & \mathrm{Id}_n \\ \mathrm{Id}_n & 0 \end{pmatrix} S^{-1}$$ for some invertible matrix $S$. The subscript $n$ means that I am talking about the $n \times n$ identity matrix.
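As a quick sanity check (NumPy, purely illustrative, with an arbitrary choice $n = 3$), the canonical pair above satisfies both involution relations and the anti-commutation relation, and conjugating both matrices by the same invertible $S$ preserves all three relations, which is why the statement is basis-independent:

```python
import numpy as np

n = 3
I, Z = np.eye(n), np.zeros((n, n))

M = np.block([[I, Z], [Z, -I]])   # diag(Id_n, -Id_n)
N = np.block([[Z, I], [I, Z]])    # identities on the skew diagonal

for X in (M, N):
    assert np.allclose(X @ X, np.eye(2 * n))   # involutions
assert np.allclose(M @ N, -(N @ M))            # anti-commutation

# Conjugating both by the same invertible S preserves the relations.
rng = np.random.default_rng(0)
S = rng.normal(size=(2 * n, 2 * n))            # generically invertible
Si = np.linalg.inv(S)
Mc, Nc = S @ M @ Si, S @ N @ Si
assert np.allclose(Mc @ Mc, np.eye(2 * n))
assert np.allclose(Nc @ Nc, np.eye(2 * n))
assert np.allclose(Mc @ Nc, -(Nc @ Mc))
```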


This looks like homework, so I'd rather not give a full solution.

Since $M^2 = 1$, the matrix $M$ is diagonalizable with eigenvalues $1$ and $-1$. So we can choose a basis where $$M = \begin{pmatrix} \mathrm{Id}_k & 0 \\ 0 & - \mathrm{Id}_{2n-k} \end{pmatrix}.$$

Write $N$ in block form as $\left( \begin{smallmatrix} A & B \\ C & D \end{smallmatrix} \right)$.

Now what can you deduce from the equation $MN=-NM$? And, once you've used that, what can you deduce from the equation $N^2=\mathrm{Id}_{2n}$?
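For readers who want to check the first deduction mechanically, here is a small SymPy sketch (an illustration for the hypothetical small case $2n = 4$, $k = 2$, not part of the original answer): with $M = \mathrm{diag}(\mathrm{Id}, -\mathrm{Id})$, computing $MN + NM$ in block form forces the diagonal blocks $A$ and $D$ of $N$ to vanish.

```python
import sympy as sp

# Generic 2x2 symbolic blocks A, B, C, D of N, named as in the answer.
A, B, C, D = (sp.Matrix(2, 2, lambda i, j, p=p: sp.Symbol(f"{p}{i}{j}"))
              for p in "ABCD")
I2 = sp.eye(2)
M = sp.diag(I2, -I2)
N = sp.Matrix(sp.BlockMatrix([[A, B], [C, D]]))

eq = M * N + N * M   # MN = -NM says this must be the zero matrix
assert eq[:2, :2] == 2 * A         # so A = 0
assert eq[2:, 2:] == -2 * D        # so D = 0
assert eq[:2, 2:] == sp.zeros(2)   # the off-diagonal blocks impose nothing
```

The remaining step, plugging $A = D = 0$ into $N^2 = \mathrm{Id}_{2n}$, is left as the answer intends.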

  • How can you conclude from $M^2 = 1$ that $M$ is diagonalizable? I see that if $M$ has an eigenvalue it must be $\pm 1$, but what if $M$ doesn't? If I knew I could diagonalize, I would have solved this problem long ago :( (2012-02-07)
  • Do you know the theorem that, if the minimal polynomial of $M$ has all its roots in $K$ and has no double roots, then $M$ is diagonalizable over $K$? (2012-02-07)
  • If not, hint: Induction on $\dim M$. We have $(M-\mathrm{Id})(M+\mathrm{Id})=0$, so at least one of these factors has a kernel. Then what? (2012-02-07)
  • Minimal polynomial and characteristic polynomial aren't the same by any chance, are they? From context I would guess that in this case $M^2-1$ is the minimal polynomial. Also, what is meant by $\dim M$? I have heard of $\dim \ker M$ and $\dim \operatorname{im} M$, but not of $\dim M$. (2012-02-07)
  • Oh, sorry, just the size of the matrix $M$. In general, the minimal polynomial and the characteristic polynomial are not the same. For example, if $M$ is the $n \times n$ identity matrix, then its minimal polynomial is $x-1$ and its characteristic polynomial is $(x-1)^n$. There are a lot of ways to show that (over a field of characteristic not $2$) a matrix satisfying $M^2=\mathrm{Id}$ is diagonalizable, and I didn't mean to focus on one particular one. If you are near the beginning of a rigorous linear algebra course, then this is a good challenge for you. (2012-02-07)
  • Going back to your first hint: Induction. If $n=1$ then $M$ is diagonal. If $n>1$ then $(M-I)(M+I)=0$ implies one factor has a kernel, which gives an eigenvector. Now consider $M$ restricted to a complement of the span of that eigenvector. The restricted operator still satisfies $(M-I)(M+I)=0$ and has dimension $n-1$, so we may choose $n-1$ eigenvectors of it by the induction hypothesis, and they must be independent of the first because the restriction avoids its span. -- I feel like I did this wrong because I don't see how it would fail if the roots weren't simple. Any more hints? (2012-02-07)
  • In general, $M$ restricted to a complementary $n-1$ dimensional space need not map that space to itself, so the induction breaks. (2012-02-07)
  • Okay, I've made a lot of progress, but there is one final kink. I have obtained the correct form for $M$, and I have shown $N=S\left(\begin{array}{cc}0&B\\B^{-1}&0\end{array}\right)S^{-1}$, but I don't see how I can force $B=I_n$. Do you have any suggestions for this bit? (2012-02-07)
  • @David I was wondering if you would be able to clarify something for me. I am interested in this form for anti-commuting matrices, but I am wondering whether the right form is this one or the more general one given on [this site.](http://goo.gl/YKhG2) That general one seems to have rows and columns of zero entries, whereas here both matrices are full rank. Are you saying that if we decompose one of the matrices as $S\,\mathrm{diag}(I,-I)\,S^{-1}$, then in that same basis the other one will always be as you have written it, with the identity matrices on the skew diagonal? (2012-08-25)
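Regarding the last questions in the comments, here is a numerical sketch (NumPy, illustrative only, using the counterexample pair from the comments as input) of one way to build the basis: take any basis of the $+1$ eigenspace of $M$ and use its image under $N$ as the basis of the $-1$ eigenspace. Since $MN = -NM$, the image does land in the $-1$ eigenspace, and this particular choice of the second half of the basis is what makes the off-diagonal blocks of $N$ exactly $\mathrm{Id}_n$.

```python
import numpy as np

# Anti-commuting involutions from the earlier comment.
M = np.diag([1.0, -1.0, 1.0, -1.0])
N = np.fliplr(np.eye(4))

# Columns of S: a basis of the +1 eigenspace of M, followed by its
# image under N (which lies in the -1 eigenspace because MN = -NM).
vals, vecs = np.linalg.eigh(M)
plus = vecs[:, vals > 0]
S = np.hstack([plus, N @ plus])

Si = np.linalg.inv(S)
Mc, Nc = Si @ M @ S, Si @ N @ S
assert np.allclose(Mc, np.diag([1.0, 1.0, -1.0, -1.0]))
assert np.allclose(Nc, np.block([[np.zeros((2, 2)), np.eye(2)],
                                 [np.eye(2),        np.zeros((2, 2))]]))
```

Because the second half of the basis is $N$ applied to the first half, and $N^2 = \mathrm{Id}$, the operator $N$ swaps the two halves of the basis vector by vector, which is precisely the statement $B = \mathrm{Id}_n$.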