
I was reviewing some matrices and noticed something interesting:

If $r = \begin{pmatrix} 0&1\\ -1&0 \end{pmatrix}$, then $rr=-I$, and also $\exp{(\theta r)} = \cos\theta \, I + \sin\theta \, r$. No wonder: the matrix $R(\theta) = e^{\theta r}$ is the 2D rotation matrix, just as $e^{i\theta}$ rotates a vector in the Argand plane. I have only a cursory knowledge of complex analysis, so I would like to know where I can find the details, i.e. what the unifying theme is and in which literature it can be found.
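For concreteness, both identities are easy to check numerically. A minimal sketch, assuming numpy and scipy are available (the value of $\theta$ is arbitrary):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

r = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# r squared is minus the identity
assert np.allclose(r @ r, -np.eye(2))

# exp(theta * r) agrees with cos(theta) I + sin(theta) r
theta = 0.7
assert np.allclose(expm(theta * r),
                   np.cos(theta) * np.eye(2) + np.sin(theta) * r)
```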

  • Thanks a lot for the kind words! Re: homomorphism. This is simply a high-brow way of saying: $p(A) \cdot q(A) = (p \cdot q)(A)$ for any two polynomials $p,q$ and any matrix $A$, and for the constant polynomial $c(x) \equiv 1$ we have $c(A) = \mathbf{1}$. (2011-07-14)
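To illustrate the evaluation homomorphism from the comment above, here is a small sketch assuming numpy (the helper `poly_at` is hypothetical, not part of the thread):

```python
import numpy as np

def poly_at(coeffs, A):
    """Evaluate a polynomial (leading coefficient first) at a square matrix A."""
    result = np.zeros_like(A)
    for c in coeffs:
        result = result @ A + c * np.eye(len(A))  # Horner's scheme
    return result

p = [1, 0, 1]   # p(x) = x^2 + 1
q = [1, -2]     # q(x) = x - 2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

# p(A) q(A) = (p * q)(A), where np.polymul multiplies coefficient lists
assert np.allclose(poly_at(p, A) @ poly_at(q, A),
                   poly_at(np.polymul(p, q), A))
```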

5 Answers

18

joriki's answer is really nice and to the point as usual, but let me add my two cents and describe the relation to the matrix representation of complex numbers.

First notice that multiplication by $i$ corresponds to a (counterclockwise) $90^{\circ}$-rotation around the origin in the complex plane. Now we can consider $\mathbb{C}$ as a $2$-dimensional vector space over $\mathbb{R}$ with basis $1,i$, so that $z = a + bi = \begin{pmatrix}a\\b\end{pmatrix}$. Let me denote the $\mathbb{R}$-linear map $z \mapsto iz$ by $\mathbf{J}$. Note that for $z = a + bi = \begin{pmatrix}a\\b\end{pmatrix}$ we have $iz = ia - b = -b + ia = \begin{pmatrix}-b\\a\end{pmatrix}$, so we must have $\mathbf{J} = \begin{pmatrix}0&-1\\1&0\end{pmatrix}$. You can of course also see this by remembering that a rotation by the angle $\alpha$ has the matrix $\begin{pmatrix}\cos{\alpha}&-\sin{\alpha}\\\sin{\alpha}&\cos{\alpha}\end{pmatrix}$.
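A quick numerical sketch of this correspondence, assuming numpy (the helper `as_vector` is a hypothetical name):

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0, 0.0]])

def as_vector(z):
    """Write a + bi as the column vector (a, b)."""
    return np.array([z.real, z.imag])

z = 3 + 2j
# Multiplying by i in C is the same as applying J in R^2
assert np.allclose(J @ as_vector(z), as_vector(1j * z))
```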

So far so good, but this is only the beginning of the story! Now clearly we have $\mathbf{J}^2 = - \mathbf{1}$, $\mathbf{J}^3 = -\mathbf{J}$ and $\mathbf{J}^4 = \mathbf{1}$, so $\mathbf{J}$ satisfies properties very similar to the ones we're used to from $i$...

Given this, it is natural to try and look at matrices of the form $a\mathbf{1} + b\mathbf{J} = \begin{pmatrix}a&-b\\b&a\end{pmatrix}$ (sums of a scalar multiple of the identity and an antisymmetric real $2\times2$-matrix).

Since we're working in a vector space of matrices, addition behaves in exactly the same way as usual, so let us look at multiplication. You should convince yourself that matrix multiplication gives $(a\mathbf{1} + b\mathbf{J})(c\mathbf{1}+d\mathbf{J}) = (ac-bd)\mathbf{1} + (ad+bc)\mathbf{J}$, giving us back the multiplication rule for complex numbers from matrix multiplication. Note also that complex conjugation simply corresponds to transposition, and that the determinant encodes the square of the absolute value: $\det(a\mathbf{1}+b\mathbf{J}) = a^2 + b^2 = |a+bi|^2$, as you can check easily.
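All three observations can be checked numerically. A sketch assuming numpy (the helper `as_matrix` is a hypothetical name):

```python
import numpy as np

def as_matrix(z):
    """Write a + bi as the real 2x2 matrix a*1 + b*J = [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 3 + 2j, 1 - 4j

# matrix multiplication reproduces complex multiplication
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))
# transposition corresponds to complex conjugation
assert np.allclose(as_matrix(z).T, as_matrix(z.conjugate()))
# the determinant is the squared absolute value
assert np.isclose(np.linalg.det(as_matrix(z)), abs(z) ** 2)
```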

If editing were not so painfully slow at the moment, I'd have loved to elaborate further by plugging in complex values and ending up with the quaternions and the Pauli matrices, but for the moment a simple Wikipedia link will have to do: https://en.wikipedia.org/wiki/Quaternion. See in particular the passage on matrix representations of the quaternions.

  • Looking at this observation geometrically (along with matching up properties of two-dimensional vectors and complex numbers) was the key for me to appreciate the correspondence as well. (2011-07-16)
13

The connection is due to the fact that this matrix has eigenvalues $\mathrm i$ and $-\mathrm i$. Since these eigenvalues are distinct, the matrix is diagonalizable, say $M = P\operatorname{diag}(\mathrm i,-\mathrm i)P^{-1}$; and since the eigenvalues of the square of a matrix are the squares of its eigenvalues, $M^2 = P\operatorname{diag}(-1,-1)P^{-1} = -I$. The same is true for any diagonalizable square matrix of any dimension whose eigenvalues are all $\pm\mathrm i$.
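A quick check of both claims, assuming numpy:

```python
import numpy as np

r = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# the eigenvalues of r are -i and i ...
eigenvalues = sorted(np.linalg.eigvals(r), key=lambda z: z.imag)
assert np.allclose(eigenvalues, [-1j, 1j])

# ... their squares are both -1, and indeed r^2 = -I
assert np.allclose(r @ r, -np.eye(2))
```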

6

One of the first algebra courses I took as a student defined $i$ as the matrix $\left( \begin{array}{clcr} 0 & 1\\-1 & 0 \end{array} \right)$ and went on to define the complex numbers in terms of the appropriate $2 \times 2$ real matrices. I had seen complex numbers before, of course, with the usual $i^2 = -1$ definition, but I found the matrix definition much more satisfying: you didn't need to "invent" an element with $i^2 = -1$; you could see it with your own eyes, in a familiar context.

In a similar spirit, you might like to think about identifying the division algebra of quaternions with the ring of $2 \times 2$ complex matrices of the form $\left(\begin{array}{clcr} z & w\\ -\overline{w} & \overline{z} \end{array} \right)$. I find this much easier to remember than the definition of the quaternions as $ 4 \times 4$ real matrices.
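A sketch of that identification, assuming numpy (the helper `quat` is a hypothetical name):

```python
import numpy as np

def quat(z, w):
    """The 2x2 complex matrix [[z, w], [-conj(w), conj(z)]]."""
    return np.array([[z, w],
                     [-np.conj(w), np.conj(z)]])

one = quat(1, 0)
i, j, k = quat(1j, 0), quat(0, 1), quat(0, 1j)

# Hamilton's relations: i^2 = j^2 = k^2 = ijk = -1
for m in (i @ i, j @ j, k @ k, i @ j @ k):
    assert np.allclose(m, -one)
```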

  • Ah, I see. That makes sense. Thanks! (2011-07-13)
0

Think of $\mathbb{C}$ as $\mathbb{R}^2$. You can identify the complex numbers with the linear operators corresponding to multiplication by them. This is all pretty much determined by the action of multiplication by $i$, so what does multiplication by $i$ do to the basis $(1,0),(0,1)$? We get the matrix $ \left( \begin{array}{cc} 0&-1\\ 1&0\\ \end{array} \right) $, so we get an identification (with a little work) $ a+bi\mapsto \left( \begin{array}{cc} a&-b\\ b&a\\ \end{array} \right) $.
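The "little work" amounts to checking that this identification respects multiplication; a symbolic sketch, assuming sympy:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)

def phi(x, y):
    """The matrix representing x + yi."""
    return sp.Matrix([[x, -y],
                      [y,  x]])

# (a+bi)(c+di) = (ac-bd) + (ad+bc)i, and phi turns this into matrix multiplication
lhs = phi(a, b) * phi(c, d)
rhs = phi(a*c - b*d, a*d + b*c)
assert (lhs - rhs).expand() == sp.zeros(2, 2)
```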

0

The matrix $ \begin{pmatrix} 0&-1\\ 1&0 \end{pmatrix} $ is the companion matrix of the polynomial equation $ x^2 + 1 = 0 $. The solutions of the polynomial equation are of course $\pm i$. Diagonalizing the companion matrix (computing its eigenvalues) is therefore equivalent to solving the polynomial equation.
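A short sketch of this equivalence, assuming numpy: the eigenvalues of the companion matrix coincide with the roots of $x^2+1$.

```python
import numpy as np

# companion matrix of x^2 + 1
C = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues = sorted(np.linalg.eigvals(C), key=lambda z: z.imag)
roots = sorted(np.roots([1, 0, 1]), key=lambda z: z.imag)  # roots of x^2 + 1

assert np.allclose(eigenvalues, roots)  # both are -i and i
```

In fact, numpy's `np.roots` works exactly this way: it builds the companion matrix of the given polynomial and computes its eigenvalues.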