36

Possible Duplicate:
Relation of this antisymmetric matrix $r = \left(\begin{smallmatrix}0 &1\\-1 & 0\end{smallmatrix}\right)$ to $i$

On Wikipedia, it says that:

Matrix representation of complex numbers
Complex numbers $z=a+ib$ can also be represented by $2\times2$ matrices that have the following form: $\pmatrix{a&-b\\b&a}$

I don't understand why they can be represented by these matrices or where these matrices come from.

  • 1
    @SalechAlhasov Your link leads merely to a login page. Maybe you can post a new link? (2015-08-19)

8 Answers

42

No one seems to have mentioned it explicitly, so I will. The matrix $J = \begin{pmatrix} 0 & -1\\1 & 0 \end{pmatrix}$ satisfies $J^{2} = -I,$ where $I$ is the $2 \times 2$ identity matrix (in fact, $J$ has eigenvalues $i$ and $-i$, but let us put that aside for the moment). Hence there really is no difference between the matrix $aI + bJ$ and the complex number $a + bi.$
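A quick numerical sanity check of $J^2 = -I$ and the eigenvalues (a NumPy sketch; the variable names are mine):

```python
import numpy as np

# The matrix J from above; J @ J should equal -I.
J = np.array([[0, -1],
              [1, 0]])
I = np.eye(2, dtype=int)

assert np.array_equal(J @ J, -I)

# Its eigenvalues are i and -i, mirroring i^2 = -1.
ev = np.linalg.eigvals(J)
assert np.allclose(np.sort_complex(ev), [-1j, 1j])
```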

  • 0
    @AkivaWeinberger That's the modern approach. Matrices were a more familiar terrain back then. (2016-01-13)
17

Look at the arithmetic operations and how they act. Under $+$ and $\times$, these matrices form a field, and we have the isomorphism $a + ib \mapsto \left[\matrix{a&-b\cr b &a}\right].$
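To make the isomorphism concrete, here is a small NumPy spot check (`to_matrix` is my own name for the map above):

```python
import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    """Matrix representing a + ib under the isomorphism above."""
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

rng = np.random.default_rng(0)
for _ in range(100):
    z = complex(*rng.normal(size=2))
    w = complex(*rng.normal(size=2))
    # The map turns complex addition into matrix addition and
    # complex multiplication into matrix multiplication.
    assert np.allclose(to_matrix(z + w), to_matrix(z) + to_matrix(w))
    assert np.allclose(to_matrix(z * w), to_matrix(z) @ to_matrix(w))
```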

  • 0
    I think this is a better answer because it points out the isomorphism. (2013-12-11)
13

As to the "where did it come from?", rather than verifying that it does work: this is a special case of a "rational representation" of a bigger collection of "numbers" as matrices with entries in a smaller collection. ("Fields" or "rings", properly, but it's not clear what our context here is.)

That is, the collection of complex numbers is a two-dimensional real vector space, and multiplication by $a+bi$ is a real-linear map of $\mathbb C$ to itself, so, with respect to any $\mathbb R$-basis of $\mathbb C$, there'll be a corresponding matrix. For example, with $\mathbb R$-basis $e_1=1,\,e_2=i$, $ (a+bi)\cdot e_1 = a+bi = ae_1+be_2 \hskip40pt (a+bi)\cdot e_2 = (a+bi)i = -b+ai = -be_1+ae_2 $ So $ \pmatrix{e_1 \cr e_2}\cdot (a+bi) \;=\; \pmatrix{a & b \cr -b & a}\pmatrix{e_1\cr e_2} $ Oop, I guessed wrong, and got the $b$ and $-b$ interchanged. Maybe using $e_2=-i$ instead will work... :)

But this is the way one finds such representations.
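The recipe can be sketched in a few lines of Python: apply multiplication by $a+bi$ to each basis vector and record the coordinates. (I use the column-vector convention here, which puts the $b$ and $-b$ where the question has them; `multiplication_matrix` is an illustrative name.)

```python
import numpy as np

def multiplication_matrix(alpha: complex) -> np.ndarray:
    """Matrix of the R-linear map x -> alpha*x on C, wrt the basis (1, i).
    Images of the basis vectors are recorded as columns (the convention
    opposite to the row-vector one above, which transposes the result)."""
    cols = []
    for e in (1, 1j):                          # basis vectors 1 and i
        image = alpha * e                      # apply the linear map
        cols.append([image.real, image.imag])  # coordinates wrt (1, i)
    return np.array(cols).T

# For alpha = 2 + 3i the columns are alpha*1 = (2, 3) and
# alpha*i = -3 + 2i = (-3, 2), giving [[2, -3], [3, 2]].
print(multiplication_matrix(2 + 3j))
```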

  • 1
    @RudytheReindeer, you are right that the $\pm b$ can be interchanged without harm, which amounts to switching $\pm i$. My "guessed wrong" was only that I was aiming to "hit" one of the two choices, but got the other one by my choices of convention. Doesn't really matter. (2015-08-19)
13

I think of a complex number as a combined scaling and 2D rotation: the absolute value $r$ is the scaling factor, and the phase $\theta$ is the rotation angle.

The same operation is described by a rotation matrix scaled by $r$: $r\begin{pmatrix}\cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}$

Since $r e^{i\theta}=r\cos \theta + ir \sin \theta = a +ib$, this matrix is exactly $\begin{pmatrix}a & -b \\ b & a \end{pmatrix}$, which is how $a+ib$ gets identified with it.
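A short NumPy check of the scaling-plus-rotation picture (variable names are mine):

```python
import numpy as np

# r * R(theta) equals [[a, -b], [b, a]] with a = r*cos(theta),
# b = r*sin(theta), i.e. the matrix of the complex number r*e^{i*theta}.
r, theta = 2.0, np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
a, b = r * np.cos(theta), r * np.sin(theta)

assert np.allclose(r * rotation, [[a, -b], [b, a]])

# Applying the matrix to a point rotates it by theta and scales it by r,
# just as multiplying by the complex number a + ib does.
p = np.array([1.0, 0.0])                 # the point 1 + 0i
z = (a + 1j * b) * (p[0] + 1j * p[1])    # complex multiplication
assert np.allclose(r * rotation @ p, [z.real, z.imag])
```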

10

I had something written up on this lying around. The sign convention is flipped relative to the question (the $-$ sits in the bottom-left rather than the top-left), but it's essentially the same; I hope it helps.

Let $M$ denote the set of such matrices. Define a function $\phi\colon M\to\mathbb{C}$ by $ \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha\end{pmatrix}\mapsto \alpha+i\beta. $ This function has inverse $\phi^{-1}$ defined by $\alpha+i\beta\mapsto\begin{pmatrix} \alpha & \beta \\ -\beta & \alpha\end{pmatrix}$, which is well defined since $\alpha+i\beta=\gamma+i\delta$ if and only if $\alpha=\gamma$ and $\beta=\delta$: a complex number determines its real and imaginary parts, hence a unique matrix of this form. So $\phi$ is invertible.

Now let $ A=\begin{pmatrix} \alpha & \beta \\ -\beta & \alpha\end{pmatrix},\qquad B=\begin{pmatrix} \gamma & \delta \\ -\delta & \gamma\end{pmatrix}. $ Then $ \phi(A+B)=\phi\begin{pmatrix} \alpha+\gamma & \beta+\delta \\ -\beta-\delta & \alpha+\gamma\end{pmatrix}=(\alpha+\gamma)+i(\beta+\delta)=(\alpha+i\beta)+(\gamma+i\delta)=\phi(A)+\phi(B). $ Also, $ \phi(AB)=\phi\begin{pmatrix} \alpha\gamma-\beta\delta & \alpha\delta+\beta\gamma \\ -\alpha\delta-\beta\gamma & \alpha\gamma-\beta\delta\end{pmatrix}=(\alpha\gamma-\beta\delta)+i(\alpha\delta+\beta\gamma)=(\alpha+i\beta)(\gamma+i\delta)=\phi(A)\phi(B). $ So $\phi$ respects addition and multiplication. Lastly, $\phi(I_2)=1$, so $\phi$ also respects the multiplicative identity. Hence $\phi$ is a field isomorphism, so $M$ and $\mathbb{C}$ are isomorphic as fields.
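The two computations above can also be verified symbolically, e.g. with SymPy (a sketch; `phi` mirrors the map defined in this answer):

```python
import sympy as sp

a, b, c, d = sp.symbols('alpha beta gamma delta', real=True)

A = sp.Matrix([[a, b], [-b, a]])
B = sp.Matrix([[c, d], [-d, c]])

def phi(M):
    # phi sends [[x, y], [-y, x]] to x + i*y, as in the answer.
    return M[0, 0] + sp.I * M[0, 1]

# phi respects addition and multiplication, symbolically.
assert sp.simplify(phi(A + B) - (phi(A) + phi(B))) == 0
assert sp.expand(phi(A * B) - phi(A) * phi(B)) == 0
```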

8

The matrix rep of $\rm\:\alpha = a+b\,{\it i}\:$ is simply the matrix representation of the $\:\Bbb R$-linear map $\rm\:x\to \alpha\, x\:$ viewing $\,\Bbb C\cong \Bbb R^2$ as vector space over $\,\Bbb R.\,$ Computing the coefficients of $\,\alpha\,$ wrt the basis $\,[1,\,{\it i}\,]^T\:$

$\rm (a+b\,{\it i}\,) \left[ \begin{array}{c} 1 \\ {\it i} \end{array} \right] \,=\, \left[\begin{array}{r}\rm a+b\,{\it i}\\\rm -b+a\,{\it i} \end{array} \right] \,=\, \left[\begin{array}{rr}\rm a &\rm b\\\rm -b &\rm a \end{array} \right] \left[\begin{array}{c} 1 \\ {\it i} \end{array} \right]$

As above, any ring may be viewed as a ring of linear maps on its additive group (the so-called left-regular representation). Informally, simply view each element of the ring as a $1\!\times\! 1$ matrix, with the usual matrix operations. This is a ring-theoretic analog of the Cayley representation of a group via permutations on its underlying set, by viewing each $\,\alpha\,$ as a permutation $\rm\,x\to\alpha\,x.$

When, as above, the ring has the further structure of an $\rm\,n$-dimensional vector space over a field, then, wrt a basis of the vector space, the linear maps $\rm\:x\to \alpha\, x\:$ are representable as $\rm\,n\!\times\!n\,$ matrices; e.g. any algebraic field extension of degree $\rm\,n.\,$ Above is the special case $\rm n=2.$
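To illustrate that this construction is not special to $\,\Bbb C$: here is the same regular representation for the degree-$2$ extension $\,\Bbb Q(\sqrt 2)$, sketched with exact rational arithmetic (the helper names are mine). Multiplication by $x + y\sqrt 2$ sends $1\mapsto x + y\sqrt2$ and $\sqrt2 \mapsto 2y + x\sqrt 2$, giving the matrix $\left[\begin{smallmatrix} x & 2y \\ y & x\end{smallmatrix}\right]$.

```python
from fractions import Fraction

def rep(x, y):
    """2x2 rational matrix of multiplication by x + y*sqrt(2) on Q(sqrt 2),
    wrt the basis (1, sqrt 2): columns are the images of the basis vectors."""
    return [[Fraction(x), 2 * Fraction(y)],
            [Fraction(y),     Fraction(x)]]

def matmul(A, B):
    """Plain 2x2 matrix multiplication over the rationals."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# (1 + sqrt 2)(3 - 2*sqrt 2) = 3 + sqrt2 - 4 = -1 + sqrt 2,
# and the matrices multiply accordingly.
assert matmul(rep(1, 1), rep(3, -2)) == rep(-1, 1)

# The matrix of sqrt 2 squares to the matrix of 2.
assert matmul(rep(0, 1), rep(0, 1)) == rep(2, 0)
```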

  • 0
    See also [this answer](http://math.stackexchange.com/a/247843/23500) for a finite field analogue in $\,\Bbb F_9 \cong \Bbb F_3[i]$. (2013-02-05)
4

The matrices $I=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ and $J=\begin{bmatrix}0&-1\\1&0\end{bmatrix}$ commute (everything commutes with $I$), and $J^2=-I$. Everything else follows from the standard properties (associativity, commutativity, distributivity, etc.) that matrix operations have.

Thus, $aI+bJ=\begin{bmatrix}a&-b\\b&a\end{bmatrix}$ behaves exactly like $a+bi$ under addition, multiplication, etc.
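This "behaves exactly like $a+bi$" claim can be checked symbolically, e.g. with SymPy (a sketch):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)
I2 = sp.eye(2)
J = sp.Matrix([[0, -1], [1, 0]])

# J^2 = -I, so aI + bJ multiplies exactly like a + bi:
assert J * J == -I2

lhs = (a * I2 + b * J) * (c * I2 + d * J)
rhs = (a * c - b * d) * I2 + (a * d + b * c) * J
assert (lhs - rhs).expand() == sp.zeros(2, 2)
```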

  • 0
    @Dr.MV: Indeed! Swapping $J$ and $J^T$ gives the same isomorphism that swapping $i$ and $-i$ does. (2016-07-08)
2

Since you put the tag quaternions, let me say a bit more about performing identifications like that:

Recall that the quaternion group $\mathcal{Q}$ consists of the elements $\{\pm1, \pm \hat{i}, \pm \hat{j}, \pm \hat{k}\}$, equipped with a multiplication satisfying the rules encoded in the cyclic diagram

$\hat{i} \rightarrow \hat{j} \rightarrow \hat{k},$

that is, $\hat{i}\hat{j} = \hat{k}$, $\hat{j}\hat{k} = \hat{i}$, $\hat{k}\hat{i} = \hat{j}$, and $\hat{i}^2 = \hat{j}^2 = \hat{k}^2 = -1$.

Now what is more interesting is that you can let $\mathcal{Q}$ span a four-dimensional real vector space with basis $\{1,\hat{i},\hat{j},\hat{k}\}$, equipped with an $\Bbb{R}$-bilinear multiplication map that satisfies the rules above. You can also define the (squared) norm of a quaternion $a + b\hat{i} + c\hat{j} + d\hat{k}$ as

$||a + b\hat{i} + c\hat{j} + d\hat{k}|| = a^2 + b^2 + c^2 + d^2.$

Now if you consider $\mathcal{Q}^{\times}$, the set of all unit quaternions, you can identify $\mathcal{Q}^{\times}$ with $\textrm{SU}(2)$ both as a group and as a topological space. How do we do this identification? Well, it's not very hard. Recall that

$\textrm{SU}(2) = \left\{ \left(\begin{array}{cc} a + bi & -c + di \\ c + di & a-bi \end{array}\right) |\hspace{3mm} a,b,c,d \in \Bbb{R}, \hspace{3mm} a^2 + b^2 + c^2 + d^2 = 1 \right\}.$

So you now make an Ansatz (German for an educated guess): the identification we are going to make is via the map $f$ that sends a quaternion $a + b\hat{i} + c\hat{j} + d\hat{k}$ to the matrix $\left(\begin{array}{cc} a + bi & -c + di \\ c + di & a-bi \end{array}\right).$

It is easy to see that $f$ is a well-defined group isomorphism by an algebra bash, and it is also clear that $f$ is a homeomorphism. In summary, the point I wish to make is that these identifications give us a useful way to interpret things. For example, instead of interpreting $\textrm{SU}(2)$ as boring old matrices that you say "meh" to, you now have a geometric understanding of what $\textrm{SU}(2)$ is. You can think about each matrix as being a point on the sphere $S^3$ in 4-space! How rad is that?

On the other hand, when you say $\Bbb{R}^4$ now has basis elements $\{1,\hat{i},\hat{j},\hat{k}\}$, you have given $\Bbb{R}^4$ a multiplicative structure, and it becomes not just an $\Bbb{R}$-module but a module over itself.
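A sketch of the "algebra bash" in NumPy (helper names `qmul` and `f` are mine; note that the placement of the minus signs in the matrix depends on the chosen multiplication convention — the variant below is the one compatible with $\hat i\hat j = \hat k$):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) tuples,
    with the convention i*j = k, j*k = i, k*i = j."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def f(q):
    """2x2 complex matrix of a quaternion (a sign variant of the matrix
    in the answer, matching the i*j = k convention used in qmul)."""
    a, b, c, d = q
    return np.array([[ a + b*1j, c + d*1j],
                     [-c + d*1j, a - b*1j]])

p, q = (1.0, 2.0, -1.0, 0.5), (0.5, -1.0, 3.0, 2.0)

# f is multiplicative, and det(f(q)) is the squared norm a^2+b^2+c^2+d^2,
# so unit quaternions land in SU(2).
assert np.allclose(f(qmul(p, q)), f(p) @ f(q))
assert np.isclose(np.linalg.det(f(p)), sum(x * x for x in p))
```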

  • 0
    Actually I asked this question because I want to know how to get the real-number matrix form of quaternions, thanks. (2012-08-12)