4

The question is: does the fact that $$ \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0& 0\\ 0 & 1 &0 \end{array}\right)^{2}=0, \qquad \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0& 0\\ 1 & 0 &0 \end{array}\right)^{2}=0$$ influence the irreducible representations of $SU(3)$?

Background: To simplify, I am working with the case of $SU(3)$. For any $\mathfrak{sl}_{n}\mathbb{C}$, every irreducible representation is generated by repeatedly applying negative root elements to the highest weight vector $v$ (standard reference: Serre). But when I try to draw the weight diagram starting from a given highest weight vector $v$ for $H$, of weight $(10, 20)$ say, repeatedly applying a negative root element such as $ \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0& 0\\ 0 & 1 &0 \end{array}\right)$ always gives me zero after two steps, because $ \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0& 0\\ 0 & 1 &0 \end{array}\right)^{2}=0$. Elementary analysis shows that the images of $v$ under the actions of $ \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0& 0\\ 0 & 1 &0 \end{array}\right)$ and $ \left(\begin{array}{ccc} 0 & 0 & 0\\ 1 & 0& 0\\ 0 & 0 &0 \end{array}\right)$ generate $V$. So $V$'s weight diagram must be very small, which is counter-intuitive to me. I was hoping I could claim that there exists an irreducible subrepresentation of $\mathfrak{sl}_{2}\mathbb{C}$ by a similar argument, applying either of the matrices repeatedly until the image vanishes. I think I must be confused about something very basic, like the embedding of $\mathfrak{sl}_{3}\mathbb{C}$ in $\mathfrak{gl}_{n}\mathbb{C}$.
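
To make the issue concrete, here is a minimal numerical sketch (assuming `numpy` is available; it is not part of the original question). On the tensor square $\mathbb{C}^3\otimes\mathbb{C}^3$ a Lie algebra element $X$ acts by $X\otimes I + I\otimes X$, whose square is $X^2\otimes I + 2\,X\otimes X + I\otimes X^2 = 2\,X\otimes X$, and this is nonzero even though $X^2=0$ as a $3\times 3$ matrix.

```python
import numpy as np

# E32: the matrix from the question, with a 1 in the (3,2) position; E32 @ E32 == 0.
E32 = np.zeros((3, 3))
E32[2, 1] = 1.0

I3 = np.eye(3)

# On the defining representation C^3 the element acts by E32 itself,
# but on C^3 (x) C^3 a Lie algebra element X acts by X (x) I + I (x) X.
rho_E32 = np.kron(E32, I3) + np.kron(I3, E32)

print(np.allclose(E32 @ E32, 0))          # True: the 3x3 matrix squares to zero
print(np.allclose(rho_E32 @ rho_E32, 0))  # False: its action on the 9-dim rep does not
```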

  • 5
    There seems to be a lot of confusion here. Your second matrix (with a "1" in the 3,1-position), call it $E_{3,1}$, does not act as $0$ (in general). Next, your first matrix, call it $E_{3,2}$, does not act as $0$ when applied twice. It is true that as a matrix "$E_{3,2}^2=0$", but this matrix only represents the action of $\mathfrak{sl}_3(\mathbb{C})$ on $\mathbb{C}^3$ (the standard rep.). Be careful! (2012-01-17)
  • 2
    Your question is not clearly stated (what do you mean by a highest weight vector of $(10,20)$ for instance), but the main point of confusion seems to be that you think that acting by a matrix in $SU(3)$ means multiplying a vector by that matrix. That is true in the *defining representation* of $SU(3)$ on the space $\mathbb C^3$, but not in general representations (where the space acted upon is usually not of dimension $3$, so just multiplying by a matrix makes no sense). (2012-01-17)
  • 0
    The roots of $\mathfrak{sl}_3$ are $\pm \alpha_1, \pm \alpha_2, \pm (\alpha_1+\alpha_2)$. $E_{3,1}$ is in the $-\alpha_1-\alpha_2$ root space and $E_{3,2}$ is in the $-\alpha_2$ root space. (2012-01-17)
  • 0
    @BillCook: Yes, I mean precisely what you suggested. My confusion comes from the fact that we have the commutator relationship, which generates the elements of the maximal toral subalgebra. Now, by a representation of $\mathfrak{sl}_{n}\mathbb{C}$ we are supposed to have a homomorphism from $\mathfrak{sl}_{n}\mathbb{C}$ to $\mathfrak{gl}(V)$ for some space $V$. So I feel (without appropriate justification, as it is not part of the definition) that matrix multiplication should satisfy $\rho(xy)=\rho(x)\rho(y)$ as well. But it seems this is not true. (2012-01-17)
  • 0
    @MarcvanLeeuwen: Thanks a lot for the clarification. By $(10, 20)$ I mean that $v\in V$ has eigenvalues $10$ and $20$ for the matrices with diagonals $(1,-1,0)$ and $(0,1,-1)$ respectively. You are right to point out that I was confused. (2012-01-17)
  • 0
    Exactly, each representation will give rise to *different* matrices. So while for the standard representation ($\mathfrak{sl}_3(\mathbb{C})$ acting on $\mathbb{C}^3$) you have $x \cdot y \cdot {\bf v} = (xy) \cdot {\bf v}$, in general there is no guarantee. :) (2012-01-17)
  • 2
    @ChangweiZhou, no, the identity $\rho(xy)=\rho(x)\rho(y)$ does not always hold. The Lie bracket corresponds to the commutator, though, so $$\rho([x,y])=\rho(x)\rho(y)-\rho(y)\rho(x).$$ (2012-01-18)
  • 0
    @MarcvanLeeuwen: I want to ask whether the weight $(m,n)$ (for the diagonals $h_{1}=(1,-1,0)$, $h_{2}=(0,1,-1)$) is sent by the Weyl group to $(m-n,3n)$ and $(-m,m+n)$. I worked this out myself, but I find it hard to believe. (2012-01-19)
  • 0
    @ChangweiZhou: I think you mean $(-m,m+n)$ and $(m+n,-n)$ are in the Weyl group orbit of $(m,n)$ (the three remaining elements are $(-m-n,m)$, $(n,-m-n)$, and $(-n,-m)$); a small orbit computation illustrating this is sketched just after these comments. Applying nilpotent elements for (negated) simple roots lowers the weight by adding $(-2,+1)$ respectively $(+1,-2)$. You can compute (among other things) full characters at the [LiE online service](http://www-math.univ-poitiers.fr/~maavl/LiE/form.html); the type is A2. (2012-01-19)
  • 0
    @MarcvanLeeuwen: Thank you. Obviously the things I worked out are well-known to other people. I have corrected my mistake. (2012-01-19)
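
The orbit claimed in the comment above can be checked mechanically. The following is a small sketch (plain Python; the helper name is chosen here for illustration) that applies the two simple reflections of the $A_2$ Weyl group in fundamental-weight coordinates, $s_1(a,b)=(-a,\,a+b)$ and $s_2(a,b)=(a+b,\,-b)$, and collects the orbit of a weight $(m,n)$.

```python
def weyl_orbit_A2(m, n):
    """Orbit of the weight (m, n), in fundamental-weight coordinates, under the A2 Weyl group."""
    def s1(w):  # reflection in the simple root alpha_1
        return (-w[0], w[0] + w[1])

    def s2(w):  # reflection in the simple root alpha_2
        return (w[0] + w[1], -w[1])

    orbit, frontier = {(m, n)}, [(m, n)]
    while frontier:
        w = frontier.pop()
        for s in (s1, s2):
            v = s(w)
            if v not in orbit:
                orbit.add(v)
                frontier.append(v)
    return orbit

# The six elements listed in the comment above, for the weight (m, n) = (10, 20):
print(sorted(weyl_orbit_A2(10, 20)))
# [(-30, 10), (-20, -10), (-10, 30), (10, 20), (20, -30), (30, -20)]
```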

2 Answers

7

The short answer to your confusion is that what you think should hold, $\rho(xy)=\rho(x)\rho(y)$, does indeed hold for the representation of the Lie group, but not for the representation of the Lie algebra (indeed the product $xy$ makes no sense in a general Lie algebra). What holds for the Lie algebra is obtained by differentiation from what holds for the Lie group, and this gives the commutator relation of Jyrki's comment. Also note that for $G=SU(3)$, the Lie algebra $\mathfrak{sl}_{3}\mathbb{C}$ is the complexified Lie algebra; the nilpotent matrices you wrote do not live in the real Lie algebra of the compact real group $SU(3)$. The fact that everything (I mean the Lie group and algebra, not the universal enveloping algebra) embeds in the same matrix ring is really a red herring, I think.
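
To see the difference concretely, here is a small numerical sketch (assuming `numpy`; the adjoint representation is used only as a convenient representation other than the defining one): the commutator relation $\rho([x,y])=\rho(x)\rho(y)-\rho(y)\rho(x)$ holds, while $\rho(xy)=\rho(x)\rho(y)$ fails, and the matrix product $xy$ need not even lie in the Lie algebra.

```python
import numpy as np

rng = np.random.default_rng(0)

def ad(X):
    """Adjoint action of X: Y |-> XY - YX, viewed as a linear map on 3x3 matrices."""
    return lambda Y: X @ Y - Y @ X

def traceless(M):
    """Project a 3x3 matrix onto sl_3 by removing its trace."""
    return M - np.trace(M) / 3 * np.eye(3)

x = traceless(rng.standard_normal((3, 3)))
y = traceless(rng.standard_normal((3, 3)))
Z = rng.standard_normal((3, 3))          # a test vector in the adjoint representation

# The bracket is respected: ad([x, y]) = ad(x) ad(y) - ad(y) ad(x).
bracket = x @ y - y @ x
print(np.allclose(ad(bracket)(Z), ad(x)(ad(y)(Z)) - ad(y)(ad(x)(Z))))   # True

# ...but the matrix product is not: ad(x y) != ad(x) ad(y) in general,
# and x @ y need not even be traceless, i.e. need not lie in sl_3 at all.
print(np.allclose(ad(x @ y)(Z), ad(x)(ad(y)(Z))))                        # False
```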

6

For a representation $\pi$ of a Lie algebra $\mathfrak g$, it is natural to want $\pi(x)\pi(y)=\pi(x\cdot y)$ for some notion of "product", but there are issues. Of course, the product $\pi(x)\pi(y)$ in the endomorphism algebra of the representation space is the usual product of that *associative* algebra. The product in $\mathfrak g$ cannot be the Lie bracket $[x,y]$, because what $\pi$ respects is the bracket, and the bracket is sent to the commutator rather than to the product: $\pi([x,y])=[\pi(x),\pi(y)]=\pi(x)\pi(y)-\pi(y)\pi(x)$.

But we want a product "on $\mathfrak g$" that maps to the composition of endomorphisms on a representation space. Indeed, there is a unique associative algebra $U\mathfrak g$ into which $\mathfrak g$ embeds linearly, such that every Lie algebra homomorphism $f:\mathfrak g \rightarrow A$ to an associative algebra $A$ with Lie bracket $[a,b]=ab-ba$ factors through $U\mathfrak g$ by an associative algebra map $F:U\mathfrak g\rightarrow A$. This is the "universal enveloping algebra" of $\mathfrak g$. (It is characterized by this mapping property, and usually constructed as a quotient of the "tensor algebra" of $\mathfrak g$.)
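
For concreteness (a standard example, not specific to this thread): for $\mathfrak{sl}_2\mathbb{C}$ with its usual basis $e,f,h$ satisfying $[e,f]=h$, $[h,e]=2e$, $[h,f]=-2f$, the universal enveloping algebra is the quotient of the tensor algebra by the corresponding bracket relations,
$$U\mathfrak{sl}_2 \;=\; T(\mathfrak{sl}_2)\,\big/\,\big(\,e\otimes f - f\otimes e - h,\;\; h\otimes e - e\otimes h - 2e,\;\; h\otimes f - f\otimes h + 2f\,\big),$$
so inside $U\mathfrak{sl}_2$ one may rewrite $ef=fe+h$, but $ef$ is a genuinely new element of the algebra, not the product of two $2\times 2$ matrices.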

Then the conceptual trap is to think that "surely" for Lie algebras $\mathfrak g$ that consist of matrices, the multiplication in $U\mathfrak g$ is matrix multiplication. "What else could it be?" But it is not matrix multiplication (as the examples in the comments illustrate). The multiplication in the universal enveloping algebra is not much related to matrix multiplication; it cannot be, because $U\mathfrak g$ needs to be able to map to many large associative algebras. The Poincaré-Birkhoff-Witt theorem proves decisively that the enveloping algebra is infinite-dimensional...
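
For reference, the standard statement (included here only to make "infinite-dimensional" concrete): if $x_1,\dots,x_n$ is an ordered basis of $\mathfrak g$, then the monomials
$$x_1^{a_1} x_2^{a_2}\cdots x_n^{a_n},\qquad a_1,\dots,a_n\ge 0,$$
form a basis of $U\mathfrak g$, and there are infinitely many of them as soon as $\mathfrak g\neq 0$, so $U\mathfrak g$ is infinite-dimensional.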

(I remember driving myself crazy over this sort of issue...)