16

$\def\sign{\operatorname{sign}}$ For homework, I am trying to show that $\sign:S_n \to \{\pm 1\}$ is multiplicative, i.e. that for any permutations $\sigma_1,\sigma_2$ we have $$\sign(\sigma_1 \sigma_2) = \sign\sigma_1 \sign\sigma_2.$$

The definition for $\sign$ that I am using is that if $\sigma = \gamma_1\cdots \gamma_k$ is the cycle decomposition of $\sigma \in S_n$ and $\ell_1,\ldots,\ell_k$ are the cycle lengths of $\gamma_1,\ldots,\gamma_k$ respectively, then

$$\sign(\sigma):= (-1)^{\ell_1-1}\cdots(-1)^{\ell_k-1}.$$
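To make the definition concrete, here is a minimal sketch of computing $\sign$ exactly this way, assuming a permutation of $\{0,\ldots,n-1\}$ is represented as a dict mapping $i \mapsto \sigma(i)$ (the representation is my choice, not part of the problem):

```python
def cycle_decomposition(perm):
    """Return the cycles of perm (a dict i -> sigma(i)) as lists."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        cycles.append(cycle)
    return cycles

def sign(perm):
    """(-1)^(l_1 - 1) * ... * (-1)^(l_k - 1) over the cycle lengths l_j."""
    s = 1
    for cycle in cycle_decomposition(perm):
        s *= (-1) ** (len(cycle) - 1)
    return s

tau = {0: 1, 1: 0, 2: 2}     # transposition (0 1) in S_3
rho = {0: 1, 1: 2, 2: 0}     # 3-cycle (0 1 2)
print(sign(tau), sign(rho))  # -1 1
```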

I showed first that the formula holds for two transpositions. Then I showed that it holds for a transposition and a cycle.

However, I got stuck trying to show that the formula holds for a transposition and a product of two cycles, i.e. $\sigma_1 = \tau$ and $\sigma_2 = \gamma_1\gamma_2$. I feel like this case is much more complicated than the others, which makes me think I am taking the wrong approach.

If $\tau,\gamma_1,\gamma_2$ are all disjoint then the formula holds trivially because then $\gamma_1\gamma_2\tau$ is the cycle decomposition of $\sigma_1\sigma_2$. Otherwise they are not all disjoint, at which point it seems to get complicated very quickly and I don't know how to proceed.

Would someone please help me understand the best approach to this proof?

  • 1
    Is there a reason you are starting from this definition in terms of disjoint cycle length? That strikes me as a very difficult (and odd) place to start.2011-11-08
  • 7
    A well-deserved (+1) for effort.2011-11-08
  • 0
    @Bill Cook: I don't have a good reason for it. This was the definition we were given in class and it has been an ample source of confusion for me and my peers. We did prove first that every permutation has a unique cycle decomposition, so at least we know that $\operatorname{sign}$ is well defined.2011-11-08
  • 2
    Please change the target group of $\mathrm{sgn}$ on your first line. ;)2011-11-08
  • 0
    Thanks, big typo on my part!2011-11-08
  • 0
    I would think the simplest thing is to show that the parity of a permutation is well defined, and that your definition of sign agrees with parity. See, e.g., [this question](http://math.stackexchange.com/questions/46403/alternative-proof-that-the-parity-of-permutation-is-well-defined/46407#46407)2011-11-08

2 Answers

4

It's not really that difficult to show it for a transposition that straddles two disjoint cycles. The key is to see that if we have an $n$-element cycle disjoint from a $k$-element cycle and compose with a transposition that exchanges one element of each cycle, then we always get an $(n+k)$-element cycle -- there's essentially only one way the transposition can connect the cycles. And then $(-1)^{n+k-1}=-1\times (-1)^{n-1}(-1)^{k-1}$.
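The "gluing" step can be checked concretely. A minimal sketch, again with the (assumed, illustrative) dict representation $i \mapsto \sigma(i)$: a 3-cycle and a disjoint 2-cycle, composed with a transposition straddling both, merge into a single 5-cycle.

```python
def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return {i: p[q[i]] for i in q}

def cycles(perm):
    """Cycle decomposition of perm (dict i -> sigma(i)) as lists."""
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        c, i = [], start
        while i not in seen:
            seen.add(i)
            c.append(i)
            i = perm[i]
        out.append(c)
    return out

# A 3-cycle (0 1 2) and a disjoint 2-cycle (3 4) in S_5.
gamma1 = {0: 1, 1: 2, 2: 0, 3: 3, 4: 4}
gamma2 = {0: 0, 1: 1, 2: 2, 3: 4, 4: 3}
tau    = {0: 0, 1: 1, 2: 4, 3: 3, 4: 2}   # transposition (2 4), one element in each cycle

merged = compose(tau, compose(gamma1, gamma2))
print([c for c in cycles(merged) if len(c) > 1])  # [[0, 1, 4, 3, 2]]: one 5-cycle
```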

Together with showing it for a transposition and a cycle that contains both transposed elements (which I imagine is exactly the reverse of the above case), this is all you need to show (by induction) that if a permutation is a product of $n$ transpositions, then its sign is $(-1)^n$.

For a general product of permutations, just decompose each factor into transpositions (you know you can always do this, right?) and count transpositions in each of them.
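For the counting step, note that a cycle of length $\ell$ factors into $\ell-1$ transpositions, e.g. $(a_1\,a_2\,\ldots\,a_m) = (a_1\,a_m)(a_1\,a_{m-1})\cdots(a_1\,a_2)$, so a permutation with cycle lengths $\ell_1,\ldots,\ell_k$ uses $\sum_j(\ell_j-1)$ transpositions, which is exactly what the cycle-length definition exponentiates. A sketch (dict representation $i \mapsto \sigma(i)$ is an assumption for illustration):

```python
def transposition_count(perm):
    """Number of transpositions used when each cycle of length l
    is factored into l - 1 transpositions."""
    seen, count = set(), 0
    for start in perm:
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            length += 1
            i = perm[i]
        count += length - 1
    return count

perm = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}      # (0 1 2)(3 4): 2 + 1 = 3 transpositions
print((-1) ** transposition_count(perm))   # -1, matching the cycle-length formula
```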

  • 0
    I proved it for a product of two disjoint cycles and a transposition, but I don't know how to extend it if the cycles are not disjoint. Yes I could decompose them into disjoint cycles but then I have increased the number of factors so I can't use induction.2011-11-08
  • 1
    Your induction (for my middle paragraph) should be on the number of transpositions. Then you're free to do a full disjoint cycle decomposition of the product of the first $n-1$ transpositions in the beginning of the induction step.2011-11-08
  • 0
    +1: This is my favorite way of proving this result. @Kb100: Think of it this way. Write $\sigma_2$ as a product of disjoint cycles, and $\sigma_1$ as a product of transpositions. Each time you multiply a product of disjoint cycles with a transposition (and represent the result as a product of disjoint cycles) you either "glue together" two cycles or "split one cycle into two".2011-11-08
  • 0
    I really appreciate the help you guys are giving me, but I still don't seem to understand. The question is posed in such a way that I think I should be able to use the result of the previous questions to prove it. How does knowing the formula for a transposition and two disjoint cycles help me for a transposition and an arbitrary permutation? Of course an arbitrary permutation can be written as a product of disjoint cycles, so is it possible to induct on the number of disjoint cycles in the product? If so how would I go about it?2011-11-09
  • 1
    If you have $\tau\sigma$ where $\tau$ is a transposition and $\sigma$ an arbitrary permutation, then decompose $\sigma$ into disjoint cycles $\gamma_1\gamma_2\cdots\gamma_k$. There can be at most two cycles that contain the two elements $\tau$ exchanges; arrange the $\gamma$s such that those are $\gamma_1$ and $\gamma_2$. Now re-parenthesize to $(\tau\gamma_1\gamma_2)(\gamma_3\cdots\gamma_k)$ and apply the two-cycle reasoning to the first parenthesis. Because all of $\tau\gamma_1\gamma_2$ leave all of the _other_ cycles alone, you can then decompose their product separately.2011-11-09
1

Henning's answer is a great way to go. I thought you might find this alternative viewpoint of interest as well...

Each permutation can be realized by a permutation matrix (let $\sigma \in S_n$ and take the $n\times n$ identity and then send row $i$ to row $\sigma(i)$). It's not hard to show that if $A$ and $B$ are matrices corresponding to $\sigma$ and $\tau$, then $AB$ corresponds to $\sigma \tau$.

Now recall that swapping rows changes the sign of the determinant. From here it's not hard to show that the sign of a permutation is just the determinant of the corresponding matrix.

Now the homomorphism property of $\mathrm{sign}$ comes from: $\mathrm{det}(AB)=\mathrm{det}(A)\mathrm{det}(B)$ (the fact that the determinant is a homomorphism).
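A minimal sketch of this viewpoint, using only a hand-rolled cofactor-expansion determinant so nothing outside the standard library is assumed (the dict representation $i \mapsto \sigma(i)$ is likewise my choice for illustration):

```python
def permutation_matrix(perm):
    """Matrix A with A[i][j] = 1 iff perm(j) = i, i.e. A e_j = e_perm(j)."""
    n = len(perm)
    return [[1 if perm[j] == i else 0 for j in range(n)] for i in range(n)]

def det(M):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        if entry:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * entry * det(minor)
    return total

sigma = {0: 1, 1: 2, 2: 0}   # 3-cycle, sign +1
tau   = {0: 1, 1: 0, 2: 2}   # transposition, sign -1
A, B = permutation_matrix(sigma), permutation_matrix(tau)
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(det(A), det(B), det(AB))  # 1 -1 -1: det(AB) = det(A)det(B)
```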

  • 0
    The theory of permutation sign is used to build the theory of determinants. The last two paragraphs rely on facts about permutation sign having been used to prove the corresponding facts about determinants.2011-11-08
  • 0
    @zyx Yes and no. It depends on where you want to start. Either can be built from the other.2011-11-08
  • 1
    @zyx: that's one approach commonly taken, but far from the only one; for instance, the definition of the determinant as a (signed) volume for the corresponding linear transform is a perfectly good one, can be used to prove most of the facts about the determinant, and makes the proof that swapping rows changes the sign of the determinant easy.2011-11-08
  • 0
    ... or you can define the determinant inductively by expansion by minors and prove facts about it from that.2011-11-08
  • 1
    @Steven, other than the embedding of permutations into matrices, I don't think the argument can be carried out without the logical circle of proving the same facts about permutation sign in order to have the necessary facts about matrices or volumes. For example, multilinear algebra proves without calculation that determinant is multiplicative, but to show that a swap has determinant -1 (and not zero) one needs a basis for the top exterior power and the combinatorics of inversions duly makes its appearance.2011-11-08
  • 2
    @p.s. True. That is the "there's no free lunch" principle in action. Using determinants here makes some facts about permutations more accessible, but in many ways just pushes difficulties off to another subject. If you're going to prove everything from the ground up, you must *pay* at some point :)2011-11-08
  • 0
    @zyx By the way, I'm not advocating that this is the *right* way to build up the theory of permutations. When teaching this stuff, I usually start with the definition of sign in terms of transpositions and then build determinants on top of that stuff.2011-11-08
  • 0
    @zyx Also, as Henning mentioned, you can define determinants and prove basic facts about elementary operations without using permutations at all (just using expansion by minors). Circularity is easily avoided.2011-11-08
  • 1
    @Henning and Bill, the minor expansion of the determinant is the same as defining sign to be the parity of the number of inversions in the permutation. Proving the necessary properties of Det will simply juggle signs and indices in a manner that is identical to the gymnastics of proving that (-1)^(number of inversions) has the claimed properties of permutation sign, but with the added notational complication of working with all the entries of the matrix.2011-11-08
  • 0
@zyx Last post on this issue: I'm not saying one *should* define things this way, only that one *can*. I have taught linear algebra classes and rigorously proved all of the basic facts about determinants using expansions by minors and never once mentioning permutations. I agree it is not pretty or efficient, but it *can* be done. This has the advantage of avoiding the discussion of permutations which some audiences (i.e. engineers) tend to appreciate.2011-11-08
  • 0
    I don't think that replacing the word "permutation" by the words "n-tuple of indices" counts as avoiding mention of permutations, except in a superficial linguistic sense. (Also, I see no indication that anybody commented about *should* or interpreted your words in that light.)2011-11-08