
I vaguely know that by looking at the inversions of a permutation, you can write down the reduced word expressing the permutation as a product of adjacent transpositions $s_i = (i,i+1)$. However, I am a little unclear on what order you check the inversions, or if there is a simple way to do it without multiplying your original permutation by all the transpositions you've found so far.

Is there a simple algorithmic description of how to convert a permutation's inversion set to its reduced word?

Secondly, I'd actually like to do this in the signed permutation group (whose elements are monomial matrices with nonzero entries taken from $\{1,-1\}$; a monomial matrix is like a permutation matrix except the nonzero entries don't have to be $1$).

I'm a little unclear on when you look for the sign changes, but possibly this is due in large part to not having a specific algorithm for the plain permutations.

Is there a simple algorithmic description of how to convert a signed permutation's inversion set and positions of $-1$s to its reduced word?

Example:

The permutation $\pi = (1,2,3) = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{bmatrix}$ has inversion set $\{ \{1,3\}, \{2,3\} \}$ since $1 < 3$ but $\pi(3) < \pi(1)$, and $2 < 3$ but $\pi(3) < \pi(2)$. The $\{2,3\}$ is pretty clearly caused by $(2,3)$, and so we need to multiply by something to invert $\{1,3\}$. Since we already know $(2,3)$ is used, that moves the $3$ to a $2$, and $\{1,2\}$ is caused by $(1,2)$. Now, depending on the order in which you apply your functions, the product of $(1,2)$ and $(2,3)$ is $(1,2,3)$.

How did I know to use $\{2,3\}$ first? Should I really transform the inversion set by every adjacent transposition I discover?
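(In case it helps, here is a tiny Python sketch, with my own naming, of how I am computing the inversion set above.)

```python
# A small sketch of the inversion-set computation, with the permutation given
# in one-line notation, e.g. pi = [2, 3, 1] meaning pi(1)=2, pi(2)=3, pi(3)=1.

def inversion_set(pi):
    n = len(pi)
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if pi[i - 1] > pi[j - 1]}

print(inversion_set([2, 3, 1]))   # {(1, 3), (2, 3)} up to ordering, as in the example
```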

What if it were $\sigma = (1,\bar 2,3) = \begin{bmatrix} . & -1 & . \\ . & . & 1 \\ 1 & . & . \end{bmatrix}$, with the same inversion set as before but with the $2$nd column negated? When do we take care of the negative?

  • This is a special case of a length reduction in crystallographic finite reflection groups. A useful way of expressing this in terms of the corresponding root system goes as follows: if $\sigma(\alpha)<0$ for some simple root $\alpha$, then $\ell(\sigma s_\alpha)=\ell(\sigma)-1$. Rinse. Repeat. As @Henning pointed out, in the case of a root system of type $A_n$ this leads to bubble sorting. The other root systems lead to a slight generalization. If you also allow reflections with respect to hyperplanes determined by non-simple roots, then you get analogues of shell sorting. (2012-07-27)

1 Answer


While counting inversions is a great way to calculate the (Coxeter) length, it turns out a related notion called descents is better for factorizing the element. (Note that my answer is basically unrelated to “inversion length”, which appears to be a hot topic in computational biology).

For a permutation $\sigma$, the Coxeter length $\ell(\sigma)$ is the length of the shortest sequence of adjacent transpositions $s_i=(i,i+1)$ whose product is $\sigma$. For instance $\ell(s_1 \cdot s_2 \cdot s_3) = 3$, but $\ell(s_1 \cdot s_1 \cdot s_2) = \ell(s_2) = 1$. For a signed permutation (a permutation of $\{\pm1,\ldots,\pm n\}$ such that $\sigma(-i)=-\sigma(i)$), we define the Coxeter length similarly, except there is now one more generator allowed: $s_0 = [\bar1]$ which takes $1$ to $-1$ and fixes $i$ if $|i| \geq 2$.
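If it helps to have something concrete to experiment with, here is a minimal Python sketch (my own naming, not from any library) that multiplies out a word in these generators; I use one-line notation $[\sigma(1),\ldots,\sigma(n)]$ throughout.

```python
# A minimal sketch: multiply out a word in the generators s_0, s_1, ..., s_{n-1}
# into a signed permutation in one-line notation [sigma(1), ..., sigma(n)].

def perm_from_word(word, n):
    sigma = list(range(1, n + 1))          # start from the identity
    for i in word:                         # right-multiply by each generator in turn,
        if i == 0:                         # so the result is s_{i_1} s_{i_2} ... s_{i_k}
            sigma[0] = -sigma[0]           # s_0 flips the sign of the entry in position 1
        else:
            sigma[i - 1], sigma[i] = sigma[i], sigma[i - 1]  # s_i swaps positions i, i+1
    return sigma

print(perm_from_word([1, 2, 3], 4))     # s_1 s_2 s_3 = [2, 3, 4, 1], a length-3 element
print(perm_from_word([1, 1, 2], 3))     # equals perm_from_word([2], 3) == [1, 3, 2]
```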

The Coxeter length can be calculated from so-called inversion sets:

$\begin{array}{rcl} \ell(\sigma) &=& \left| \{(i,j) : 1 \leq i < j \leq n,\ \sigma(i) > \sigma(j) \}\right| \\ &+& \left| \{(i,j) : 1 \leq i \leq j \leq n,\ \sigma(-i) > \sigma(j) \} \right| \end{array}$ where the bottom line is defined to be $0$ for a plain permutation (which agrees with the value calculated from the signed permutation that agrees with $\sigma$ on $\{1,\ldots,n\}$).

Any element in the first of those inversion sets is a place where the permutation is “out of order”. One can either swap $i$ and $j$ on the input side, or swap $\sigma(i)$ and $\sigma(j)$ on the output side.
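Computing the length directly from these two sets is straightforward; here is a short Python sketch (again my own naming, with $\sigma$ in one-line notation and signed entries allowed):

```python
# A sketch of the length formula above: count the two kinds of inversion pairs.
# sigma is one-line notation with signed entries, e.g. [2, 3, 1] or [-2, 3, 1].

def coxeter_length(sigma):
    n = len(sigma)

    def w(i):                       # extend to negative arguments: w(-i) = -w(i)
        return sigma[i - 1] if i > 0 else -sigma[-i - 1]

    inv = sum(1 for i in range(1, n + 1) for j in range(i + 1, n + 1) if w(i) > w(j))
    neg = sum(1 for i in range(1, n + 1) for j in range(i, n + 1) if w(-i) > w(j))
    return inv + neg                # neg is 0 for a plain (unsigned) permutation

print(coxeter_length([2, 3, 1]))    # 2, the pi from the question
print(coxeter_length([-2, 3, 1]))   # 3, a signed example
```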

For unsigned permutations this leads to a fairly obvious factorization method. The key observation is that we can always find an inversion where the two numbers are adjacent. On the output side this seems a bit fantastical, but on the input side it is obvious: in order to check if a sequence is sorted, you just move along checking if $\sigma(i) < \sigma(i+1)$. If all of those check out, then the transitive property of $<$ implies everything checks out (and so you have the identity permutation!). Otherwise we have our magical $\{i,i+1\}$ we needed to pull off one factor of $s_i$ on the input side.

Thus we define the (input-side) descent set to be: $\newcommand{\des}{\operatorname{des}} \des(\sigma) = \{ i : 1 \leq i \leq n-1,\ \sigma(i) > \sigma(i+1) \}$

Then $\ell( s_i \sigma ) < \ell(\sigma)$, that is, an input-side $s_i$ gets cancelled by a hidden $s_i$ that is already there, if and only if $i \in \des(\sigma)$. Thus every reduced factorization of $\sigma$ is found by repeatedly pulling out a generator $s_i$ with $i \in \des(\sigma)$ and replacing $\sigma$ with $s_i \cdot \sigma$. [The actual replacement need not be done explicitly in code; compare partial pivoting in elimination to explicitly moving the rows around in memory.]
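In code this might look like the following sketch (my own Python, not code from the book), building the word one descent at a time:

```python
# A sketch of the descent-peeling factorization for plain permutations.
# sigma is one-line notation, e.g. [2, 3, 1]; the returned list of indices i
# is a reduced word in the s_i (read in the order your convention dictates).

def reduced_word(sigma):
    w = list(sigma)
    word = []
    while True:
        descents = [i for i in range(1, len(w)) if w[i - 1] > w[i]]
        if not descents:
            return word                       # w is now the identity
        i = descents[0]                       # any descent works; every run gives a reduced word
        w[i - 1], w[i] = w[i], w[i - 1]       # pull off one s_i on the input side
        word.append(i)

print(reduced_word([2, 3, 1]))    # [2, 1]: two letters, matching Coxeter length 2
```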

For signed permutations, one gets the “obvious” thing (which is extremely close to my suspicion, but my suspicion radically mishandled negative numbers):

$\des(\sigma) = \{ i : 0 \leq i \leq n-1,\ \sigma(i) > \sigma(i+1) \}$

where we extend the signed permutation to $\{-n,\ldots,n\}$ by defining $\sigma(0)=0$. Again, this easily becomes a full factorization algorithm: replace $\sigma$ by $s_i \sigma$ for some $i \in \des(\sigma)$ until $\des(\sigma)$ is empty.
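A sketch of the signed version, which differs only by the extra check at $i=0$ (again my own Python, same assumptions as before):

```python
# A sketch of the same loop for signed permutations: sigma(0) = 0 is built in,
# so a descent at 0 simply means sigma(1) < 0 and is removed by s_0.

def signed_reduced_word(sigma):
    w = list(sigma)
    word = []
    while True:
        descents = ([0] if w[0] < 0 else []) + \
                   [i for i in range(1, len(w)) if w[i - 1] > w[i]]
        if not descents:
            return word                        # w is now the identity
        i = descents[0]
        if i == 0:
            w[0] = -w[0]                       # pull off s_0: flip the sign in position 1
        else:
            w[i - 1], w[i] = w[i], w[i - 1]    # pull off s_i: swap positions i, i+1
        word.append(i)

print(signed_reduced_word([2, 3, 1]))     # [2, 1]: the plain case embeds unchanged
print(signed_reduced_word([-2, 3, 1]))    # [0, 2, 1]: three letters, matching length 3
```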

This is discussed (probably with the inputs on the right) in Björner–Brenti (2005), page 248 and the surrounding pages.

  • Björner, Anders; Brenti, Francesco. Combinatorics of Coxeter groups. Graduate Texts in Mathematics, 231. Springer, New York, 2005. xiv+363 pp. ISBN 978-3-540-44238-7; 3-540-44238-3. MR2133266. DOI: 10.1007/3-540-27596-7
  • From (9.22) in the book resp. Claims (a) and (b) in my answer you can easily deduce that $\operatorname{des}(\sigma)$ is essentially the same as $D(\sigma)\cap R$ (with $R$ the set of Coxeter generators): just identify $0$ with $(1\;{-1})$ and $i>0$ with $(i\;\,i+1)$. (2012-07-30)