
Given the elementary symmetric polynomials $e_k(X_1,X_2,\ldots,X_N)$ generated via
$$\prod_{k=1}^{N} (t+X_k) = e_0t^N + e_1t^{N-1} + \cdots + e_N,$$
how can one obtain the monomial symmetric functions $m_\lambda(X_1,X_2,\ldots,X_N)$ as sums of products of the $e_k$? For example, for $N=4$:
$$m_{(2,1,1,0)}=X_1^2X_2X_3 + \text{all permutations}= e_3 e_1 - 4 e_4,$$
$$m_{(2,2,0,0)}=X_1^2X_2^2 + \cdots = e_2^2-6e_4 - 2m_{(2,1,1,0)}=e_2^2 - 2e_3 e_1 +2e_4.$$

It seems clear that the products on the RHS run over all partitions $\mu$ of $N$. For each $\lambda$ there should be a set of integers $C_{\lambda\mu}$ such that
$$m_\lambda = \sum_\mu C_{\lambda \mu} \prod_j e_{\mu_j}$$
holds. Putting all values of $C_{\lambda \mu}$ together, you get the following equation:
$$\left( \begin{matrix} m_{4,0,0,0}\\ m_{3,1,0,0}\\ m_{2,2,0,0}\\ m_{2,1,1,0}\\ m_{1,1,1,1}\\ \end{matrix} \right)= \left( \begin{matrix} -4&+4&+2&-4&+1\\ +4&-1&-2&+1&0\\ +2&-2&+1&0&0\\ -4&+1&0&0&0\\ +1&0&0&0&0\\ \end{matrix} \right)\left( \begin{matrix} e_{4}\\ e_3e_1\\ e_2^2\\ e_2e_1^2\\ e_1^4\\ \end{matrix} \right).$$

The transition matrix $C_4$ is symmetric. This also holds for $N=3$, where $C_3$ is
$$\left( \begin{matrix} +3&-3&+1\\ -3&+1&0\\ +1&0&0\\ \end{matrix} \right),$$
and for $N=2$ we get
$$\left( \begin{matrix} -2&+1\\ +1&0\\ \end{matrix} \right).$$
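For readers who want to experiment, here is a quick sanity check of the two $N=4$ examples in sympy (a sketch; the helpers `e` and `m` are ad hoc names of mine, not from a library):

```python
from itertools import combinations, permutations
import sympy as sp

X = sp.symbols('x1:5')  # x1, x2, x3, x4

def e(k):
    """Elementary symmetric polynomial e_k(x1,...,x4)."""
    return sum(sp.Mul(*c) for c in combinations(X, k))

def m(lam):
    """Monomial symmetric polynomial m_lambda(x1,...,x4): sum over all
    distinct permutations of the exponent vector lam, padded with zeros."""
    lam = tuple(lam) + (0,) * (len(X) - len(lam))
    return sum(sp.Mul(*(x**a for x, a in zip(X, ex)))
               for ex in set(permutations(lam)))

assert sp.expand(m((2, 1, 1, 0)) - (e(3)*e(1) - 4*e(4))) == 0
assert sp.expand(m((2, 2, 0, 0)) - (e(2)**2 - 2*e(3)*e(1) + 2*e(4))) == 0
```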

The main question is: does a general formula for the entries of these matrices exist? For a given $N$, is $C_N$ some kind of composition of the matrices $C_k$ for $k<N$ and a few other matrices?

Besides that, the following is also of interest: the matrices so far are symmetric and their entries sum up to $0$. Is this true in general? Is this related to (conjugate) Young tableaux?

**Balls & Boxes.** If we bring the matrix to the other side (i.e. use $C_4^{-1}$), we get:
$$\left( \begin{matrix} 0 &0&0&0&1\\ 0&0&0&1&4\\ 0&0&1&2&6\\ 0&1&2&5&12\\ 1&4&6&12&24\\ \end{matrix} \right) \left( \begin{matrix} m_{4,0,0,0}\\ m_{3,1,0,0}\\ m_{2,2,0,0}\\ m_{2,1,1,0}\\ m_{1,1,1,1}\\ \end{matrix} \right)= \left( \begin{matrix} e_{4}\\ e_3e_1\\ e_2^2\\ e_2e_1^2\\ e_1^4\\ \end{matrix} \right)\tag{*}$$
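The matrix in $(*)$ should then just be the inverse of $C_4$; a quick check (a sketch, with both matrices copied from above):

```python
import sympy as sp

C4 = sp.Matrix([
    [-4,  4,  2, -4, 1],
    [ 4, -1, -2,  1, 0],
    [ 2, -2,  1,  0, 0],
    [-4,  1,  0,  0, 0],
    [ 1,  0,  0,  0, 0],
])
M4 = sp.Matrix([
    [0, 0, 0,  0,  1],
    [0, 0, 0,  1,  4],
    [0, 0, 1,  2,  6],
    [0, 1, 2,  5, 12],
    [1, 4, 6, 12, 24],
])
assert C4 * M4 == sp.eye(5)  # M4 is indeed C4^{-1}
```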

Maybe it is easier to interpret these values, since they are all positive, in terms of balls that have to be put into boxes according to the following rules:

You are given $N$ balls. Your balls are divided into parts according to a partition $\mu$ (and thank god this is a math forum :-)); these correspond to the products of $e_k$'s. Now you are asked to put the balls, part by part, into $N$ boxes. For the current part it is not allowed to put more than one ball into a box.

The goal is to achieve a certain distribution of balls among the boxes, given by $\lambda$. These are the $m_\lambda$.
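A brute-force sketch of this count (the function name is mine): part $\mu_j$ chooses $\mu_j$ distinct boxes, and we count the placements in which box $i$ ends up with exactly $\lambda_i$ balls, i.e. the coefficient of $X_1^{\lambda_1}X_2^{\lambda_2}\cdots$ in $\prod_j e_{\mu_j}$:

```python
from itertools import combinations

def count_placements(mu, lam, N=4):
    """Place part mu[j] as mu[j] balls into distinct boxes; count the
    placements where box i ends up with exactly lam[i] balls."""
    target = list(lam) + [0] * (N - len(lam))
    counts = [0] * N

    def place(j):
        if j == len(mu):
            return 1 if counts == target else 0
        total = 0
        for boxes in combinations(range(N), mu[j]):
            for b in boxes:
                counts[b] += 1
            total += place(j + 1)
            for b in boxes:
                counts[b] -= 1
        return total

    return place(0)

# Reproduces entries of the matrix in (*):
print(count_placements((1, 1, 1, 1), (1, 1, 1, 1)))  # 24
print(count_placements((1, 1, 1, 1), (2, 1, 1)))     # 12
print(count_placements((2, 1, 1), (2, 1, 1)))        # 5
```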

**Power Sums.** Or would it help to express the $e_k$ in power sums via the Newton–Girard formulas? Here is the worked-out example, with the $m_\lambda$ written in terms of the $p_\mu$:

$$\left( \begin{matrix} m_{4,0,0,0}\\ m_{3,1,0,0}\\ m_{2,2,0,0}\\ m_{2,1,1,0}\\ m_{1,1,1,1}\\ \end{matrix} \right)= \left( \begin{matrix} 0& 0& 0& 0& 1\\ 0& 0& 0& 1 &-1\\ 0& 0& 1/2& 0& -1/2\\ 0& 1/2& -1/2& -1& 1\\ 1/24& -1/4 & 1/8 & 1/3 & -1/4\\ \end{matrix} \right) \left( \begin{matrix} p_1^4\\ p_2p_1^2\\ p_2^2\\ p_3p_1\\ p_4\\ \end{matrix} \right)$$
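As a check of the last row (a sketch in sympy; this is just the classical Newton identity for $e_4=m_{(1,1,1,1)}$):

```python
import sympy as sp

X = sp.symbols('x1:5')
p = lambda k: sum(x**k for x in X)  # power sums p_k

e4 = sp.Mul(*X)  # e_4 = x1*x2*x3*x4 = m_{(1,1,1,1)}
rhs = (p(1)**4 - 6*p(2)*p(1)**2 + 3*p(2)**2 + 8*p(3)*p(1) - 6*p(4)) / 24
assert sp.expand(e4 - rhs) == 0
```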

  • See Theorem 1 (ii) in: Andrius Kulikauskas, Jeffrey Remmel, *Lyndon Words and Transition Matrices between Elementary, Homogeneous and Monomial Symmetric Functions*, The Electronic Journal of Combinatorics, Volume 13 (2006), Research Paper #R18. This gives an alternating-sum expression in terms of Lyndon words and "bi-brick permutations". (Warning: unless I am misreading the paper, the definition of Lyndon words given there is wrong, and you should consult any other source on Lyndon words for this definition.)

2 Answers


Probably a simple general formula for your matrix $C_n=(C_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$, where $\mathcal P_n$ denotes the set of partitions of $n$ (ordered in decreasing lexicographic order), does not exist. However, a number of things can be said; notably, your guesses above can be confirmed. One thing that your examples suggest, but which is false, is that the matrix is "upper-left triangular"; this fails from $n=6$ on. The reason is that the true triangularity is given by $$C_{\lambda,\mu}\neq0\implies \lambda^{tr}\leq\mu$$ in the dominance order, where $\lambda^{tr}$ is the transpose (conjugate) partition of $\lambda$, and lexicographic order for $n\geq6$ does not assign complementary positions to $\lambda$ and $\lambda^{tr}$ (nor does any ordering, for $n$ sufficiently large).
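(For a concrete check of the last claim, one can list the partitions of $6$ in decreasing lexicographic order and compare positions; a small sketch, with a home-made partition generator:)

```python
def partitions(n, maxpart=None):
    """Partitions of n in decreasing lexicographic order."""
    maxpart = maxpart or n
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def transpose(lam):
    return tuple(sum(1 for part in lam if part > c) for c in range(lam[0]))

P6 = list(partitions(6))
bad = [lam for i, lam in enumerate(P6)
       if P6.index(transpose(lam)) != len(P6) - 1 - i]
print(bad)  # includes (4,1,1): its transpose (3,1,1,1) sits at position 6, not 7
```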

As you guessed, it is easier to study the inverse matrix $C_n^{-1}=(M_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$, whose entry $M_{\lambda,\mu}$ gives the coefficient of $m_\mu$ in $e_\lambda$. This nonnegative number equals the number of $0$-$1$ matrices with row sums $\lambda_1,\lambda_2,\ldots$ and column sums $\mu_1,\mu_2,\ldots$; in particular $M$ is a symmetric matrix (Stanley, EC2, Proposition 7.4.1 and Corollary 7.4.2). The argument for this combinatorial interpretation is basically the balls-into-boxes one you suggested: each term in $e_\lambda$ comes from choosing a monomial in each factor $e_{\lambda_i}$, and this choice can be recorded by putting in row $i$ a $1$ (ball) in the selected columns and zeros elsewhere; the product monomial is found by summing up the columns (and by symmetry only those monomials with weakly decreasing exponents, i.e. column sums, need to be considered). Since $M_n$ is symmetric and $C_n$ is its inverse, an easy argument shows that $C_n$ is also symmetric.

You suggested that the sum of all entries of $C_n$ is $0$. This fails for $n=0,1$, but it is true for all larger values, which can be seen as follows. By the definition of $C_n$, taking its column sums (equivalently, left-multiplying by $(1~1\cdots1)$) gives the coefficients of the $e_\mu$ in $h_n=\sum_{\lambda\in\mathcal P_n}m_\lambda$, the $n$-th complete homogeneous symmetric function (the sum of all distinct monomials of degree $n$). One would like to know the sum of those coefficients. Now the relation between elementary and complete homogeneous symmetric functions is given by the generating series identity $$(1-e_1X+e_2X^2-e_3X^3+\cdots)(1+h_1X+h_2X^2+\cdots)=1,$$ or equivalently $\sum_{i=0}^n(-1)^ie_ih_{n-i}=0^n$ (where $0^0=1$). You can prove that the mentioned sum is $0$ for $n\geq2$ by induction from the latter equation. A more conceptual way is to use the generating series identity together with the algebraic independence of the $e_i$ for $i\geq1$, which means there is a ring morphism sending all of them to $1$, and the obvious fact that $$(1-X+X^2-X^3+\cdots)^{-1}=1+X.$$ This shows that the mentioned morphism sends $h_0$ and $h_1$ to $1$ and all other $h_i$ to $0$; this value is precisely the sum of coefficients of all $e_\lambda$ we were after.
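(To make the morphism argument concrete: setting every $e_i$ to $1$ in the recurrence $h_n=\sum_{i=1}^n(-1)^{i-1}e_ih_{n-i}$, which is the coefficientwise form of the identity above, indeed gives $h_n=0$ for $n\geq2$; a small sketch:)

```python
def h_at_e_equal_1(n):
    """h_n under the morphism sending every e_i (i >= 1) to 1."""
    h = [1]  # h_0 = 1
    for m in range(1, n + 1):
        h.append(sum((-1) ** (i - 1) * h[m - i] for i in range(1, m + 1)))
    return h[n]

print([h_at_e_equal_1(n) for n in range(6)])  # [1, 1, 0, 0, 0, 0]
```

The sum of all entries of $C_n$ is exactly this specialised $h_n$.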

Finally, for the computation of the matrix $C_n$, it also seems best to view it as the inverse of the matrix $(M_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$, which becomes unitriangular after permuting its columns according to transposition of partitions. However, $M_{\lambda,\mu}$ is best computed not by counting $0$-$1$ matrices (their numbers grow up to $n!$ in the bottom-right corner), but rather via $$M_{\lambda,\mu}= \sum_{\lambda\leq\nu\leq\mu^{tr}}K_{\nu,\lambda}K_{\nu^{tr},\mu},$$ where $K_{\nu,\lambda}$ designates a Kostka number, the number of semistandard Young tableaux of shape $\nu$ and weight (content) $\lambda$. This identity is proved bijectively by the asymmetric RSK correspondence, a bijection between $0$-$1$ matrices and pairs of semistandard tableaux of transpose shapes, their weights being given respectively by the column sums and the row sums of the matrix. The Kostka numbers involved are generally quite a bit smaller than $M_{\lambda,\mu}$, and moreover there are ways to compute them without counting; one method is to interpret them as weight multiplicities for $GL_n$ and use a formula like Freudenthal's recurrence for them. (The LiE program which I maintain does so, and will do this computation easily; one can get results online up to $n=9$ from this site, if one takes into account the fact that a partition is represented by the sequence of differences of successive parts: $[4,2,2,1,0,0,0,0,0]$ is represented as $[2,0,1,1,0,0,0,0]$.)
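(Here is a brute-force illustration of this formula for $n=4$; `kostka` counts semistandard tableaux directly, so this is only a sketch for small $n$, and the sum may run over all $\nu\in\mathcal P_4$ since $K_{\nu,\lambda}=0$ unless $\nu\geq\lambda$:)

```python
def kostka(shape, weight):
    """Number of SSYT of the given shape with weight[i-1] entries equal to i
    (rows weakly increasing, columns strictly increasing)."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]

    def fill(i, tab, remaining):
        if i == len(cells):
            return 1
        r, c = cells[i]
        lo = 1
        if c > 0:
            lo = max(lo, tab[(r, c - 1)])      # weak increase along the row
        if r > 0:
            lo = max(lo, tab[(r - 1, c)] + 1)  # strict increase down the column
        total = 0
        for v in range(lo, len(remaining) + 1):
            if remaining[v - 1]:
                tab[(r, c)] = v
                remaining[v - 1] -= 1
                total += fill(i + 1, tab, remaining)
                remaining[v - 1] += 1
        return total

    return fill(0, {}, list(weight))

def transpose(lam):
    return tuple(sum(1 for part in lam if part > c) for c in range(lam[0]))

P4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]

def M(lam, mu):
    return sum(kostka(nu, lam) * kostka(transpose(nu), mu) for nu in P4)

print(M((1, 1, 1, 1), (1, 1, 1, 1)))  # 24, as in the matrix (*)
print(M((2, 1, 1), (2, 1, 1)))        # 5
```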

One could either compute $M_{\lambda,\mu}$ this way and invert the result, or invert the matrix of Kostka numbers and deduce from the above formula, which can be interpreted as a matrix product, the formula $$C_{\lambda,\mu}= \sum_{\lambda^{tr}\leq\nu\leq\mu}K'_{\lambda,\nu^{tr}}K'_{\mu,\nu},$$ where $(K'_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$ is the inverse of the Kostka matrix $(K_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$. I don't know very much about these "inverse Kostka numbers", but you will find some information about them in the answers to this MO question; I'm not sure this allows them to be computed more efficiently than by inverting the Kostka matrix.
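(Continuing the sketch: one can invert the Kostka matrix and recover the entries of $C_4$ from this formula, reusing `kostka`, `transpose` and `P4` from the previous snippet:)

```python
import sympy as sp

K = sp.Matrix([[kostka(nu, lam) for lam in P4] for nu in P4])
Kp = K.inv()  # the "inverse Kostka numbers" K'
idx = {nu: i for i, nu in enumerate(P4)}

def C(lam, mu):
    return sum(Kp[idx[lam], idx[transpose(nu)]] * Kp[idx[mu], idx[nu]]
               for nu in P4)

print(C((4,), (4,)))      # -4, the top-left entry of C_4
print(C((2, 2), (3, 1)))  # -2, as in the question's matrix
```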

  • Maybe you could also answer this question: http://math.stackexchange.com/q/17891/19341

Besides the very nice answer by Marc, I found this *Introduction to Symmetric Functions* by Mike Zabrocki:

$$e_\mu = \sum_\lambda B_{\lambda \mu} m_\lambda\tag{2.28}$$ where $B_{\lambda \mu}$ is the number of matrices with entries in $\{0, 1\}$ whose column sums are $\mu$ and row sums are $\lambda$. $\dots$ $B_{\lambda \mu}$ is the number of ways of filling the Young diagram of the partition $\lambda$ with $\mu_1$ 1s, $\mu_2$ 2s, etc., that are strictly increasing in the rows, with no restriction on the relationship between the values in the columns.
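A brute-force count of these row-strict fillings (a sketch, with names of my own) reproduces the numbers from the other answer, e.g. $B_{(2,1,1),(2,1,1)}=5$:

```python
from itertools import combinations, product

def B(lam, mu):
    """Fillings of the diagram of lam (row r has lam[r] cells) with
    mu[i-1] copies of i, each row strictly increasing, no column condition."""
    n = len(mu)
    # a strictly increasing row of length r is just an r-subset of {1,...,n}
    row_choices = [list(combinations(range(1, n + 1), r)) for r in lam]
    count = 0
    for rows in product(*row_choices):
        mult = [0] * n
        for row in rows:
            for v in row:
                mult[v - 1] += 1
        count += mult == list(mu)
    return count

print(B((2, 1, 1), (2, 1, 1)))  # 5
```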

In a table summarising all possible transitions at the end of Chapter 2.1, they also mention what I was looking for, $$m_\mu = \sum_\lambda G_{\lambda \mu} e_\lambda,$$ but leave it as an exercise to determine some sort of formula for these coefficients $G_{\lambda \mu}$.