
A path diagram is a Ferrers diagram for a partition $\lambda$ in which the dots have been replaced by numbers computed as follows: the number in position $(i,j)$ is the number of paths (going either right or up at each step) from the bottom of column $j$ to the right end of row $i$ that stay inside the diagram. $M_\lambda$ is the matrix that appears in the Durfee square of the diagram.
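
(For concreteness, here is a small Python sketch of this construction; the representation of $\lambda$ as a list of weakly decreasing row lengths and the names `path_diagram` / `durfee_matrix` are just my own choices, not standard notation.)

```python
def path_diagram(parts):
    """Path diagram of a partition (a sketch; `parts` lists the row lengths
    of the Ferrers diagram in weakly decreasing order, top row first)."""
    rows = len(parts)
    in_diag = lambda r, c: 0 <= r < rows and 0 <= c < parts[r]
    col_height = lambda c: sum(1 for r in range(rows) if c < parts[r])

    def count(cell, target, memo):
        # number of paths from `cell` to `target`, moving up or right,
        # that stay inside the diagram
        if cell == target:
            return 1
        if cell not in memo:
            r, c = cell
            memo[cell] = sum(count(nxt, target, memo)
                             for nxt in ((r - 1, c), (r, c + 1)) if in_diag(*nxt))
        return memo[cell]

    return [[count((col_height(j) - 1, j),      # bottom of column j
                   (i, parts[i] - 1), {})       # right end of row i
             for j in range(parts[i])]
            for i in range(rows)]

def durfee_matrix(diagram):
    """M_lambda: the entries lying in the Durfee square."""
    k = max(i + 1 for i, row in enumerate(diagram) if len(row) >= i + 1)
    return [row[:k] for row in diagram[:k]]

d = path_diagram([5, 4, 3, 2, 1])   # the staircase partition 5+4+3+2+1
for row in d:
    print(row)
print(durfee_matrix(d))             # [[14, 5, 2], [5, 2, 1], [2, 1, 1]]
```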

Could you help me prove these two things (I think they are related; probably point 1 has to be used to prove point 2, since the Catalan numbers are connected with lattice paths):

1) for every $\lambda$, $\det(M_\lambda)=1$

2) the Catalan numbers are the unique sequence $\langle a_n\rangle$ such that for all $n\ge 0$

$\det\begin{pmatrix} a_0&a_1&\cdots&a_n\\ a_1&a_2&\cdots&a_{n+1}\\ \vdots&\vdots&&\vdots\\ a_n&a_{n+1}&\cdots&a_{2n} \end{pmatrix}= \det\begin{pmatrix} a_1&a_2&\cdots&a_{n+1}\\ a_2&a_3&\cdots&a_{n+2}\\ \vdots&\vdots&&\vdots\\ a_{n+1}&a_{n+2}&\cdots&a_{2n+1} \end{pmatrix}=1$
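
(A quick numerical check of 2) for small $n$; `catalan`, `det` and `hankel` are throwaway helpers of my own, using exact integer arithmetic.)

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def det(m):
    # exact integer determinant by Laplace expansion along the first row
    # (fine for the tiny matrices used here)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def hankel(a, start, size):
    # size x size Hankel matrix whose (i, j) entry is a[start + i + j]
    return [[a[start + i + j] for j in range(size)] for i in range(size)]

a = [catalan(k) for k in range(12)]
for n in range(5):
    print(n, det(hankel(a, 0, n + 1)), det(hankel(a, 1, n + 1)))   # both columns print 1
```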

2 Answers


As Dan Drake suggested, (1) can be proved using the Lindström-Gessel-Viennot lemma (LGV lemma). Start with a partition $\lambda$. Make its Ferrers diagram the vertices of an acyclic directed graph $G$ by adding an edge from each vertex $v$ to the vertices (if any) immediately to the right of $v$ and immediately above $v$. Suppose that the Durfee square of $\lambda$ is $k\times k$. Take as sources the vertices at the bottoms of the first $k$ columns; I’ll call these $u_1,\dots,u_k$ from left to right. Take as sinks the vertices at the right-hand ends of the first $k$ rows; I’ll call these $v_1,\dots,v_k$ from top to bottom. Assign to each edge of $G$ a weight of $1$, so that every path has weight $1$ as well. The $k\times k$ matrix $M$ formed by the Durfee square of the path diagram is then by definition the path matrix corresponding to these sources and sinks.

Let $U=\{u_1,\dots,u_k\}$ and $V=\{v_1,\dots,v_k\}$; a path system from $U$ to $V$ is a permutation $\pi$ of $[k]$ together with a set $\mathscr{P}=\{P_1,\dots,P_k\}$ of $k$ paths in $G$ such that $P_i$ is a path from $u_i$ to $v_{\pi(i)}$. The weight of $\mathscr{P}$, $w(\mathscr{P})$, is the product of the weights of the $P_i$, which in our case is $1$. The sign of $\mathscr{P}$, $\operatorname{sgn}(\mathscr{P})$, is defined to be the sign of the permutation $\pi$. $\mathscr{P}$ is vertex-disjoint if no two paths in $\mathscr{P}$ have a vertex in common; let $\mathfrak{V}$ be the set of vertex-disjoint path systems from $U$ to $V$. The LGV lemma says that $\det M=\sum_{\mathscr{P}\in\mathfrak{V}}\operatorname{sgn}(\mathscr{P})w(\mathscr{P})\;.\tag{$\ast$}$

The definition of the Durfee square ensures that there is only one vertex-disjoint path system in $G$: $\pi$ must be the identity permutation, and $P_i$ goes vertically from $u_i$ up to the vertex in row $i$ and then horizontally to $v_i$. Call this path system $\mathscr{P}_0$. Then $(\ast)$ reduces to $\det M=\operatorname{sgn}(\operatorname{id})\,w(\mathscr{P}_0)=1\,,$ the desired result.
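
For small partitions both the uniqueness claim and the identity $(\ast)$ can be checked by brute force. Here is a rough sketch (my own code, not part of the lemma): it enumerates every path system between the sources and sinks just described, keeps the vertex-disjoint ones, and accumulates their signs.

```python
from itertools import permutations, product

def lgv_check(parts):
    """Return (signed sum over vertex-disjoint path systems, their number)
    for the Durfee sources/sinks of the partition `parts` (row lengths,
    weakly decreasing, top row first).  Both values should be 1."""
    rows = len(parts)
    in_diag = lambda r, c: 0 <= r < rows and 0 <= c < parts[r]
    col_height = lambda c: sum(1 for r in range(rows) if c < parts[r])
    k = max(i + 1 for i in range(rows) if parts[i] >= i + 1)   # Durfee size

    def paths(cell, target):
        # every up/right path from cell to target, each as a frozenset of its cells
        if cell == target:
            return [frozenset([cell])]
        r, c = cell
        return [p | {cell}
                for nxt in ((r - 1, c), (r, c + 1)) if in_diag(*nxt)
                for p in paths(nxt, target)]

    sources = [(col_height(j) - 1, j) for j in range(k)]   # bottoms of columns 1..k
    sinks = [(i, parts[i] - 1) for i in range(k)]          # right ends of rows 1..k

    signed = disjoint = 0
    for perm in permutations(range(k)):
        sign = (-1) ** sum(perm[i] > perm[j] for i in range(k) for j in range(i + 1, k))
        for system in product(*(paths(sources[i], sinks[perm[i]]) for i in range(k))):
            if len(frozenset().union(*system)) == sum(map(len, system)):   # vertex-disjoint
                signed += sign
                disjoint += 1
    return signed, disjoint

print(lgv_check([5, 4, 3, 2, 1]))   # (1, 1)
print(lgv_check([4, 4, 2, 1]))      # (1, 1)
```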

I now show that (1) implies (2). From the familiar result that $C_n$ is the number of monotonic lattice paths from $(0,0)$ to $(n,n)$ that never rise above the diagonal, it’s easily seen that the path diagram $P_n$ for the partition with parts $1,2,\dots,n$ is:

$\begin{array}{ccccccc} C_{n-1}&C_{n-2}&C_{n-3}&\dots&2&1&1\\ C_{n-2}&C_{n-3}&C_{n-4}&\dots&1&1\\ C_{n-3}&C_{n-4}&C_{n-5}&\dots&1\\ \vdots&\vdots&\vdots\\ 2&1&1\\ 1&1\\ 1 \end{array}$

For example, $P_5$ and $P_6$ are

$\begin{array}{ccccc} 14&5&2&1&1\\ 5&2&1&1\\ 2&1&1\\ 1&1\\ 1 \end{array} \qquad\text{and}\qquad \begin{array}{cccccc} 42&14&5&2&1&1\\ 14&5&2&1&1\\ 5&2&1&1\\ 2&1&1\\ 1&1\\ 1 \end{array}$

and their Durfee squares are

$\begin{array}{ccc} 14&5&2\\ 5&2&1\\ 2&1&1 \end{array} \qquad\text{and}\qquad \begin{array}{ccc} 42&14&5\\ 14&5&2\\ 5&2&1 \end{array}$

By (1) these have determinant $1$. Rotating them $180°$ doesn’t change their determinants (the rotation reverses both the rows and the columns, so the determinant is multiplied by the square of the sign of the reversal permutation, i.e. is unchanged) and produces the $3\times 3$ case of (2) with $a_k=C_k$. In general the matrices in (2) are the $180°$ rotations of the Durfee squares of $P_{2n+1}$ and $P_{2n+2}$, so by (1) they do indeed have determinant $1$.
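
This correspondence is easy to check numerically for small $n$, reusing `path_diagram`/`durfee_matrix` and `catalan`/`hankel` from the sketches above (again just an informal sanity check):

```python
def rot180(m):
    # reverse the order of the rows and of the columns
    return [row[::-1] for row in m[::-1]]

for n in range(4):
    a = [catalan(k) for k in range(2 * n + 2)]
    odd = list(range(2 * n + 1, 0, -1))    # parts of the staircase giving P_{2n+1}
    even = list(range(2 * n + 2, 0, -1))   # parts of the staircase giving P_{2n+2}
    assert rot180(durfee_matrix(path_diagram(odd))) == hankel(a, 0, n + 1)
    assert rot180(durfee_matrix(path_diagram(even))) == hankel(a, 1, n + 1)
print("rotated Durfee squares match the Hankel matrices of the Catalan numbers")
```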

It remains to show that the Catalan numbers are the only solution to

$\det\begin{pmatrix} a_0&a_1&\cdots&a_n\\ a_1&a_2&\cdots&a_{n+1}\\ \vdots&\vdots&&\vdots\\ a_n&a_{n+1}&\cdots&a_{2n} \end{pmatrix}= \det\begin{pmatrix} a_1&a_2&\cdots&a_{n+1}\\ a_2&a_3&\cdots&a_{n+2}\\ \vdots&\vdots&&\vdots\\ a_{n+1}&a_{n+2}&\cdots&a_{2n+1} \end{pmatrix}=1\,.\tag{2}$

This can be done by induction on $n$. Clearly for $n=0$ we must have $a_0=1=C_0$ and $a_1=1=C_1$ in order to make both determinants $1$. Suppose that $n>0$ and that we’ve shown that $a_k=C_k$ for $0\le k<2n$. The first matrix in $(2)$ is then

$\begin{pmatrix} C_0&C_1&\cdots&C_n\\ C_1&C_2&\cdots&C_{n+1}\\ \vdots&\vdots&&\vdots\\ C_n&C_{n+1}&\cdots&a_{2n} \end{pmatrix},$

and its determinant is of the form $\alpha+a_{2n}$, where $\alpha$ depends only on the $C_k$ with $k=0,\dots,2n-1$. (Expand by cofactors along the bottom row, noting that the cofactor of $a_{2n}$ is $1$ by the induction hypothesis.) The equation $\alpha+x=1$ has a unique solution, and we already know from (1) that $x=C_{2n}$ is a solution, so we must have $a_{2n}=C_{2n}$. We can now apply the same argument to the second matrix in $(2)$ to determine $a_{2n+1}$, completing the induction step and the proof that (1) implies (2).
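
The induction can also be run numerically: treat each new $a_{2n}$ (and then $a_{2n+1}$) in turn as an unknown and solve the linear condition that the next determinant equal $1$. A short sketch reusing `det` and `hankel` from the earlier snippet:

```python
a = [1, 1]                       # a_0 and a_1 are forced by the n = 0 case
for n in range(1, 5):
    for start in (0, 1):         # first matrix of (2), then the shifted one
        a.append(0)              # placeholder for the new unknown entry
        alpha = det(hankel(a, start, n + 1))   # determinant with the unknown set to 0
        a[-1] = 1 - alpha        # its cofactor is 1 (previous determinant), so alpha + x = 1
print(a)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862] -- the Catalan numbers
```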

After (finally!) solving the problem I ran across this paper, which may be of interest to anyone wishing to play further with these ideas.


This reminds me of Gessel-Viennot stuff, where you evaluate the determinant by finding the number of nonintersecting path arrangements. It sure seems like there will be only one such arrangement for your situation.