17

Let $s(m,n)$ be the Stirling Numbers of the first kind, $S(m,n)$ be the Stirling Numbers of the second kind.

The matrices $$\mathcal{S}_N := (S(m,n))_{0 \leq m,n \leq N} \text{ and } s_N := (s(m,n))_{0 \leq m,n \leq N}$$ are inverse to each other.

This is a statement from our lecture notes. But why does it hold?

Wikipedia says:

$$\sum_{n=0}^{\max\{j,k\}} (-1)^{n-k} \left[{n\atop j}\right] \left\{{k\atop n}\right\} = \delta_{jk} = \sum_{n=0}^{\max\{j,k\}} (-1)^{n-k} \left\{{n\atop j}\right\} \left[{k\atop n}\right]$$

where $\delta_{jk}$ is the Kronecker delta. But this is not an explanation I can follow, and our lecture did not cover the Kronecker delta.

Could anyone please explain this to me?

Thank you in advance!

  • 7
    The two matrices are the change-of-basis matrices between the basis $\{1, x, x^2, \ldots\}$ of the space of polynomials and the basis $\{1, x, x(x-1), x(x-1)(x-2), \ldots\}$ of the space of polynomials. If you know the definitions, it shouldn't be hard to work out the details from here. (2011-05-29)
  • 0
    @qiaochu-yuan This is another definition I found. But I still don't see how this leads to the fact that the matrices are inverses of each other. Could you please give me another hint? (2011-05-29)
  • 0
    Kronecker delta: $\delta_{jk}=1$ if $j=k$ and $\delta_{jk} = 0$ otherwise. So this is the entry in row $j$, column $k$ of the identity matrix. (A numerical sanity check of the quoted identity is sketched right after these comments.) (2011-05-29)
  • 1
    If one matrix lets you change from basis $B_1$ to $B_2$, and another lets you change from $B_2$ to $B_1$, then their composition in either order is the identity. (2011-05-29)
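For a quick sanity check of the quoted identity before reading the proofs below, here is a minimal Python sketch. It assumes SymPy's `stirling` helper, whose `kind` and `signed` keyword arguments select which kind of Stirling number is returned; the wrapper names `c` and `S` are mine:

```python
from sympy.functions.combinatorial.numbers import stirling

def c(n, j):
    """Unsigned Stirling numbers of the first kind, [n over j]."""
    return stirling(n, j, kind=1, signed=False)

def S(k, n):
    """Stirling numbers of the second kind, {k over n}."""
    return stirling(k, n, kind=2)

# Check sum_{n=0}^{max(j,k)} (-1)^{n-k} [n over j] {k over n} = delta_{jk}
for j in range(7):
    for k in range(7):
        total = sum((-1) ** ((n - k) % 2) * c(n, j) * S(k, n)
                    for n in range(max(j, k) + 1))
        assert total == (1 if j == k else 0)
print("orthogonality holds for all 0 <= j, k < 7")
```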

2 Answers

12

The fact that the two Stirling matrices are inverses is a special case of the fact that certain matrices consisting of elementary and complete symmetric polynomials are inverses.

Define infinite lower triangular matrices $F$ and $G$ consisting of elementary and complete symmetric polynomials, respectively, via
$$(F(z_1, z_2, \ldots ))_{ij} = e_{i-j}(z_1, z_2, \ldots, z_{i-1}),$$ $$(G(z_1, z_2, \ldots ))_{ij} = h_{i-j}(z_1, z_2, \ldots, z_j).$$

For example, if $[F]_n$ and $[G]_n$ denote the upper-left $n \times n$ submatrices of $F$ and $G$, respectively, then $$[F(x,y,z)]_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ x & 1 & 0 & 0 \\ xy & x+y & 1 & 0 \\ xyz & xy+xz+yz & x+y+z & 1 \end{bmatrix}$$ and $$ [G(x,y,z)]_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ x & 1 & 0 & 0 \\ x^2 & x+y & 1 & 0 \\ x^3 & x^2 + xy + y^2 & x+y+z & 1 \end{bmatrix}.$$
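As a concrete check, here is a small SymPy sketch that builds these truncations directly from the definitions and reproduces the two matrices above (the helper names `e`, `h`, `F_mat`, `G_mat` are mine, not notation from the paper):

```python
import sympy as sp
from itertools import combinations, combinations_with_replacement
from functools import reduce
from operator import mul

def _prod(factors):
    return reduce(mul, factors, sp.Integer(1))

def e(k, zs):
    """Elementary symmetric polynomial e_k(zs); e_0 = 1, e_k = 0 for k < 0."""
    if k < 0:
        return sp.Integer(0)
    return sum(_prod(c) for c in combinations(zs, k)) if k else sp.Integer(1)

def h(k, zs):
    """Complete homogeneous symmetric polynomial h_k(zs); h_0 = 1, h_k = 0 for k < 0."""
    if k < 0:
        return sp.Integer(0)
    return sum(_prod(c) for c in combinations_with_replacement(zs, k)) if k else sp.Integer(1)

def F_mat(zs, n):
    """[F]_n: entry (i, j) is e_{i-j}(z_1, ..., z_{i-1}) (1-indexed)."""
    return sp.Matrix(n, n, lambda i, j: e(i - j, zs[:i]))

def G_mat(zs, n):
    """[G]_n: entry (i, j) is h_{i-j}(z_1, ..., z_j) (1-indexed)."""
    return sp.Matrix(n, n, lambda i, j: h(i - j, zs[:j + 1]))

x, y, z = sp.symbols('x y z')
sp.pprint(F_mat([x, y, z], 4))  # matches [F(x,y,z)]_4 above
sp.pprint(G_mat([x, y, z], 4))  # matches [G(x,y,z)]_4 above
```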

Then $F(1,1,\ldots)$ is the infinite Pascal matrix (consisting of the binomial coefficients), and $F(-1,-2,-3,\ldots)$ is the infinite Stirling matrix of the first kind, so that $$[F(1,1,\ldots)]_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 3 & 3 & 1 \end{bmatrix}, \text{ and } [F(-1,-2,-3,\ldots)]_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 2 & -3 & 1 & 0 \\ -6 & 11 & -6 & 1 \end{bmatrix}.$$

Moreover, $G(1,1,\ldots)$ is also the infinite Pascal matrix, and $G(1,2,3,\ldots)$ is the infinite Stirling matrix of the second kind, so that, for example, $$[G(1,1,\ldots)]_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 3 & 3 & 1 \end{bmatrix}, \text{ and } [G(1,2,3,\ldots)]_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 3 & 1 & 0 \\ 1 & 7 & 6 & 1 \end{bmatrix}.$$
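Continuing with the helpers from the snippet above, the specializations can be checked directly:

```python
# Reuses F_mat and G_mat from the previous snippet.
sp.pprint(F_mat([1, 1, 1], 4))     # Pascal matrix (binomial coefficients)
sp.pprint(F_mat([-1, -2, -3], 4))  # signed Stirling numbers of the first kind
sp.pprint(G_mat([1, 1, 1], 4))     # Pascal matrix again
sp.pprint(G_mat([1, 2, 3], 4))     # Stirling numbers of the second kind
```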

In "Symmetric Polynomials, Pascal Matrices, and Stirling Matrices" (Linear Algebra and Its Applications, 428 (4): 1127-1134, 2008) Andy Zimmer and I prove that $$F(z_1,z_2,\ldots) G(-z_1, -z_2, \ldots) = I.$$

Thus the two Stirling matrices are inverses of each other, and (up to signs) the Pascal matrix is its own inverse.

(The fact that the binomial coefficients and the Stirling numbers can be expressed in terms of symmetric polynomials is known; for references, see the paper.)

16

By definition (at least the one I'm used to), the Stirling numbers of the first kind $S_1(n,k)$ satisfy $$ (x)_n = \sum_{k=0}^n S_1(n,k) x^k, $$ and the Stirling numbers of the second kind $S_2(n,k)$ satisfy $$ x^n = \sum_{k=0}^n S_2(n,k) (x)_k $$ where $(x)_n=x(x-1)\cdots(x-n+1)$ is the falling factorial (with $(x)_0=1$). Combining these yields $$ x^n = \sum_{k=0}^n \sum_{l=0}^k S_2(n,k) S_1(k,l) x^l = \sum_{l=0}^n x^l \left( \sum_{k=l}^n S_2(n,k) S_1(k,l) \right). $$ Comparing powers of $x$, we see that $$ \sum_{k=l}^n S_2(n,k) S_1(k,l) = \begin{cases} 1 & \text{if } l=n \\ 0 & \text{if } l\neq n \end{cases} = \delta_{ln}. $$
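A quick computational check of this identity, assuming SymPy's `stirling` helper (where `signed=True` gives the signed first kind, matching $S_1$ here):

```python
from sympy.functions.combinatorial.numbers import stirling

def S1(n, k):
    """Signed Stirling numbers of the first kind."""
    return stirling(n, k, kind=1, signed=True)

def S2(n, k):
    """Stirling numbers of the second kind."""
    return stirling(n, k, kind=2)

N = 6
for n in range(N + 1):
    for l in range(N + 1):
        total = sum(S2(n, k) * S1(k, l) for k in range(l, n + 1))
        assert total == (1 if n == l else 0)
print("sum_k S2(n,k) S1(k,l) = delta_{ln} for all 0 <= n, l <=", N)
```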

If we define $S_{\nu}(a,b)=0$ for $a<b$, the left-hand side is exactly the $(n,l)$ entry of the matrix product $S_N s_N$ (second kind times first kind), so the identity above says precisely that $S_N s_N = I$. Since the matrices are square, this also gives $s_N S_N = I$.

[Edit] Another way (less computation, slightly more hand-wavy) to see this is that $S_N$ is the change-of-basis matrix (for the space of polynomials) from $\{1,x,x^2,\dots\}$ to $\{(x)_0, (x)_1, \dots\}$, and $s_N$ is the change-of-basis matrix going the other way. Hence the linear transformation $S_N s_N$ takes the coefficients in terms of $\{x^i\}$ to coefficients in terms of $\{(x)_i\}$ and then back to $\{x^i\}$; i.e., it is the identity.
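To make the hand-wavy version concrete, here is a short sketch (assuming SymPy's `ff` falling factorial and `stirling`) that converts $x^n$ into the falling-factorial basis and back:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
N = 4

# x^n = sum_k S2(n,k) (x)_k  (second kind: powers -> falling factorials)
for n in range(N + 1):
    rebuilt = sum(stirling(n, k, kind=2) * sp.ff(x, k) for k in range(n + 1))
    assert sp.expand(rebuilt - x**n) == 0

# (x)_n = sum_k S1(n,k) x^k  (signed first kind: falling factorials -> powers)
for n in range(N + 1):
    rebuilt = sum(stirling(n, k, kind=1, signed=True) * x**k for k in range(n + 1))
    assert sp.expand(rebuilt - sp.ff(x, n)) == 0

print("change of basis round-trips for degrees 0 through", N)
```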