
$M_{[i],[j]}=(-1)^{i_1+i_2+\cdots+i_k+j_1+j_2+\cdots+j_k}$, where $1\le i_1<i_2<\cdots<i_k\le n$ and $1\le j_1<j_2<\cdots<j_k\le n$, can be taken to be a $\binom{n}{k}\times\binom{n}{k}$ square matrix, with indices $[i],[j]=[i_1,i_2,\ldots,i_k],[j_1,j_2,\ldots,j_k]$.

By experiment in Maple for specific low-ish dimensional cases, all its eigenvalues are zero except for one, which takes the value $\left(n\atop k\right)$. What theorem or lemma can I cite to this effect for the general case, and in what book or paper?

This emerged in some Fermionic quantum field computations. All I really care about is whether this matrix is positive semi-definite, which it pretty clearly is from an experimental mathematics point of view and because of the antisymmetry, so if citing for positive semi-definiteness is easier, please feel free to do that instead.
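The Maple experiment is easy to reproduce in a few lines of NumPy, building $M$ entrywise straight from the definition (a sketch; `build_M` and the test sizes $n=5$, $k=2$ are my own illustrative choices):

```python
from itertools import combinations
import numpy as np

def build_M(n, k):
    """Entrywise construction from the definition:
    M[[i],[j]] = (-1)^(i_1+...+i_k + j_1+...+j_k),
    indexed by increasing k-tuples from {1,...,n}."""
    tuples = list(combinations(range(1, n + 1), k))
    N = len(tuples)                       # N = binom(n, k)
    M = np.empty((N, N))
    for a, ti in enumerate(tuples):
        for b, tj in enumerate(tuples):
            M[a, b] = (-1) ** (sum(ti) + sum(tj))
    return M

M = build_M(5, 2)                         # binom(5, 2) = 10
vals = np.linalg.eigvalsh(M)              # M is symmetric
print(vals)                               # nine eigenvalues ~ 0, one equal to 10
```

For every small $(n,k)$ I tried, the spectrum is $\binom{n}{k}$ with multiplicity one and $0$ otherwise, matching the Maple observation.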

1 Answer


If we number the $[i_1,i_2,\ldots,i_k]$ index tuples from $1$ to $\binom{n}{k}$, then the important property is that there is a function $\sigma:\{1,2,\ldots,\binom{n}{k}\}\to\{-1,1\}$, namely $\sigma(i)=(-1)^{i_1+i_2+\cdots+i_k}$, such that $M_{ij}=\sigma(i)\sigma(j)$.

Every column of the matrix is then either equal to the first column or its negative, so the range of the associated linear transformation is one-dimensional, generated by the vector $(\sigma(1),\sigma(2),\ldots,\sigma(\binom{n}{k}))$. That vector must therefore be an eigenvector; explicit computation shows its eigenvalue equals the number of columns, $\binom{n}{k}$.

Any eigenvector $v$ not proportional to this one must have eigenvalue $0$, because $Mv$ then lies in the intersection of the one-dimensional range and the subspace generated by $v$, which is $\{0\}$.
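The rank-one structure can be verified directly: with $\sigma(i)=(-1)^{i_1+\cdots+i_k}$, the matrix is the outer product $\sigma\sigma^{\mathsf T}$ (a NumPy sketch; the sizes $n=6$, $k=3$ and variable names are my own):

```python
from itertools import combinations
import numpy as np

n, k = 6, 3
tuples = list(combinations(range(1, n + 1), k))   # binom(6, 3) = 20 tuples
sigma = np.array([(-1.0) ** sum(t) for t in tuples])

# M_{ij} = sigma(i) * sigma(j): a rank-one outer product
M = np.outer(sigma, sigma)

# sigma is an eigenvector with eigenvalue binom(n, k) = len(tuples)
assert np.allclose(M @ sigma, len(tuples) * sigma)

# any vector orthogonal to sigma is annihilated, so rank(M) = 1
v = np.random.default_rng(0).standard_normal(len(tuples))
v -= (v @ sigma) / (sigma @ sigma) * sigma        # project out sigma
assert np.allclose(M @ v, 0)
print(np.linalg.matrix_rank(M))   # 1
```

Positive semi-definiteness is immediate from this factorization: $v^{\mathsf T}Mv = (\sigma^{\mathsf T}v)^2 \ge 0$ for all $v$.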

  • Excellent, thanks. (2011-11-05)