14

I am trying to find an elementary proof that if $\phi$ is a linear map from $\mathbb{Z}^n$ to $\mathbb{Z}^m$ represented by an $m \times n$ matrix $A$, then the map is surjective iff the gcd of the determinants of all the $m\times m$ minors of $A$ is $1$.

I know that for a surjection from $\mathbb{Z}^n$ onto $\mathbb{Z}^m$ to exist, $n$ must be greater than or equal to $m$, and for there to be any $m \times m$ minors at all, $n$ must again be greater than or equal to $m$, so I assume $n \geq m$ throughout.

I sort of have one direction, $\Leftarrow$:

i) Greatest common divisor $=1$ implies surjectivity: First observe that if $M$ is an $m \times m$ integer matrix, then $|\det M| = |\mathbb{Z}^m/\operatorname{Im} M|$ when this quotient is finite, and $\det M = 0$ otherwise.

Consider the $n$ columns of $A$ as column vectors $v_1, v_2, \ldots, v_n$, which live in $\mathbb{Z}^m$. Let $S'' = \{ v_i\}$, let $S'$ be the set of subsets of $S''$ of cardinality $m$, and lastly let $S$ consist of those elements of $S'$ whose $m$ vectors, assembled into an $m\times m$ matrix, have nonzero determinant. Thus $S$ corresponds to the $m\times m$ minors of $A$ with nonzero determinant (we may ignore the zero determinants, since they do not affect the gcd).

For each $s\in S$ define a map $i_s: \mathbb{Z}^m \rightarrow \mathbb{Z}^n$ that sends the standard basis of $\mathbb{Z}^m$ to the basis elements $e_k \in \mathbb{Z}^n$ with $v_k \in s$. That is, $\phi \circ i_s$ is given by the matrix whose columns are the vectors of $s$. Let $\Lambda$ be the lattice $\operatorname{Im}\phi \supset \sum_{s\in S} \operatorname{Im}(\phi\circ i_s) =\sum_{s \in S} \Lambda_s$, so that $\Lambda_s \subset \Lambda \subset \mathbb{Z}^m$ for all $s \in S$.

Thinking in terms of group theory, $\Lambda$ is a subgroup of $\mathbb{Z}^m$ and each $\Lambda_s$ is a subgroup of $\Lambda$. Thus by Lagrange's theorem (index multiplicativity), $|\mathbb{Z}^m/\Lambda| \Big\vert |\mathbb{Z}^m/\Lambda_s|$. The $|\mathbb{Z}^m/\Lambda_s|$ are exactly the absolute values of the determinants of the $m\times m$ minors, so $|\mathbb{Z}^m/\Lambda|$ is a common divisor of all of them and hence divides their gcd. By hypothesis that gcd is $1$, so $|\mathbb{Z}^m/\Lambda| =1$, i.e. $\operatorname{Im} A=\Lambda = \mathbb{Z}^m$ and the map is surjective.
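As a sanity check on this argument (not a replacement for the proof), here is a minimal Python sketch assuming sympy. The example matrix is hypothetical, and the index $|\mathbb{Z}^m/\Lambda|$ is computed via the Smith normal form, which is exactly the kind of fact I want to avoid below, so it serves only as an independent check:

```python
from functools import reduce
from itertools import combinations
from math import gcd
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Hypothetical example: any m x n integer matrix of full rank with m <= n works.
A = Matrix([[2, 3, 5],
            [1, 4, 6]])
m, n = A.shape

# |det Y_s| over all m x m submatrices Y_s of A.
minor_dets = [abs(int(A[:, list(S)].det()))
              for S in combinations(range(n), m)]
g = reduce(gcd, minor_dets)  # gcd of the maximal minors (zeros don't matter)

# Index |Z^m / Im A| as the product of the invariant factors (Smith normal
# form) -- used here only as an independent check, not as part of the proof.
D = smith_normal_form(A, domain=ZZ)
index = 1
for i in range(m):
    index *= abs(int(D[i, i]))

# Lagrange: the index of Lambda divides the index of each Lambda_s ...
assert all(d % index == 0 for d in minor_dets if d != 0)
# ... so it divides their gcd; here gcd = 1 forces index = 1, i.e. surjectivity.
assert g == 1 and index == 1
```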

I was hoping to get a more elementary proof that doesn't rely on the observation that $|\det M| = |\mathbb{Z}^m/\operatorname{Im} M|$ when this quotient is finite (and $\det M = 0$ otherwise, where $M$ is an $m\times m$ matrix), or on normal forms.

Thanks!

  • 0
    I am hoping for something that just uses properties of determinants and the definition of surjectivity, namely that the columns of the matrix generate all of $\mathbb{Z}^m$.2012-04-16

2 Answers

9

Let $R$ be a commutative ring, and $f : R^n \to R^m$ be a homomorphism of $R$-modules with corresponding $m \times n$ matrix $A$ over $R$. Let $Y_1,\dotsc,Y_v$ be the $m \times m$ submatrices of $A$. Then $f$ is surjective iff the determinants $\mathrm{det}(Y_1),\dotsc,\mathrm{det}(Y_v)$ generate the unit ideal of $R$.

Proof (which I learned from Darij Grinberg): Assume that $f$ is surjective. Then there is some $n \times m$ matrix $B$ with $AB=1_m$. Let $Z_1,\dotsc,Z_v$ denote the $m \times m$ submatrices of $B$. Then the Cauchy-Binet formula (which has a nice graph theoretic proof) implies $1=\mathrm{det}(AB)=\sum_{s=1}^{v} \mathrm{det}(Z_s) \mathrm{det}(Y_s)$.
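For concreteness, here is a minimal sympy sketch verifying Cauchy-Binet on a hypothetical example (any integer matrices of matching shapes would do):

```python
from itertools import combinations
from sympy import Matrix

# Hypothetical 2x3 and 3x2 example matrices.
A = Matrix([[1, 2, 0],
            [3, 1, 4]])
B = Matrix([[2, 1],
            [0, 3],
            [1, 1]])
m, n = A.shape

# Cauchy-Binet: det(AB) = sum over m-subsets S of det(A[:,S]) * det(B[S,:]).
lhs = (A * B).det()
rhs = sum(A[:, list(S)].det() * B[list(S), :].det()
          for S in combinations(range(n), m))
assert lhs == rhs
```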

Conversely, assume $\sum_s \lambda_s \mathrm{det}(Y_s)=1$ for some $\lambda_s \in R$. Let $B_s$ denote the $n \times m$ matrix obtained by placing the rows of $\mathrm{adj}(Y_s)$ in the positions of the columns kept in $A \mapsto Y_s$, with zero rows in the positions of the deleted columns. Let $B = \sum_{s=1}^{v} \lambda_s B_s$. Then we have

$$AB = \sum_s \lambda_s A B_s = \sum_s \lambda_s Y_s \mathrm{adj}(Y_s) = \sum_s \lambda_s \mathrm{det}(Y_s) 1 = 1.$$
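Here is a hedged sketch of this construction in Python with sympy, on a hypothetical example matrix with coprime maximal minors. The Bézout coefficients $\lambda_s$ come from folding sympy's two-argument extended gcd `igcdex`, and the final assertion exhibits an explicit integer right inverse:

```python
from itertools import combinations
from sympy import Matrix, eye, zeros, igcdex

# Hypothetical example with coprime maximal minors (dets 5, 7, -2).
A = Matrix([[2, 3, 5],
            [1, 4, 6]])
m, n = A.shape

subsets = list(combinations(range(n), m))
Y = {s: A[:, list(s)] for s in subsets}      # the m x m submatrices Y_s
dets = {s: int(Y[s].det()) for s in subsets}

def bezout(values):
    """Coefficients c with sum(c_i * values_i) == gcd(values)."""
    coeffs, g = [1], values[0]
    for v in values[1:]:
        x, y, g = igcdex(g, v)
        coeffs = [c * x for c in coeffs] + [y]
    return coeffs, g

lam, g = bezout([dets[s] for s in subsets])
assert g == 1  # the hypothesis: the maximal minors generate the unit ideal

B = zeros(n, m)
for l, s in zip(lam, subsets):
    # B_s: rows of adj(Y_s) in the positions of the kept columns, zeros elsewhere.
    Bs = zeros(n, m)
    adj = Y[s].adjugate()
    for i, col in enumerate(s):
        Bs[col, :] = adj[i, :]
    assert A * Bs == dets[s] * eye(m)  # A B_s = Y_s adj(Y_s) = det(Y_s) I
    B += l * Bs

assert A * B == eye(m)  # an explicit integer right inverse of A
```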

Remark: One can show that $f$ is injective iff $(\mathrm{det}(Y_1),\dotsc,\mathrm{det}(Y_v))$ is a regular ideal.

  • 0
    So for the conclusion of the first direction you used a generalization of Bézout's identity? I am not really sure I understand your proof in the other direction.2012-04-17
  • 0
    I don't need Bezout. Please ask specific questions about the proof, then I can answer them. Also observe that the statement holds in every commutative ring - it doesn't matter which one. But then we have to replace "gcd = 1" with "generate unit ideal" (which is equivalent at least in Bezout rings, such as PIDs).2012-04-18
  • 0
    You're right, you don't need Bézout's identity, because you exhibited a linear combination of the $\det(Y_i)$ that is equal to $1$ and therefore generates the unit ideal. However, I am not sure how you got that all the $\det(Z_s)$ would be in $R$ in the first place. Wouldn't it be possible that the entries of the right inverse live in the field of fractions of $R$ rather than in $R$? Or is it that when you take the determinant they are back in $R$?2012-04-18
  • 0
    We don't leave $R$ at all and we don't need inverses.2012-04-21
  • 0
    @MartinBrandenburg Concerning the remark: what do you mean by a vector in $R^s$ to be regular? (I guess you wanted to say that the ideal is regular, that is, contains a nonzerodivisor.)2013-01-15
  • 0
    @YACP: Yes, he meant that the ideal is regular, not that the vector is regular. (The $\left(...\right)$ notation for ideals is bound to cause such misunderstandings at some point.) But I don't think that an ideal is regular if and only if it contains a nonzerodivisor. An ideal $I$ is regular if and only if every $a\in R$ satisfying $aI = 0$ must be $0$ itself. Given such an ideal, how do you construct a nonzerodivisor in it?2013-09-03
  • 0
    @darijgrinberg http://en.wikipedia.org/wiki/Regular_ideal2013-09-03
  • 0
    OK, but the correct condition is "every $a\in R$ such that ($a\det(Y_j) = 0$ for all $j$) must satisfy $a=0$".2013-09-03
  • 0
    @MartinBrandenburg This seems related; any ideas? http://math.stackexchange.com/questions/1412649 2016-08-04
7

I'm not sure what kind of machinery you're willing to use. The following proof is short but sophisticated.

First, recall the three types of integer row operations:

  1. Negating a row,

  2. Switching two rows,

  3. Adding an integer multiple of one row to another.

Each of these operations corresponds to a change of basis in the codomain of $\phi$. Similarly, integer column operations correspond to a change of basis in the domain of $\phi$.
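Concretely, each operation is left multiplication by an integer matrix of determinant $\pm 1$, hence invertible over $\mathbb{Z}$. A minimal sketch (hypothetical $3 \times 3$ examples, using sympy):

```python
from sympy import eye

# The three integer row operations as left multiplication by unimodular
# matrices, illustrated for m = 3 (hypothetical examples).
m = 3

E1 = eye(m); E1[0, 0] = -1         # 1. negate row 0
E2 = eye(m)                        # 2. swap rows 0 and 1
E2[0, 0] = E2[1, 1] = 0
E2[0, 1] = E2[1, 0] = 1
E3 = eye(m); E3[2, 0] = 5          # 3. add 5 * (row 0) to row 2

for E in (E1, E2, E3):
    assert E.det() in (1, -1)  # invertible over Z: a genuine change of basis
```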

Using integer row operations and integer column operations, any integer matrix can be reduced to Smith normal form.

Now, here is the proof:

  1. Observe that the gcd of the determinants of the $m\times m$ minors is unaffected by integer row and column operations. (In particular, a type 1 column operation will negate some of the determinants, a type 2 column operation will switch certain pairs of determinants and negate others, and a type 3 column operation will add an integer multiple of certain determinants to other determinants.)

  2. Therefore it suffices to prove the statement in the case where $A$ is in Smith normal form. Such a matrix has only one $m\times m$ minor with nonzero determinant, and this determinant is $1$ if and only if $\phi$ is onto.
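Both steps can be checked on a concrete matrix. The following is a minimal sketch, assuming sympy (whose `smith_normal_form` carries out the reduction described above) and a hypothetical example matrix:

```python
from functools import reduce
from itertools import combinations
from math import gcd
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def minor_gcd(M):
    """gcd of the determinants of all maximal (m x m) minors of an m x n matrix."""
    m, n = M.shape
    dets = [abs(int(M[:, list(S)].det())) for S in combinations(range(n), m)]
    return reduce(gcd, dets)

A = Matrix([[2, 3, 5],   # hypothetical example
            [1, 4, 6]])
D = smith_normal_form(A, domain=ZZ)

# Step 1: integer row/column operations preserve the gcd of the m x m minors,
# so A and its Smith normal form have the same minor gcd.
assert minor_gcd(A) == minor_gcd(D)

# Step 2: in Smith normal form the only (possibly) nonzero maximal minor is
# the diagonal one, so phi is onto iff that gcd is 1.
print(D)             # Matrix([[1, 0, 0], [0, 1, 0]])
print(minor_gcd(A))  # 1  -> the map is surjective
```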

  • 0
    Thank you, but alas I'm looking for something that doesn't use this diagonalizing method.2012-04-16
  • 0
    +1 for simplicity. Your post helped me prove the forward direction (I haven't done the backward one yet).2018-03-21