
I'm working through my notes and I'm stuck in the middle of the proof of Hilbert's Nullstellensatz.

(Hilbert's Nullstellensatz) Let $k$ be an algebraically closed field and let $J$ be an ideal in $k[x_1, \dots , x_n]$. Let $V(J)$ denote the set of points $x \in k^n$ at which every $f \in J$ vanishes. For $U \subset k^n$, let $I(U)$ denote the set of $f \in k[x_1, \dots , x_n]$ that vanish on $U$. Then $$ r(J) = I(V(J)),$$ where $r(J)$ denotes the radical of $J$.
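As a quick sanity check (my own one-variable example, not from the notes), the statement already shows why the radical is needed:

```latex
% In k[x], take J = (x^2). The only common zero is 0, so
% V(J) = {0} and I(V(J)) = (x), while J itself is strictly smaller:
J = (x^2), \qquad V(J) = \{0\}, \qquad I(V(J)) = (x) = r(J) \supsetneq J.
```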


Let me go through the proof as far as I understand it:

$\subset$: Easy. Let $p \in r(J)$. Then $p^k \in J$ for some $k > 0$, which means $p(x)^k = 0$ for all $x$ in $V(J)$. Hence $p(x) = 0$ for all $x$ in $V(J)$, i.e. $p \in I(V(J))$.

$\supset$: Assume $f \notin r(J)$. Then $f^k \notin J$ for all $k>0$. We claim there exists a prime ideal $p$ such that $J \subset p$ and $f^k \notin p$ for all $k>0$. To see this we use the same argument as in the proof of Proposition 1.8 on page 5 of Atiyah–Macdonald: let $\Sigma$ be the set of all ideals that contain $J$ but do not contain any power of $f$ (note $J \in \Sigma$, so $\Sigma \neq \emptyset$). We order $\Sigma$ by inclusion and use Zorn's lemma to get a maximal element $p$. We claim $p$ is prime. Assume $x \notin p$ and $y \notin p$ (we want to show $xy \notin p$). Then $p + (x)$ and $p + (y)$ are ideals properly containing $p$, so by maximality of $p$ neither of them lies in $\Sigma$; hence $f^n \in p + (x)$ and $f^m \in p + (y)$ for some $n, m > 0$. Now $f^{n+m} \in (p + (x)) (p + (y)) = p^2 + (x)\cdot p + (y)\cdot p + (xy) \subset p + (xy)$, so $p + (xy) \notin \Sigma$. Since $p \in \Sigma$, this forces $p + (xy) \neq p$, and therefore $xy \notin p$.
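To see the Zorn argument in action, here is a small instance I made up: take $J = (x^2 - x) \subset k[x]$ and $f = x$, so that $f \notin r(J)$ (indeed $x$ does not vanish at $1 \in V(J)$):

```latex
% The ideals of k[x] containing (x^2 - x) = x(x - 1) correspond to the
% divisors of x(x - 1); those avoiding every power of x are exactly:
\Sigma = \{\,(x^2 - x),\ (x - 1)\,\}
% The maximal element p = (x - 1) is prime, contains J,
% and contains no power of f = x.
```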

So we have a prime ideal $p$ containing $J$. Write $B := k[x_1, \dots, x_n]/p$ and consider the composite map $$ k[x_1, \dots, x_n] \xrightarrow{\pi_1} B \xrightarrow{i} B [\overline{f}^{-1}] \xrightarrow{\pi_2} B[\overline{f}^{-1}] /m$$

where $\overline{f}$ denotes $\pi_1 (f)$ and $m$ is some maximal ideal in $B[\overline{f}^{-1}]$. Note that $f \notin p$ (since no power of $f$ lies in $p$), so $\overline{f} \neq \overline{0}$. Since $p$ is prime, $B$ is an integral domain, so $\overline{f}^{-1}$ is an element of the field of fractions of $B$ and we may adjoin it to $B$ to get a new ring.

Since we only adjoined one element and otherwise only took quotients, the ring on the right-hand side is a finitely generated $k$-algebra (because $k[x_1, \dots, x_n]$ is).

Now $B[\overline{f}^{-1}]/m$ is a finitely generated $k$-algebra which is a field, so by 5.24 in Atiyah–Macdonald (Zariski's lemma) it is algebraic over $k$; since $k$ is algebraically closed, $k \cong B[\overline{f}^{-1}] /m$.

The proof now finishes as follows: "Let $t_1, \dots, t_n$ denote the images of $x_1 , \dots, x_n$ under this composite ring homomorphism.

(*) By construction, $g \in J \implies g \in p \implies g(t_1, \dots, t_n) = 0$. Since this holds for every $g \in J$, we get $(t_1 , \dots, t_n ) \in V(J)$.

(**) On the other hand, $f(t_1 , \dots, t_n )$ is precisely the image of $f$ in $B[\overline{f}^{-1}] /m$, which is a unit. $\implies f(t_1 , \dots, t_n ) \neq 0 \implies f \notin I(V(J))$."
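To make the composite map concrete, here is the simplest instance I could cook up (my own example, not from the notes): $n = 1$, $J = (0) \subset k[x]$, $f = x$, so $f \notin r(J) = (0)$:

```latex
% Take p = (0): it is prime, contains J, and contains no power of x.
% Then B = k[x] and B[\overline{f}^{-1}] = k[x, x^{-1}], the Laurent
% polynomials. Since k is algebraically closed, every maximal ideal of
% k[x, x^{-1}] has the form m = (x - c) with c \neq 0, giving
t_1 = c \neq 0, \qquad f(t_1) = c \neq 0, \qquad (t_1) \in V(J) = k,
% which confirms f \notin I(V(J)) = (0).
```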

Question 1: What is the line (*) showing? I think we want to show $f \notin I(V(J))$, where does this come in here?

Question 2: Why is $f(t_1 , \dots, t_n )$ a unit?

Thank you for your help.

  • 1
    $f(t_1,\dots,t_n)$ is a unit because $f$ is invertible in $B[f^{-1}]$ and remains so in the homomorphic image $B[f^{-1}]/m$, but the image of $f$ is precisely $f(t_1,\dots,t_n)$. The point of ($*$) is that there is one distinguished point $(t_1,\dots,t_n)$ in $B[f^{-1}]/m\times\cdots\times B[f^{-1}]/m\simeq k^n$ that kills all the polynomials in $J$, and thus all the polynomials in $I(V(J))$, so it should also kill $f$ (by definition). Since it doesn't by ($**$), this is the desired contradiction. (2012-07-19)
  • 0
    @OlivierBégassat Oh, of course! Thank you. I confused myself. It's $\overline{f}(t_1, \dots , t_n)$ really, and it's a unit because we *made* it a unit by adjoining $\bar{f}^{-1}$. Then $\pi_2$ is a ring homomorphism, hence maps units to units, et voilà.
  • 0
    @OlivierBégassat As for question 2: I'm not sure I understand. We don't want a contradiction. We assumed $f \notin r(J)$ and we want to show $f \notin I(V(J))$.
  • 0
    I don't think so: you assume $f\in I(V(J))$, and want to show that this implies $f\in\sqrt{J}=r(J)$. To do so, you reason by contradiction: suppose there was $f\in I(V(J))$ not in $r(J)$, then... The conclusion is that $f$ doesn't kill $(t_1,\dots,t_n)\in V(J)$ (by ($*$)) which contradicts the assumption $f\in I(V(J))$.
  • 0
    @OlivierBégassat But the lecturer started the proof by writing "Assume $f \notin r(J)$...." Then he goes on to showing that if we assume that, we get a prime ideal which we can use to construct a composite quotient map. And at the end he concludes the proof with $\implies f \notin I(V(J))$.
  • 0
    @OlivierBégassat Hm... let me re-read it.
  • 0
    Both are the same: either you reason by contradiction as I understood it, and you get a contradiction, or you reason the same way and get $f\notin I(V(J))$. It's the same thing.
  • 0
    @OlivierBégassat Then for the direct proof I don't need (*) at all? I think if I do, it all makes sense to me. But then (**) would have been written for no reason.
  • 0
    @OlivierBégassat If the answer is yes then I suggest you post this as an answer, then I can upvote and accept you.

1 Answer


This is a comment made into an answer. Suppose $f$ is any polynomial not in $r(J)$.

Question 2 $f(t_1,\dots,t_n)$ is a unit because $f$ is (by construction) invertible in $B[f^{-1}]$ and remains so in the homomorphic image $B[f^{-1}]/m$. But the image of $f$ is precisely $f(t_1,\dots,t_n)$.

Question 1 The point of $(*)$ is that there is one distinguished point $(t_1,\dots,t_n)$ in $B[f^{-1}]/m\times\cdots\times B[f^{-1}]/m\simeq k^n$ that kills all the polynomials in $J$. Put another way, $$\text{the point of } (*) \text{ is that } (t_1,\dots,t_n)\in V(J),$$ and thus all the polynomials in $I(V(J))$ must be killed by it. By $(**)$, $f$ does not take this point to $0$, and so $$\text{the point of } (**) \text{ is that } f\notin I(V(J)).$$ By now the authors have shown that $f\notin r(J)\Rightarrow f\notin I(V(J))$, i.e. $k[x_1,\dots,x_n]\setminus r(J)\subset k[x_1,\dots,x_n]\setminus I(V(J))$, i.e. $$I(V(J))\subset r(J).$$
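Incidentally, adjoining $\overline{f}^{-1}$ is essentially the Rabinowitsch trick: $f \in r(J)$ iff $1$ lies in the ideal generated by $J$ and $1 - yf$ in $k[x_1,\dots,x_n,y]$, and that membership can be tested with a Gröbner basis. A minimal sketch in Python using sympy (my addition; the helper `in_radical` and the example ideals are chosen for illustration):

```python
from sympy import symbols, groebner

x, y = symbols('x y')

def in_radical(f, gens, *variables):
    """Rabinowitsch trick: f lies in the radical of the ideal
    generated by gens iff the ideal generated by gens together
    with 1 - y*f is the unit ideal, i.e. iff its reduced
    Groebner basis is [1]."""
    G = groebner(list(gens) + [1 - y * f], *variables, y, order='lex')
    return G.exprs == [1]

# x lies in r((x^2)) since x^2 is in the ideal:
print(in_radical(x, [x**2], x))        # True
# x does not lie in r((x^2 - x)): x does not vanish at the root x = 1:
print(in_radical(x, [x**2 - x], x))    # False
```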

  • 0
    But one can prove it directly (without contradiction) and then one does not need (*). Right? (I'd written this in a comment to the question but I made some unfortunate typos)
  • 1
    I haven't used contradiction this time; I think both points are essential. Without the first point $(*)$, I wouldn't know how to deduce the second one...
  • 0
    I think I was confused because I thought that $x_i$ are variables (namely, they take values in $k$). But they are elements of the ring $k[x_1, \dots, x_n]$ and hence the image of $(x_1, \dots , x_n)$ is one point in $k^n$. Does that make sense?
  • 1
    Absolutely, the $x_i$ are polynomials! And the fact that $f$ is sent to $f(t_1,\dots,t_n)$ depends on this! Indeed, $f$ is an algebraic expression in the elements $x_1,\dots,x_n$, maybe $x_1^4(x_2-5x_3^{17})+1-x_7^3$, and thus its image under the ring homomorphism $B[f^{-1}]\rightarrow B[f^{-1}]/m$ is the corresponding algebraic expression in the images of $x_1,\dots,x_n$, i.e. the $t_1,\dots,t_n\in (B[f^{-1}]/m\simeq)k$, so with our example $t_1^4(t_2-5t_3^{17})+1-t_7^3$.
  • 0
    Dear Olivier, thank you for your patience, I think I finally understand it.
  • 1
    It's always a pleasure :D