
Given the polynomials $p_1(x_1, \ldots, x_n), \ldots, p_k(x_1, \ldots, x_n)$, let $\Lambda$ be the set of $x \in \mathbb{R}^n$ where all of these polynomials are nonnegative. Does there exist a polynomial $q(x_1, \ldots, x_n)$ such that $\Lambda$ is the set of points where $q(x_1, \ldots, x_n)$ is nonnegative, i.e., $\Lambda = \{ x ~|~ q(x) \geq 0\}$?

  • 1
    Let $n=2$, let $p_1=x_1,p_2=x_2$. Is there a polynomial that is non-negative precisely on the (closed) first quadrant?2011-09-08

1 Answer


Not necessarily.

Consider the special case $p_1(x,y)=x$, $p_2(x,y)=y$, so that $\Lambda$ is the closed first quadrant. Then $q(x,y)$ should be nonnegative exactly when $x\ge 0$ and $y\ge 0$. For $x\ge 0$ the point $(x,0)$ lies in $\Lambda$, so $q(x,0)\ge 0$; but the points $(x,-\varepsilon)$ just below it lie outside $\Lambda$, where $q<0$, so by continuity $q(x,0)\le 0$ as well. Hence $q(x,0)=0$ for all $x\ge 0$.

Now consider the function $x\mapsto q(x,0)$. This is a polynomial in one variable (it arises by setting $y=0$, i.e., dropping all terms with a positive power of $y$). We've concluded that it vanishes identically for $x\ge 0$. But then it must be the zero polynomial, because a nonzero polynomial in one variable cannot have more zeroes than its degree.
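The key fact in the last step can be checked numerically. The following sketch (my own illustration, not part of the answer) uses NumPy: the coefficients of a polynomial of degree at most $d$ that vanishes at $d+1$ distinct points satisfy a Vandermonde linear system, and the Vandermonde matrix of distinct points is invertible, so all coefficients must be zero.

```python
import numpy as np

# A nonzero univariate polynomial of degree <= d has at most d roots.
# Equivalently: if it vanishes at d+1 distinct points, its coefficient
# vector c solves V c = 0, where V is the Vandermonde matrix of those
# points. V is invertible for distinct points, forcing c = 0.

d = 4
points = np.arange(1.0, d + 2)   # d+1 distinct points: 1, 2, ..., 5
V = np.vander(points, d + 1)     # (d+1) x (d+1) Vandermonde matrix

c = np.linalg.solve(V, np.zeros(d + 1))
print(np.linalg.det(V) != 0)     # True: V is invertible
print(np.allclose(c, 0))         # True: only the zero polynomial fits
```

Since $q(x,0)$ vanishes for *all* $x\ge 0$, it has infinitely many roots, so the same conclusion applies with room to spare.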

So in particular, for example, $q(-1,0)=0$. But that contradicts the assumption that $q(x,y)\ge 0$ holds only on the closed first quadrant.
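As a quick sanity check (my own illustration, not part of the answer), the natural candidate $q(x,y)=xy$ for this example already shows why no single polynomial works here: it is nonnegative on the first quadrant, but also on the third.

```python
# Hypothetical candidate q(x, y) = x*y for the first-quadrant example:
# {q >= 0} contains the closed first quadrant, but also the entire
# third quadrant, so it is strictly larger than the target set.
def q(x, y):
    return x * y

print(q(1.0, 2.0) >= 0)     # first quadrant: True, as required
print(q(-1.0, -2.0) >= 0)   # third quadrant: also True, so q fails
print(q(-1.0, 2.0) >= 0)    # second quadrant: False
```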

  • 0
    The fact that $q(x,0) = 0$ does not in itself imply that the entire $x$-axis is in the boundary of $\{(x,y): q(x,y) \ge 0\}$: maybe $q(x,y) < 0$ on both sides of $(x,0)$ if $x < 0$. To rule this out, you have to look at some partial derivatives. Let $k$ be the least positive integer such that $\dfrac{\partial^k q}{\partial y^k}(x,0) \ne 0$ for almost all $x$. Since $q(x,y)$ changes sign as you cross the $x$-axis for $x > 0$, $k$ must be odd. But since $q(x,y)$ does not change sign as you cross the $x$-axis for $x < 0$, $k$ must be even, contradiction!2011-09-08
  • 1
    I don't need the entire x axis to be on the _boundary_ specifically. Just plainly that if, say, $q(-1,0)=0$, then _ipso facto_ $(-1,0) \in \{(x,y)\mid q(x,y)\ge 0\}$, contradicting the assumption that this was the case only for the closed first quadrant.2011-09-08
  • 0
    Thank you very much for your answer. However, I wonder if you could explain the proof to me at my level (I am someone who does not know any algebra). You seem to be using the fact that if $a(x)$ and $b(x)$ are polynomials with infinitely many zeros in common, one must divide the other. Might I trouble you for a pointer to the proof?2011-09-08
  • 0
    Ah, of course. Silly of me.2011-09-08
  • 0
    @robinson, the infinitely-many-common-zeroes property holds only if the smaller polynomial is _irreducible_, that is, if it is not the product of two nonconstant polynomials. (For example $x^2y$ and $xy^2$ have infinitely many common zeroes, but neither is a factor of the other, and neither is irreducible). Such things are studied in _algebraic geometry_. I'll see if I can think up an elementary proof, but don't hold your breath.2011-09-08
  • 0
    I have replaced the last half of my argument with a more elementary one. (It used to appeal to general facts about divisibility of multivariate polynomials).2011-09-08
  • 1
    My comment would be relevant if you were talking about $\{(x,y): q(x,y) > 0\}$.2011-09-08