
This question is related to this one. Given two functions $f,g\colon \mathbb{R}^n \to \mathbb{R}$ such that $f(x_0) \geq g(x_0)$ at a certain point $x_0 \in \mathbb{R}^n$, and given an open subset $A \subseteq \mathbb{R}^n$ containing $x_0$, under what conditions on the partial derivatives of $f$ and $g$ on $A$ can I ensure that $f(x) \geq g(x)$ for every $x \in A$? When is this inequality strict?

This question came up while I was trying to prove that $x^2 + y^2 > xy$ for all $(x,y) \in \mathbb{R}^2$ with $(x,y) \neq (0,0)$. It seemed visually obvious that the function $x^2 + y^2$ grows faster than $xy$, so the two could never meet again after the origin, but I had difficulty formalizing this thought.

Also, is there a name for this result/test/condition, even if only on the one variable case?

  • 0
My roommate just pointed out: the case where $xy < 0$ is trivial. For the other one, wlog suppose $|x| \geq |y|$. Then $x^2 \geq xy$, from which the inequality follows. I'm still curious about the derivative test, though. (2017-02-09)
  • 0
Can't you simply use the AM–GM inequality? Or, more simply, $(x-y)^2$ is non-negative. (2017-06-14)
  • 0
@GautamShenoy That would work for part 2, but doesn't answer part 1 :( (2017-06-14)
  • 1
For part 2, note that $$x^2+y^2=\frac{x^2+y^2}2+\frac{(x-y)^2}2+xy\ge xy$$ (2017-06-16)

2 Answers

5

You can do something if $h(x,y)=f(x,y)-g(x,y)$ has a minimum on $A$. You have to consider partial derivatives up to second order. Then you run the "second partial derivative test" (https://en.wikipedia.org/wiki/Second_partial_derivative_test) on $h(x,y)$ at a critical point $x_0$: if the test indicates a minimum, and that minimum is global on $A$ with $h(x_0)\ge 0$, then $h(x,y)=f(x,y)-g(x,y)\ge 0$ in $A$.

In your example

$h(x,y)=x^2+y^2-xy$,

$h_x(0,0)=0=h_y(0,0)$,

$D(x,y)=h_{xx}(0,0)h_{yy}(0,0)-h_{xy}(0,0)^2=3>0$ and

$h_{xx}(0,0)=2>0$,

so, according to the test, $h$ has a local minimum at $(0,0)$. Since $h(0,0)=0$ and $h$ is a positive definite quadratic form, this minimum is in fact global, so $h\ge 0$ on all of $\mathbb{R}^2$.
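As a sanity check, the quantities in the test can be reproduced numerically with central finite differences (a minimal sketch; the step size and the grid of test points are arbitrary choices of mine, not part of the answer):

```python
# Numeric check of the second partial derivative test for
# h(x, y) = x^2 + y^2 - x*y at the critical point (0, 0).

def h(x, y):
    return x * x + y * y - x * y

eps = 1e-4  # finite-difference step (arbitrary small value)

# Central second differences at (0, 0).
h_xx = (h(eps, 0) - 2 * h(0, 0) + h(-eps, 0)) / eps**2
h_yy = (h(0, eps) - 2 * h(0, 0) + h(0, -eps)) / eps**2
h_xy = (h(eps, eps) - h(eps, -eps) - h(-eps, eps) + h(-eps, -eps)) / (4 * eps**2)

# Discriminant of the Hessian; expect D ~ 3 > 0 and h_xx ~ 2 > 0.
D = h_xx * h_yy - h_xy**2
print(round(D, 6), round(h_xx, 6))

# h should be non-negative on a small grid around the origin.
assert all(h(0.1 * i, 0.1 * j) >= 0
           for i in range(-10, 11) for j in range(-10, 11))
```

This only confirms the computed values $D=3$ and $h_{xx}=2$; the conclusion $h\ge 0$ still rests on the argument above.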

  • 1
That's true, but I was thinking more along the lines of a generalization of "if $f(x_0)\ge g(x_0)$ and $f'(x>x_0)>g'(x>x_0)$, $f'(x\ldots$ [comment truncated] (2017-06-14)
  • 1
That would be $D_{(x,y)-(x_0,y_0)}f(x,y)>D_{(x,y)-(x_0,y_0)}g(x,y)$ where $D_vf(x,y)=\nabla f(x,y)\cdot v$. But it is a lot messier. Moreover, in the example you obtain $D_{(x,y)-(x_0,y_0)}f(x,y)=2x^2+2y^2$ and $D_{(x,y)-(x_0,y_0)}g(x,y)=2xy$, so you are back at the original inequality. (2017-06-14)
1

While this dodges your actual question, here is an argument proving that $x^2+y^2\ge xy$.

We know that for non-negative $x$ and $y$, the root mean square is greater than or equal to the geometric mean: $$\sqrt{\frac{x^2+y^2}{2}} \ge \sqrt{xy}$$ Squaring and clearing the denominator gives $$x^2+y^2\ge 2xy > xy,$$ where the last inequality holds when $xy > 0$, e.g. when $x$ and $y$ are positive.

To give a small idea for your question, in one variable: if $f$ is bigger than $g$ at a point and the derivative of $f$ is bigger than the derivative of $g$ everywhere (actually, after that point is enough), then $f$ is bigger than $g$ everywhere after that point.
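The one-variable claim above can be made precise with the mean value theorem applied to $h = f - g$ (a standard argument, written out here for completeness): suppose $h(x_0) \ge 0$ and $h'(t) > 0$ for all $t > x_0$. Then for any $x > x_0$ there is some $\xi \in (x_0, x)$ with $$h(x) = h(x_0) + h'(\xi)(x - x_0) > 0,$$ and hence $f(x) > g(x)$ for every $x > x_0$.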

  • 0
Are you sure about your last statement? Let $f(x)=x$, $g(x)=x/2$; then $f'>g'$ and $f(1)>g(1)$, but $f(x<0)\ldots$ [comment truncated] (2017-06-14)
  • 0
Because you started comparing them at $1$ and ensured that $f$ is bigger there, but at the end you compared them for numbers smaller than $1$. Notice the last words: everywhere AFTER that point. By "after" I meant the right side. Sorry for the lack of clarity. (2017-06-14)