
I have a function $f : \,]0,+\infty[\,\times\,[0,1] \rightarrow [0,1]$. For the moment, let us say that it is smooth enough.

I am looking for a minimum of this function. What I was told to do (but I do not think it is right, though I cannot find a counterexample) is the following:

Solve $\frac{\partial f}{\partial x}(x,y) = 0$; this gives a unique $x_{\min} = h(y)$. Then study $g: y \mapsto f(h(y),y)$ and find where it attains its minimum.
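For illustration, this two-step procedure can be sketched numerically. The $f$ below is a hypothetical stand-in (not the actual function from the problem), chosen so that $\partial_y f = a - bx$ as in the edit further down, and so that it is strictly convex in $x$ for each fixed $y$ (which guarantees $h(y)$ is unique and really is a minimizer):

```python
import numpy as np

# Hypothetical example (NOT the asker's actual f), chosen so that
# df/dy = a - b*x with a = b = 1, and so that f is strictly convex
# in x for each fixed y.
a, b = 1.0, 1.0

def f(x, y):
    return (a - b * x) * y + (x - 2.0) ** 2

# Step 1: solve df/dx = 2(x - 2) - y = 0, giving the unique
# critical point h(y) = 2 + y/2 (a minimum, by convexity in x).
def h(y):
    return 2.0 + y / 2.0

# Step 2: study g(y) = f(h(y), y) on [0, 1] and locate its minimum.
ys = np.linspace(0.0, 1.0, 1001)
g = f(h(ys), ys)
i = np.argmin(g)
x_min, y_min = h(ys[i]), ys[i]
print(x_min, y_min, g[i])  # -> 2.5 1.0 -1.25
```

Here the minimum of $g$ sits at the boundary point $y=1$, consistent with $\partial_y f$ being negative along $x = h(y)$.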

My questions:

  • Is this right?
  • If yes, do you know which theorem I should look for?
  • Otherwise, do you have a counterexample (even under different hypotheses)?
  • If it is true for a sufficiently smooth function, can you tell me the minimal hypotheses needed (and a minimal counterexample)?

EDIT: I am especially interested in the minimal hypotheses that make this true, and in a minimal counterexample when those hypotheses are not met (questions 3 and 4).

Thanks


Additional information: there are $a,b>0$ such that $\frac{\partial f}{\partial y}(x,y) = a - bx$ (meaning the minimum in $y$ is necessarily attained at $y=0$ or $y=1$, depending on $x$, so here the method works).

I know of the theorem stating that we should look for every point $(x_0,y_0)$ such that $\frac{\partial f}{\partial x}(x_0,y_0) =\frac{\partial f}{\partial y}(x_0,y_0) = 0$, but in this example the condition on the second variable never holds. Do you know of another theorem valid on a compact set?

  • @Siminore, the point is that the minimum will not be in the *interior*, but at $y=0$ or $y=1$. And I do not know whether my method is the right way to study it formally. – 2012-06-25

2 Answers


How do you know that the equation $\frac{\partial f}{\partial x}(x,y) = 0$ has a unique solution for every $y$? I don't see how this follows from the assumptions.

But your additional remark $\frac{\partial f}{\partial y}(x,y) = a - bx$ gives a lot of information. Since the minimum is attained either at $y=0$ or $y=1$, you can simply investigate these two functions of variable $x$ and pick the lowest value overall. This looks much simpler than the method you were trying to use. Additionally, each function $x\mapsto f(x,0)$ and $x\mapsto f(x,1)$ only needs to be considered on a half-line (on the appropriate side of the point $a/b$).
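This boundary strategy can also be sketched numerically. The $f$ below is a hypothetical illustration (not the asker's function), built only to satisfy $\partial_y f = a - bx$; with $a = b = 1$, the minimum over $y \in [0,1]$ is at $y=0$ when $x < a/b$ and at $y=1$ when $x > a/b$:

```python
import numpy as np

# Hypothetical f (NOT the asker's actual function) with
# df/dy = a - b*x, a = b = 1.
a, b = 1.0, 1.0

def f(x, y):
    return (a - b * x) * y + (x - 2.0) ** 2

xs = np.linspace(1e-3, 10.0, 100001)
# The y = 0 branch matters only for x <= a/b, the y = 1 branch
# only for x >= a/b (the appropriate half-lines).
m0 = f(xs[xs <= a / b], 0.0).min()
m1 = f(xs[xs >= a / b], 1.0).min()
best = min(m0, m1)
print(best)
```

For this particular $f$, the $y=1$ branch wins, and `best` agrees with the value found by the nested-minimization procedure.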

  • Oops, yes indeed, this is a hidden assumption. And yes, I was going to use the method you describe if the first one made sense; it is just something the people I work with did, and I first wanted to check whether that assumption was correct (we had a huge debate over it). – 2012-06-25

First, your method will find a minimum of $f$: Let $y_0 \in [0,1]$ with $g(y_0) = \min g$. For each $(x,y) \in \mathbb R_+ \times [0,1]$ we have \[ f(x,y) \ge f\bigl(h(y), y\bigr) = g(y) \ge g(y_0). \] On the other hand, for each $y \in [0,1]$, \[ g(y) = f\bigl(h(y),y\bigr) \ge \inf f. \] So $\inf f = g(y_0)$, and $f$ has a minimum (which can be computed the way you say), provided $h(y)$ is a minimizer of $f(\cdot, y)$ for each $y$ (you require it only to be a critical point, but your procedure seems to assume that it is a minimum).

The equations $\partial_x f = \partial_y f = 0$ find the local extrema in the interior of the domain of $f$, i.e. in $(0,\infty) \times (0,1)$; afterwards you always have to look at the boundary (i.e. study $f(\cdot, 0)$ and $f(\cdot, 1)$ on $(0,\infty)$). You should find the same minimum. Formally taking the derivative of $g$, you get \[ g'(y) = \partial_x f\bigl(h(y),y\bigr)h'(y) + \partial_y f\bigl(h(y), y\bigr) = \partial_y f\bigl(h(y), y\bigr), \] since $\partial_x f\bigl(h(y),y\bigr) = 0$ by the definition of $h$. So if $\partial_y f$ does not vanish in $(0,\infty) \times (0,1)$, the minimum of $g$ will also be on the boundary, and the $g$-approach yields the same minimum.
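The chain-rule identity $g'(y) = \partial_y f\bigl(h(y), y\bigr)$ can be checked numerically on a hypothetical $f$ with $\partial_y f = a - bx$ (again an illustration, not the asker's actual function):

```python
import numpy as np

# Hypothetical f (NOT the asker's actual function) with
# df/dy = a - b*x, a = b = 1, strictly convex in x.
a, b = 1.0, 1.0

def f(x, y):
    return (a - b * x) * y + (x - 2.0) ** 2

def h(y):               # unique solution of d_x f(x, y) = 0
    return 2.0 + y / 2.0

def g(y):               # the reduced one-variable function
    return f(h(y), y)

y0, eps = 0.4, 1e-6
# Central-difference approximation of g'(y0) ...
g_prime = (g(y0 + eps) - g(y0 - eps)) / (2 * eps)
# ... versus d_y f evaluated at (h(y0), y0), which here is a - b*h(y0).
dy_f = a - b * h(y0)
print(g_prime, dy_f)    # the two values agree
```

Since $\partial_y f = 1 - x < 0$ along $x = h(y) \ge 2$, the derivative of $g$ is negative on all of $[0,1]$, so the minimum of $g$ indeed lands on the boundary.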

  • I was able to do these computations myself; however, they do not convince me, because I feel some hypotheses are missing, and I imagine there are cases where it does not work. That was my question. – 2012-06-25