
Let $f:[0,\infty)^n\to \mathbb{R}$ be a function that is continuously differentiable in the interior $(0,\infty)^n$ and satisfies $\frac{\partial}{\partial x_j}f(\textbf{x})\to -\infty$ as $x_j\to 0^+$ for $j=1,\dots,n$.

Can it be shown rigorously that when this function is minimized over a set determined by a linear equation, say $\{\textbf{x}=(x_1,\dots,x_n):\sum_j a_j x_j=b,\ x_j\ge 0\}$, the minimizer has no $0$ entry in any position where the constraint set allows a nonzero entry?

Thanks.

  • 1
    The assumptions are contradictory: A continuously differentiable function would be differentiable at $x_j=0$, with $$ \left[\frac{\partial}{\partial x_j}f(\textbf{x})\right]_{x_j=0}=\lim_{x_j\to0^+}\frac{\partial}{\partial x_j}f(\textbf{x})\;, $$ but this limit doesn't exist by assumption. (2012-09-20)
  • 0
    @joriki: Yes, you are right. I should have said "continuously differentiable in the interior and that $\lim_{x_j\to0^+}\frac{\partial}{\partial x_j}f(\textbf{x})=-\infty$". I will edit accordingly. Thank you. (2012-09-20)

1 Answer


This is false. A counterexample is given by

$$f(x,y)=\sqrt[4]{(x-1)^2+y^2}-1.1\sqrt y$$

with the linear constraint $x+y=1$. The partial derivative with respect to $y$ is

$$ \frac{\partial f}{\partial y} = \frac y{2\left((x-1)^2+y^2\right)^{3/4}}-\frac{1.1}{2\sqrt y}\;, $$

which goes to $-\infty$ as $y\to0$ for fixed $x$; in particular, at $x=1$ it is $\frac y{2y^{3/2}}-\frac{1.1}{2\sqrt y}=\frac{1-1.1}{2\sqrt y}\to-\infty$. On the line $x+y=1$, we have $x=1-y$ and

$$ f(1-y,y)=\sqrt[4]{y^2+y^2}-1.1\sqrt y=\left(\sqrt[4]2-1.1\right)\sqrt y\;, $$

which is minimal for $y=0$.
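
For a quick numerical sanity check of this counterexample, here is a minimal Python sketch; the helper names `f` and `df_dy` are ad hoc, and the output merely corroborates the closed-form computations above:

```python
import numpy as np

# Counterexample from above: f(x, y) = ((x-1)^2 + y^2)^(1/4) - 1.1*sqrt(y),
# minimized over the constraint line x + y = 1 with x, y >= 0.

def f(x, y):
    return ((x - 1.0) ** 2 + y ** 2) ** 0.25 - 1.1 * np.sqrt(y)

def df_dy(x, y):
    # Partial derivative of f with respect to y, valid for y > 0.
    return y / (2.0 * ((x - 1.0) ** 2 + y ** 2) ** 0.75) - 1.1 / (2.0 * np.sqrt(y))

ys = np.array([1e-8, 1e-4, 1e-2, 0.1, 0.5, 1.0])

# Along the constraint, f(1-y, y) = (2^(1/4) - 1.1) * sqrt(y) > 0 for y > 0,
# so the constrained minimum sits at the boundary point (1, 0):
print(f(1.0 - ys, ys))

# Yet at x = 1 the y-derivative equals (1 - 1.1)/(2*sqrt(y)) -> -infinity:
print(df_dy(1.0, ys))
```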

  • 0
    Yes, your counterexample disproves my claim. Can you say something about what additional condition on $f$ would make my claim true? One thing I forgot to mention is that my $f$ is non-negative and quasi-convex. I don't know whether these are relevant. (2012-09-20)
  • 0
    I doubt that these two are relevant -- I think my function is quasi-convex (at least some plots by Wolfram|Alpha suggest that it is), and it could easily be made non-negative without destroying the minimum at $y=0$ (though I don't know if this could be done while keeping it quasi-convex). I think the condition that the partial derivatives parallel to the boundary must be finite should suffice; my counterexample uses an infinite partial derivative with respect to $x$ to counter the infinite partial derivative with respect to $y$. (2012-09-20)
  • 0
    I couldn't understand your last statement: "I think the condition that the partial derivatives parallel to the boundary must be finite should suffice; my counterexample uses an infinite partial derivative with respect to $x$ to counter the infinite partial derivative with respect to $y$." Could you please elaborate? (2012-09-20)
  • 0
    @Kumara: In my counterexample, the partial derivative with respect to $x$ diverges at $(1,0)$ (see the numerical sketch after these comments). This is what allows the function value to increase along the line $x+y=1$ even though it decreases infinitely fast along the line $x=1$. If the partial derivative with respect to $x$ at that point were finite, the infinite decrease with respect to $y$ would "win". (2012-09-20)
  • 0
    I am still not very clear about this. Could you please state the sufficient conditions for the minimizer to be in the relative interior of the set? (2012-10-19)
  • 0
    @Kumara: I haven't proved any sufficient conditions, but my intuition would be that the minimum is in the interior if the partial derivatives parallel to the boundary have a finite limit as the boundary is approached anywhere. (2012-10-19)
  • 0
    One final question. I want whatever you say in mathematical terms. When you say "the partial derivatives parallel to the boundary have a finite limit as the boundary is approached anywhere", do you mean that for each $j$, $\frac{\partial}{\partial x_j}f(\textbf{x})$ tends to a finite value when all other $x_i\to 0^+$, $i\neq j$? In addition to this, we also want $\frac{\partial}{\partial x_j}f(\textbf{x})\to -\infty$ as $x_j\to 0^+$ for $j=1,\dots,n$ as a sufficient condition for the minimizer to lie in the relative interior. Am I right? I am asking about your intuition only. Thank you. (2012-10-19)
  • 0
    @Kumara: That's not the boundary, those are just the coordinate axes. It's sufficient for a single $x_i$ to go to $0$ to reach the boundary. What I mean is that $\frac\partial{\partial x_j}f(\mathbf x)$ tends to a finite value when any $x_i\to0^+$, $i\neq j$, with the remaining $x_k$, $k\ne i$, kept fixed. Or perhaps more clearly the other way around: when $x_i\to0^+$ with all $x_k$ with $k\ne i$ fixed, all partial derivatives $\frac\partial{\partial x_j}f(\mathbf x)$ with $j\ne i$ tend to a finite limit. (2012-10-19)
  • 0
    Please see my edited comment. (2012-10-19)
  • 0
    @Kumara: I don't see an edited comment by you (they're marked with a little pencil symbol, like my previous comment for instance), so I assume you mean your deleted and replaced comment -- I did see that; my comment refers to it. (2012-10-19)
  • 0
    @Kumara: Actually, what I wrote won't be enough, since the derivatives along the axis directions can all be zero yet the derivative along some other direction could diverge; I think it has to be required for all directions parallel to the boundary, not just along the coordinate axes. (2012-10-19)
  • 0
    Can you please put it in mathematical terms? I'm quite confused. (2012-10-19)
  • 0
    @Kumara: You seem to be using the expression "mathematical terms" in a way I don't understand. To me, all of what I wrote is in mathematical terms. Do you mean formulas? (2012-10-19)
  • 0
    Yes, like what you wrote in the third comment from the last. (2012-10-19)
  • 0
    @Kumara: In formulas, the condition I'm intuitively proposing is that if $n$ is any unit direction vector with $n_i=0$, then the directional derivative $\frac{\partial f}{\partial n}$ should tend to a finite limit whenever $x_i\to0^+$ with all other $x_j$, $j\ne i$ kept fixed. (2012-10-19)
  • 0
    Thank you. I will ponder this and try to prove it if possible. Thank you once again. (2012-10-19)
  • 0
    @Kumara: You're welcome. (2012-10-19)
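
The divergence described in the comments (the partial derivative with respect to $x$ blowing up at $(1,0)$) can also be checked numerically. Here is a minimal Python sketch under the same setup; the helper name `df_dx` is ad hoc and simply transcribes $\frac{\partial f}{\partial x}=\frac{x-1}{2\left((x-1)^2+y^2\right)^{3/4}}$:

```python
import numpy as np

# Approach (1, 0) along the constraint line x + y = 1 and watch df/dx diverge.

def df_dx(x, y):
    return (x - 1.0) / (2.0 * ((x - 1.0) ** 2 + y ** 2) ** 0.75)

ys = np.array([1e-8, 1e-4, 1e-2, 0.1])

# On x = 1 - y this simplifies to -1/(2^(7/4) * sqrt(y)) -> -infinity as
# y -> 0+, the infinite derivative "parallel to the boundary" that counters
# the infinite decrease in y:
print(df_dx(1.0 - ys, ys))
```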