Suppose there is a function $f: X \rightarrow \mathbb{R}$, where $X \subseteq \mathbb{R}^n$.
If $x^*$ is a local minimizer of $f$ over $X$, must one of the following two cases hold:
- $f$ is differentiable at $x^*$ and $x^*$ is a stationary point, i.e. $\nabla f(x^*) = 0$, or
- $f$ is not differentiable at $x^*$?
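To make the two cases concrete, here is a minimal numerical sketch I put together (the example functions $x^2$ and $|x|$ are my own illustration, not taken from any source): the first has a stationary minimizer at $0$, while the second is minimized at a point of non-differentiability.

```python
def num_deriv(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f1 = lambda x: x**2     # case 1: differentiable, stationary minimizer at 0
f2 = lambda x: abs(x)   # case 2: minimizer at a non-differentiable point

print(num_deriv(f1, 0.0))  # ~0: the gradient vanishes at the minimizer

# One-sided difference quotients of |x| at 0 disagree (+1 vs -1),
# so f2 has no derivative at its minimizer.
h = 1e-6
print((f2(h) - f2(0.0)) / h)   # ~ +1 (right derivative)
print((f2(0.0) - f2(-h)) / h)  # ~ -1 (left derivative)
```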
In other words, are there any other possible cases for a local minimizer besides these two? I seem to recall seeing this stated as a conclusion somewhere, but I cannot find the source. However, the following proposition seems to challenge it.
From p. 194 of *Nonlinear Programming* by Dimitri P. Bertsekas:
Let $X$ be a convex set.
Proposition 2.1.2 (Optimality Condition):
(a) If $x^*$ is a local minimum of $f$ over $X$, then $\nabla f(x^*)'(x - x^*) \geq 0$ for all $x \in X$.
(b) If $f$ is convex over $X$, then the necessary condition of part (a) is also sufficient for $x^*$ to minimize $f$ over $X$.
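To see the tension concretely, consider a toy instance (my own example, not from the book): $f(x) = x$ on $X = [0, 1]$. The minimizer is $x^* = 0$, a boundary point where $f'(0) = 1 \neq 0$, yet $f'(0)(x - 0) = x \geq 0$ for every $x \in [0, 1]$, exactly as part (a) requires. A quick numerical check:

```python
import numpy as np

# Check Proposition 2.1.2(a) on a toy problem (my own example):
# minimize f(x) = x over the convex set X = [0, 1].
# The minimizer x* = 0 lies on the boundary of X and f'(x*) = 1 != 0,
# yet the variational inequality f'(x*) * (x - x*) >= 0 holds on all of X.

grad_f = lambda x: 1.0   # gradient of f(x) = x is constant
x_star = 0.0

X_grid = np.linspace(0.0, 1.0, 101)        # sample points covering X = [0, 1]
vals = grad_f(x_star) * (X_grid - x_star)  # f'(x*) (x - x*) at each sample
print(vals.min() >= 0)                     # True: part (a) is satisfied
```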
If the conclusion in the first part is true, then whenever $f$ is differentiable at $x^*$ we would have $\nabla f(x^*) = 0$, so the inequality in part (a) would hold trivially with equality. Under this logic, I am confused about why the book states proposition (a) in this form.
Also, how should the proposition be interpreted geometrically?
Thanks and regards!