I haven't understood this theorem "$x^*$ is global minimum iff $\bar 0\in \partial f(x^*)$". What does it mean? Visually?
P.S. I am studying the nonlinear optimization course 2.3139.
If $f$ is a convex function, to say that $g \in \partial f(x)$ means that \begin{equation*} f(y) \geq f(x) + \langle g, y - x \rangle \end{equation*} for all $y$.
For example, let $f(x) = |x|$. $f$ is not differentiable at $0$, but $f$ is convex, and $f$ has a subdifferential at $0$: $\partial f(0) = [-1, 1]$, the set of slopes $g$ for which the line $y \mapsto g\,y$ lies below $|y|$ everywhere.
And you can see that $0 \in \partial f(0)$ in this example.
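If you want to check this concretely, here is a small numerical sketch (my own, not from the course; the test grid and tolerance are arbitrary choices) that tests which slopes $g$ satisfy the subgradient inequality for $f(x) = |x|$ at $x = 0$:

```python
import numpy as np

# f(x) = |x| at x = 0: the subgradient inequality reads
#   f(y) >= f(0) + g * (y - 0),   i.e.   |y| >= g * y   for all y,
# which holds exactly when -1 <= g <= 1, so partial f(0) = [-1, 1].

ys = np.linspace(-5, 5, 1001)  # grid of test points (arbitrary choice)

def is_subgradient_at_zero(g):
    # Check |y| >= g * y on the grid (small tolerance for float noise).
    return bool(np.all(np.abs(ys) >= g * ys - 1e-12))

for g in [-1.5, -1.0, 0.0, 0.7, 1.0, 1.5]:
    print(f"g = {g:+.1f}: subgradient at 0? {is_subgradient_at_zero(g)}")
# Values of g in [-1, 1] pass (including g = 0); the others fail.
```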
So... note that if $f$ is convex then \begin{align*} & x \text{ is a minimizer of } f \\ \iff & f(y) \geq f(x) \, \forall \, y \\ \iff & f(y) \geq f(x) + \langle 0 , y - x \rangle \, \forall \, y \\ \iff & 0 \in \partial f (x). \end{align*}
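To see the theorem "in action", one can run the subgradient method on a nonsmooth convex function and watch the iterates settle at the point whose subdifferential contains $0$. A minimal sketch, with $f(x) = |x - 2|$ and the $1/k$ step size both chosen arbitrarily for illustration:

```python
import numpy as np

# Subgradient method on f(x) = |x - 2|: global minimizer x* = 2, where
# partial f(2) = [-1, 1] contains 0, exactly as the theorem requires.

def subgrad(x):
    # One valid element of partial f(x); at the kink x = 2, sign gives 0.
    return np.sign(x - 2.0)

x = 0.0                            # arbitrary starting point
for k in range(1, 2001):
    x -= (1.0 / k) * subgrad(x)    # diminishing step size 1/k

print(f"x ~ {x:.4f}")              # oscillates into x* = 2
```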
I can only understand this in the following form:
Suppose $f(x)$ is a convex function and suppose its Hessian matrix $\nabla^2 f$ is positive definite, i.e. its quadratic form $x^T \left(\nabla^2 f\right) x$ is positive for all $x \neq 0$. Now $x^*$ is a global minimum iff $\bar 0 \in \partial f(x^*)$.
This is understandable because, when $f$ is differentiable, the subdifferential reduces to a single point, $\partial f(x^*) = \{\nabla f(x^*)\}$, so the condition just says $\nabla f(x^*) = 0$.
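And in the smooth case this is easy to verify numerically: a sketch below (the matrix $A$ and vector $b$ are arbitrary examples) for a strictly convex quadratic $f(x) = \tfrac12 x^T A x - b^T x$, whose minimizer is the unique point where the gradient, hence the only subgradient, vanishes:

```python
import numpy as np

# Smooth convex case: f(x) = 0.5 x^T A x - b^T x with A positive definite.
# Here partial f(x) = {grad f(x)} = {A x - b}, so 0 in partial f(x*)
# means A x* - b = 0, i.e. x* = A^{-1} b is the global minimizer.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])    # symmetric with positive eigenvalues
b = np.array([1.0, 2.0])

print("eigenvalues of A:", np.linalg.eigvalsh(A))  # both > 0

x_star = np.linalg.solve(A, b)           # solve grad f(x) = 0
print("x* =", x_star)
print("grad f(x*) =", A @ x_star - b)    # numerically zero
```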
Video recommendation
There is a very good video on this topic, from 18:00 to 26:00, in the Stanford lecture here.