
For a general convex program, a feasible point is an optimal solution if and only if the gradient of the objective at that point is the normal to a hyperplane that supports the feasible set there. Please suggest what form this result takes in the case of an invex function, which involves a non-linear function $\eta$.

For a convex function $f$, a point $x$ is an optimal solution if and only if $\langle \nabla f(x) , y-x \rangle \geq 0$ for all feasible $y$, which expresses the fact that the gradient of the objective function is the normal to a supporting hyperplane at $x$.
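To make this concrete, here is a minimal numerical sketch; the quadratic objective, the unit-ball feasible set, and all names below are my own illustrative choices, not from the question. It checks that $\langle \nabla f(x^*), y - x^* \rangle \geq 0$ for sampled feasible points $y$ when $x^*$ is the constrained minimizer.

```python
import numpy as np

# Toy convex program (illustrative choice): minimize f(x) = ||x - c||^2
# over the unit ball.  The minimizer x* is the projection of c onto the
# ball, and the variational inequality <grad f(x*), y - x*> >= 0 should
# hold for every feasible y.
c = np.array([2.0, 2.0])

def grad_f(x):
    return 2.0 * (x - c)

x_star = c / np.linalg.norm(c)            # projection of c onto the unit ball
rng = np.random.default_rng(0)

for _ in range(1000):
    y = rng.normal(size=2)
    y = y / max(1.0, np.linalg.norm(y))   # rescale so y stays feasible
    assert grad_f(x_star) @ (y - x_star) >= -1e-9

print("variational inequality held for all sampled feasible points")
```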

For an invex function $f$, $x$ is an optimal solution if and only if $\langle \nabla f(x) , \eta(y,x) \rangle \geq 0$ for all feasible $y$, where $\eta(y,x)$ is a non-linear function.

It looks like, in the case of an invex function, the gradient of the objective at an optimal point must make a non-obtuse angle with all the directions $\eta(y,x)$?
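To see how this plays out for a non-convex but invex function, here is a small numerical sketch under the standard invexity definition $f(y) - f(x) \geq \langle \nabla f(x), \eta(y,x) \rangle$; the function $f(x) = 1 - e^{-x^2}$ and the particular $\eta$ used below are illustrative choices of mine, not canonical ones.

```python
import numpy as np

# Invex but non-convex example (illustrative choice): f(x) = 1 - exp(-x^2).
# Its only stationary point, x = 0, is the global minimum, so f is invex.
# One admissible kernel: eta(y, x) = (f(y) - f(x)) / f'(x) when f'(x) != 0,
# and 0 otherwise, which makes f(y) - f(x) >= f'(x) * eta(y, x) hold.
def f(x):      return 1.0 - np.exp(-x**2)
def fprime(x): return 2.0 * x * np.exp(-x**2)

def eta(y, x):
    g = fprime(x)
    return (f(y) - f(x)) / g if abs(g) > 1e-12 else 0.0

ys = np.linspace(-3.0, 3.0, 601)

# The invexity inequality holds everywhere for this choice of eta.
for x in np.linspace(-2.0, 2.0, 41):
    assert all(f(y) - f(x) >= fprime(x) * eta(y, x) - 1e-9 for y in ys)

# Optimality test: <f'(x), eta(y, x)> >= 0 for all y.
holds_at = lambda x: all(fprime(x) * eta(y, x) >= -1e-9 for y in ys)
print(holds_at(0.0))   # True:  x = 0 is the global minimizer
print(holds_at(1.0))   # False: x = 1 is not optimal
```

With this particular $\eta$ the test is satisfied trivially at $x = 0$ because the gradient vanishes there, which matches the unconstrained invex picture: every stationary point is a global minimizer.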

  • @Gortaur: I am looking for a geometrical interpretation of optimality conditions for differentiable invex functions, just like the one provided for differentiable convex functions above. (2011-09-26)

1 Answer


The Wikipedia article says that if the objective and constraints are invex with respect to the same $\eta(x,u)$, the Karush-Kuhn-Tucker conditions are sufficient for a global minimum. Geometrically, the Karush-Kuhn-Tucker conditions say that the negative gradient of the objective lies in the cone generated by the outward normals of the active constraints.
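As a sanity check of that geometric reading, here is a minimal sketch with a toy problem of my own choosing (not from the answer): minimize $f(x) = x_1 + x_2$ subject to $-x_1 \le 0$ and $-x_2 \le 0$. Both constraints are active at the minimizer, and $-\nabla f$ is a non-negative combination of their gradients.

```python
import numpy as np

# Toy KKT check (illustrative choice): minimize f(x) = x1 + x2 subject to
# g1(x) = -x1 <= 0 and g2(x) = -x2 <= 0.  The minimizer is x* = (0, 0),
# where both constraints are active.
grad_f = np.array([1.0, 1.0])               # gradient of the objective
grad_g = np.array([[-1.0, 0.0],             # gradient of g1 (outward normal)
                   [ 0.0, -1.0]])           # gradient of g2 (outward normal)
mu     = np.array([1.0, 1.0])               # KKT multipliers, both >= 0

# Stationarity: grad_f + sum_i mu_i * grad_g_i = 0, i.e. the negative
# gradient of the objective lies in the cone generated by the outward
# normals of the active constraints.
print(grad_f + mu @ grad_g)                 # [0. 0.]
print(np.allclose(-grad_f, mu @ grad_g))    # True
```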