
I am trying to minimize a convex objective numerically using gradient descent, selecting the starting point at random. I repeat the experiment multiple times. The optimal objective value I get each time is nearly the same, but the minimizer varies considerably. Is this natural? How should it be handled in experiments?

1 Answer


Yes, this is entirely possible. Convexity only guarantees that any local minimum is a global minimum; it does not guarantee that the minimizer is unique. Many points can attain the same global minimum value. Intuitively, think of a function with a "flat" bottom, somewhat like a bathtub: two points can be far apart and both be minimizers.
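For concreteness, here is a minimal sketch (an illustration of the flat-bottom idea, not the asker's actual objective) running plain gradient descent on the convex quadratic $f(x,y) = y^2$: every random start reaches the same objective value, but a different minimizer.

```python
import numpy as np

# f(x, y) = y^2 is convex, and every point on the x-axis is a global
# minimizer, so the "bottom" is flat in the x-direction.
def f(v):
    return v[1] ** 2

def grad_f(v):
    return np.array([0.0, 2.0 * v[1]])

def gradient_descent(v0, step=0.1, iters=1000):
    v = v0.copy()
    for _ in range(iters):
        v = v - step * grad_f(v)
    return v

rng = np.random.default_rng(0)
for _ in range(3):
    v0 = rng.uniform(-10.0, 10.0, size=2)
    v = gradient_descent(v0)
    print(f"minimizer = {v}, objective = {f(v):.3e}")
# Each run prints objective ~ 0, but the x-coordinate of the "minimizer"
# stays wherever the random start put it: many distinct minimizers.
```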

  • But the objective I am minimizing is quadratic. Should it not be less flat than that? (2012-10-10)
  • @user25004: Consider even a low-dimensional linear program whose feasible set is a polygon; intuitively, the objective can still have a "flat" bottom along an entire edge. (2012-10-10)
  • @user25004, is your quadratic positive definite or merely positive semidefinite? Consider minimizing $f(x,y) = y^2$ over $(x,y)\in\mathbb R^2$. (Or even $f(x,y) = 10^{-6}x^2 + y^2$, given the slow convergence of gradient descent; see the sketch after this thread.) (2012-10-10)
  • @Rahul Narain: It is $AX + AXBX$, where $A$ and $B$ are positive semidefinite. The minimization is over the matrix $X$. (2012-10-10)
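To illustrate the slow-convergence point raised in the comment above, here is a hedged sketch (the step size and iteration budget are arbitrary choices, not tuned values) on the strictly convex but ill-conditioned $f(x,y) = 10^{-6}x^2 + y^2$: the unique minimizer is the origin, yet gradient descent barely moves along $x$, so different starts report visibly different "minimizers" with nearly identical objective values.

```python
import numpy as np

# f(x, y) = 1e-6 * x^2 + y^2 has the unique minimizer (0, 0), but the
# curvature along x is a million times smaller than along y.
def f(v):
    return 1e-6 * v[0] ** 2 + v[1] ** 2

def grad_f(v):
    return np.array([2e-6 * v[0], 2.0 * v[1]])

def gradient_descent(v0, step=0.4, iters=5000):
    v = v0.copy()
    for _ in range(iters):
        v = v - step * grad_f(v)
    return v

rng = np.random.default_rng(1)
for _ in range(3):
    v0 = rng.uniform(-10.0, 10.0, size=2)
    v = gradient_descent(v0)
    print(f"iterate = {v}, objective = {f(v):.3e}")
# The y-coordinate collapses quickly (per-step factor 0.2), while x
# shrinks by only 1 - 8e-7 per step, so the runs stop at different
# points even though every reported objective is tiny.
```

So even when the minimizer is mathematically unique, slow convergence can make repeated runs look as though the problem has many minimizers.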