I am trying to minimize a convex objective numerically using gradient descent. I select the starting point randomly and repeat the experiment multiple times. The optimal objective value I get each time is quite similar, but the minimizer is very different across runs. Is this natural? How should it be handled in experiments?
unstable optimizer, stable objective
convex-optimization
numerical-optimization
1 Answer
Yes, this is entirely natural. Convexity only guarantees that every local minimum is a global minimum; it does not guarantee the minimizer is unique. Many distinct points can attain the same global minimum. Intuitively, think of a function with a "flat" bottom, somewhat like a bathtub: two points can be far apart and yet both be minimizers.
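As a hedged illustration (not your specific matrix objective), here is a minimal NumPy sketch with the convex objective $f(x) = (a^\top x)^2$, whose minimizers form a whole hyperplane $a^\top x = 0$. Different random starting points converge to the same objective value (zero) but to different minimizers:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0])  # fixed direction; minimizers are the line a.x = 0

def f(x):
    # Convex objective with a non-unique minimizer set
    return (a @ x) ** 2

def grad(x):
    return 2 * (a @ x) * a

minimizers = []
for _ in range(3):
    x = rng.standard_normal(2)        # random starting point
    for _ in range(500):
        x -= 0.05 * grad(x)           # plain gradient descent
    minimizers.append(x.copy())

# All runs reach (essentially) the same objective value...
print([f(x) for x in minimizers])
# ...but the minimizers themselves differ, because gradient descent
# only moves along a and leaves the orthogonal component untouched.
print(minimizers)
```

In experiments, this means reporting only the objective value is stable, while comparing minimizers across runs is not meaningful unless the problem is strictly convex.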
@Rahul Narain It is $AX + AXBX$, where $A$ and $B$ are positive semidefinite. Minimization is over the matrix $X$. – 2012-10-10