
Hope you are well.

I am working on a quadratic optimization problem (see below). Of the 4 variables, only 2 have a non-positivity constraint. Am I correct to say that gradient descent is my best option? (My suggested approach is described below the problem statement.)

EDIT: Change in the problem: of the 4 variables, 2 have a non-positivity constraint ($x_1$ and $x_2$); in addition, all variables are physically limited (box constraints $l \le x_i \le u$, $i = 1,\dots,4$). Note that for some values of $w_i$, $i = 1,\dots,6$, the Hessian can become indefinite (one eigenvalue changes sign). A convex QP solver is therefore not robust enough by itself, hence my suggestion of gradient descent (multiple random initializations, keeping the lowest result).

EDIT: Original question

For some values of $w_i$, $i = 1,\dots,6$, the Hessian becomes non-(semi)definite, with eigenvalues that are zero or negative. This would imply that the objective becomes non-convex (right?). Given only one extremum, would this imply a concave function, or a saddle point? I am not sure how to tell the difference from the eigenvalues of the Hessian.

Hence in these cases the optimization problem could have no solution, correct? Or is there one anyway? (Since it is primarily the first eigenvalue that is the issue, I get confused.)


$\min_{x_1,\dots,x_4} \quad J = J_1 + J_2 + J_3 + J_4$

where,

$J_1 = w_1 (A-x_1 -x_2)^2 $

$J_2 = w_2 (B - x_3 - x_4)^2$

$J_3 = w_3 (C - [dx_1 - dx_2 + ex_3 -fx_4])^2$

$J_4 = w_4 x_1^2 + w_4x_2^2 + w_5 x_3^2 + w_6 x_4^2$

s.t.

EDIT: introduced box constraint.

$L \le x_i \le U$, $i = 1,\dots,4$ (box constraint)

with the special case that:

$x_1 \le 0$ and $x_2 \le 0$
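Since the objective is quadratic, its Hessian is constant, so whether a given choice of $w_i$ makes the problem non-convex can be checked directly. Below is a minimal pure-Python sketch using Sylvester's criterion; all numeric values (including the negative $w_4$ used to trigger indefiniteness) are illustrative assumptions, not data from the actual problem:

```python
# Definiteness check for the (constant) Hessian of J. For this quadratic,
# H = 2*w1*a*a' + 2*w2*b*b' + 2*w3*c*c' + 2*diag(w4, w4, w5, w6),
# where a, b, c are the gradients of the three residual terms.

def hessian(w1, w2, w3, w4, w5, w6, d, e, f):
    a = [1.0, 1.0, 0.0, 0.0]   # gradient of (A - x1 - x2), up to sign
    b = [0.0, 0.0, 1.0, 1.0]   # gradient of (B - x3 - x4), up to sign
    c = [d, -d, e, -f]         # gradient of (d x1 - d x2 + e x3 - f x4)
    diag = [w4, w4, w5, w6]
    return [[2.0 * (w1 * a[i] * a[j] + w2 * b[i] * b[j] + w3 * c[i] * c[j])
             + (2.0 * diag[i] if i == j else 0.0)
             for j in range(4)] for i in range(4)]

def det(M):
    # Determinant by cofactor expansion; fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_positive_definite(H):
    # Sylvester's criterion: H is positive definite iff every leading
    # principal minor is strictly positive. (Failing the test only rules
    # out positive definiteness; it does not by itself distinguish
    # semidefinite from indefinite.)
    return all(det([row[:k] for row in H[:k]]) > 0
               for k in range(1, len(H) + 1))
```

With all weights positive the Hessian is a sum of positive-semidefinite rank-one terms plus a positive diagonal, hence positive definite; flipping the sign of, say, $w_4$ (a hypothetical choice) can make it indefinite, which matches the concern in the question.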

  • 2
    You could try projected gradient methods. (2017-02-16)
  • 0
    Wouldn't this still fail for certain combinations of $w_1 - w_6$ where the Hessian has an eigenvalue of 0 (or negative)? (2017-02-17)
  • 0
    On a side note: if the Hessian's eigenvalues are (strictly) positive, the function is (strongly) convex. If they are (strictly) negative, the function is (strongly) concave. A saddle point occurs when you have eigenvalues of both positive and negative sign. (2017-02-17)
  • 0
    Thank you both for your time. In that case a variant of stochastic gradient descent (projected onto the feasible region) should do the trick. The chances of it getting stuck in a saddle point are quite slim, I suppose. I only have to specify more constraints (gradient descent over an infinite search space is impossible). (2017-02-17)
  • 0
    Introducing squared slack variables, $x_i \leq 0$ becomes $x_i + s_i^2 = 0$, or $x_i = -s_i^2$. Then substitute. One ends up with an unconstrained quartic problem in $s_1, s_2, x_3, x_4$. (2017-02-17)
  • 0
    Thank you for your time. I have updated the problem statement. It is possible for me to define more constraints (to get a more physically representative solution) to limit the search space. I still presume projected gradient methods are my best option given the non-convexity. Please tell me if I'm wrong (I did not manage to implement LOQO in the past days). In the projected gradient method, is an "active constraint" one that is (close to) being violated? (2017-02-17)
  • 0
    If you use steepest descent (starting from a feasible point) and just project the iterate onto the constraints (nearest point, which is simple in the context of box constraints), then you will have a functional (& slow) algorithm (assuming a reasonable step size). (2017-02-19)
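The approach discussed in the comments (steepest descent with projection onto the box, plus random restarts to hedge against non-convexity) can be sketched as follows. All problem data here — $A$, $B$, $C$, $d$, $e$, $f$, the weights and the bounds — are placeholder values, and the fixed step size is an assumption:

```python
import random

# Placeholder problem data; substitute your own values.
A, B, C = 1.0, 2.0, 0.5
d, e, f = 1.0, 1.0, 1.0
w1, w2, w3, w4, w5, w6 = 1.0, 1.0, 1.0, 0.1, 0.1, 0.1
L, U = -5.0, 5.0  # box constraint L <= x_i <= U

def objective(x):
    x1, x2, x3, x4 = x
    inner = C - (d * x1 - d * x2 + e * x3 - f * x4)
    return (w1 * (A - x1 - x2) ** 2 + w2 * (B - x3 - x4) ** 2
            + w3 * inner ** 2
            + w4 * x1 ** 2 + w4 * x2 ** 2 + w5 * x3 ** 2 + w6 * x4 ** 2)

def gradient(x):
    # Analytic gradient of J (the objective is smooth and quadratic).
    x1, x2, x3, x4 = x
    inner = C - (d * x1 - d * x2 + e * x3 - f * x4)
    return [-2 * w1 * (A - x1 - x2) - 2 * w3 * d * inner + 2 * w4 * x1,
            -2 * w1 * (A - x1 - x2) + 2 * w3 * d * inner + 2 * w4 * x2,
            -2 * w2 * (B - x3 - x4) - 2 * w3 * e * inner + 2 * w5 * x3,
            -2 * w2 * (B - x3 - x4) + 2 * w3 * f * inner + 2 * w6 * x4]

def project(x):
    # Nearest point in the feasible set: clip to [L, U], then enforce
    # the non-positivity of x1 and x2.
    y = [min(max(xi, L), U) for xi in x]
    y[0] = min(y[0], 0.0)
    y[1] = min(y[1], 0.0)
    return y

def projected_gradient_descent(x0, step=0.01, iters=2000):
    x = project(list(x0))
    for _ in range(iters):
        g = gradient(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)])
    return x

def multistart(n_starts=20, seed=0):
    # Multiple random initializations inside the box; keep the lowest.
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(L, U) for _ in range(4)]
        x = projected_gradient_descent(x0)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

With the illustrative (all-positive) weights above the problem is convex and every restart converges to the same point; the restarts only pay off for weight choices that make the Hessian indefinite. A fixed step size only works if it is below $2/\lambda_{\max}$ of the Hessian, so in practice a line search would be more robust.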

0 Answers