I would like to minimize a function $f({\mathbf x})$, subject to the constraint that each element of ${\mathbf x}$ is nonnegative. In surveying the literature I see many complicated methods: exterior penalty, barrier functions, etc. However, there seems to be a simple solution: replace the objective function with one that squares each argument, i.e. optimize $f({\mathbf y}^2)$ over unconstrained ${\mathbf y}$ instead. Is there a well-known name for this technique and why isn't it mentioned more often?
Nonnegative nonlinear optimization by squaring
1 Answer
Consider the following inequality-constrained (convex) quadratic program
$$\begin{array}{ll} \text{minimize} & (x_1 - 1)^2 + (x_2 - 2)^2\\ \text{subject to} & x_1, x_2 \geq 0\end{array}$$
Substituting $x_i = y_i^2$, we obtain the unconstrained quartic optimization problem
$$\text{minimize} \quad (y_1^2 - 1)^2 + (y_2^2 - 2)^2$$
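To see how the reformulation behaves in practice, here is a minimal sketch (not from the original answer; step size, iteration count, and starting points are chosen purely for illustration) running plain gradient descent on this quartic. A generic start reaches a global minimizer, but a start on the $y_2 = 0$ axis stalls at a saddle point:

```python
import math

# Unconstrained quartic after the substitution x_i = y_i^2:
#   g(y1, y2) = (y1^2 - 1)^2 + (y2^2 - 2)^2
def grad(y1, y2):
    """Gradient of the quartic objective."""
    return 4 * y1 * (y1 * y1 - 1), 4 * y2 * (y2 * y2 - 2)

def descend(y1, y2, lr=0.01, steps=20000):
    """Plain gradient descent from a given starting point."""
    for _ in range(steps):
        g1, g2 = grad(y1, y2)
        y1, y2 = y1 - lr * g1, y2 - lr * g2
    return y1, y2

# A generic start converges to a minimizer with (y1^2, y2^2) = (1, 2).
y1, y2 = descend(0.5, 0.5)

# Starting exactly on the y2 = 0 axis, the gradient in y2 is always zero,
# so descent stalls at the saddle (1, 0), i.e. x = (1, 0), which is not optimal.
s1, s2 = descend(0.5, 0.0)
```

This illustrates the practical risk of the squaring trick: a local method applied to the reformulated problem can terminate at a point that is not even a local solution of the original constrained problem.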
Unfortunately, this quartic objective function is non-convex and has several local minima.
Visual inspection of the plot above tells us that, in total, there are $3^2 = 9$ critical points: $4$ local minima, $1$ local maximum and $4$ saddle points. Peeking from "below", we have the tooth-like plot
Taking the partial derivatives and finding where they vanish,
$$y_1 (y_1^2 - 1) = 0 \qquad \qquad \qquad y_2 (y_2^2 - 2) = 0$$
we obtain the $3^2 = 9$ critical points
$$\left( y_1 = 0 \lor y_1 = \pm 1 \right) \land \left( y_2 = 0 \lor y_2 = \pm \sqrt 2 \right)$$
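Since the quartic is separable, its Hessian is diagonal, with entries $12 y_1^2 - 4$ and $12 y_2^2 - 8$, so each critical point can be classified by the signs of these two numbers. A short check (an illustrative sketch, not part of the original answer) confirms the count of $4$ minima, $1$ maximum, and $4$ saddle points:

```python
import itertools
import math
from collections import Counter

def classify(y1, y2):
    """Second-derivative test using the diagonal Hessian of the quartic."""
    h1 = 12 * y1 * y1 - 4   # d^2/dy1^2 of (y1^2 - 1)^2
    h2 = 12 * y2 * y2 - 8   # d^2/dy2^2 of (y2^2 - 2)^2
    if h1 > 0 and h2 > 0:
        return "minimum"
    if h1 < 0 and h2 < 0:
        return "maximum"
    return "saddle"

# All 3 x 3 = 9 critical points from y1(y1^2 - 1) = 0, y2(y2^2 - 2) = 0
critical = list(itertools.product([0.0, 1.0, -1.0],
                                  [0.0, math.sqrt(2), -math.sqrt(2)]))
counts = Counter(classify(y1, y2) for y1, y2 in critical)
```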
Squaring, the $9$ critical points collapse to $2^2 = 4$ candidate points
$$\left( x_1 = 0 \lor x_1 = 1 \right) \land \left( x_2 = 0 \lor x_2 = 2 \right)$$
which are plotted below with the quadratic objective function (on the nonnegative quadrant)
In this case, the objective function is a strictly convex quadratic written as a separable sum of squares with no bilinear terms. Hence its unique unconstrained minimizer is $(1,2)$, which happens to lie in the feasible region and therefore also solves the constrained problem.
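As a final sanity check (a small sketch, not part of the original answer), one can evaluate the original objective at the four candidate points and confirm that $(1,2)$ attains the smallest value:

```python
# Original constrained objective and the four squared critical points
def f(x1, x2):
    return (x1 - 1) ** 2 + (x2 - 2) ** 2

candidates = [(0, 0), (0, 2), (1, 0), (1, 2)]
values = {p: f(*p) for p in candidates}
best = min(candidates, key=lambda p: f(*p))
```

Only $(1,2)$ attains the value $0$; the other three candidates correspond to saddle points or axis-constrained minima of the quartic.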


