
Consider the $n$-dimensional quadratically constrained quadratic optimization problem

$$\begin{array}{ll} \text{maximize} & \frac12 x^T A x + b^T x\\ \text{subject to} & \| x \|_2 \le 1\end{array}$$

where $A$ is a symmetric $n\times n$ matrix that may be indefinite. Given the symmetry of the constraint, is there a nice closed-form solution, perhaps in terms of the eigendecomposition of $A$?

1 Answer

This isn't pretty, but it's direct. We proceed by the usual method: find the critical points of the objective in the interior, then the critical points on the boundary. The maximum occurs at one of these points.

To find the critical points of the objective function, differentiate:

$$ df(x) = (1/2) (dx)^T A x + (1/2) x^T A dx + b^T dx = (dx)^T (Ax + b) $$

so (using the symmetry of $A$) the critical points are the solutions of $Ax = -b$; these are candidates only if they also satisfy $\|x\|_2 \le 1$.
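If it helps, here is a quick numerical sketch of this interior step, assuming `A` is a symmetric NumPy array and `b` a vector (the function name and tolerance are just illustrative):

```python
import numpy as np

def interior_candidates(A, b, tol=1e-10):
    """Critical points of f(x) = 0.5 x^T A x + b^T x lying inside the unit ball.

    Solves A x = -b via least squares (which also handles a singular A by
    returning the minimum-norm solution) and keeps it only if ||x||_2 <= 1.
    """
    x, *_ = np.linalg.lstsq(A, -b, rcond=None)
    # lstsq returns a least-squares "solution" even when A x = -b has none,
    # so check the residual before accepting x as a genuine critical point.
    if np.linalg.norm(A @ x + b) > tol * max(1.0, np.linalg.norm(b)):
        return []
    return [x] if np.linalg.norm(x) <= 1.0 else []
```

(When $A$ is singular but the system is consistent, $f$ is constant on the whole solution set, and the minimum-norm solution is the one most likely to lie in the ball, so returning just that one loses nothing.)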

Using Lagrange multipliers, the critical points on the boundary of your constraint, $x^T x = 1$, are the solutions $(x, \mu)$ of the system of equations

$$ (A + 2 \mu I) x = -b \qquad \qquad \qquad x^T x = 1$$

Most solutions can be obtained by solving the equation

$$ || (A + 2 \mu I)^{-1} b ||_2^2 = 1 $$

for $\mu$. However, the values of $\mu$ for which $-2\mu$ is an eigenvalue of $A$ (so that $A + 2\mu I$ is singular) must be treated separately.
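For the generic case, here is a hedged sketch of turning the displayed equation into a polynomial in $\mu$; it uses the eigendecomposition to clear denominators, and the function name and tolerance are illustrative:

```python
import numpy as np
from numpy.polynomial import Polynomial

def boundary_multipliers(A, b, imag_tol=1e-8):
    """Real mu solving ||(A + 2 mu I)^{-1} b||_2^2 = 1, via a polynomial root find.

    In an eigenbasis (A = Q diag(lam) Q^T, beta = Q^T b) the equation reads
        sum_i beta_i^2 / (lam_i + 2 mu)^2 = 1;
    clearing denominators turns it into
        prod_j (lam_j + 2 mu)^2 - sum_i beta_i^2 * prod_{j != i} (lam_j + 2 mu)^2 = 0.
    """
    lam, Q = np.linalg.eigh(A)
    beta = Q.T @ b
    factors = [Polynomial([lj, 2.0]) for lj in lam]    # lam_j + 2*mu

    p = Polynomial([1.0])
    for f in factors:
        p *= f ** 2                                    # prod_j (lam_j + 2 mu)^2
    for i in range(len(factors)):
        q = Polynomial([beta[i] ** 2])
        for j, g in enumerate(factors):
            if j != i:
                q *= g ** 2
        p -= q                                         # minus beta_i^2 * prod_{j!=i} (lam_j + 2 mu)^2

    # Spurious roots at mu = -lam_i / 2 can appear when beta_i = 0; those belong
    # to the coincident-eigenvalue case treated separately below.
    return sorted(r.real for r in p.roots() if abs(r.imag) < imag_tol)
```

For anything but small $n$ this polynomial form is numerically delicate; in practice a bracketing scalar root finder applied directly to $\sum_i \beta_i^2/(\lambda_i + 2\mu)^2 - 1$ between its poles is more robust, but the polynomial version matches the discussion in the comments below.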

For each $\mu$ with $-2\mu$ an eigenvalue of $A$, obtain the general solution to

$$ (A + 2 \mu I) x = -b $$

and search the solution space for unit vectors. Typically the system won't be solvable at all; but when it is, finding the unit vectors in it amounts to solving a single polynomial (in fact quadratic) equation in $k$ free variables, where $k$ is the multiplicity of the eigenvalue $-2\mu$. This case is spelled out just below.
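To spell it out: write $E = \ker(A + 2\mu I)$ for the eigenspace in question. Since $A + 2\mu I$ is symmetric, the system is solvable precisely when $b \perp E$, and the general solution is then $x = x_p + v$ with $x_p \perp E$ fixed and $v \in E$ free, so the unit-norm condition becomes

$$ \|x_p\|_2^2 + \|v\|_2^2 = 1 \qquad \text{i.e.} \qquad \|v\|_2^2 = 1 - \|x_p\|_2^2, $$

which has solutions (a sphere of dimension $k - 1$ inside $E$) exactly when $\|x_p\|_2 \le 1$.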


I get essentially the same thing from the eigendecomposition. Write everything in terms of an orthonormal basis of eigenvectors, so that $x_i$ and $b_i$ below are the coordinates of $x$ and $b$ in that basis and $\lambda_i$ are the eigenvalues. Then you are trying to maximize

$$ \sum_i \left( \tfrac12 \lambda_i x_i^2 + b_i x_i \right) $$

subject to the constraint

$$ \sum_i x_i^2 = 1 $$

The critical points of the objective function are the solutions to

$$ x_i \lambda_i + b_i = 0$$

and the method of Lagrange multipliers gives the system of equations

$$ \sum_i x_i^2 = 1 \qquad \qquad x_i( \lambda_i + 2 \mu) + b_i = 0 $$

and solving this system is effectively the same as in the matrix version: away from the eigenvalue case, $x_i = -b_i/(\lambda_i + 2\mu)$, and $\mu$ is chosen so that $\sum_i x_i^2 = 1$.
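Putting the pieces together, here is a hedged end-to-end sketch; it reuses `interior_candidates` and `boundary_multipliers` from the snippets above, skips the coincident-eigenvalue case for brevity, and all names are illustrative:

```python
import numpy as np

def objective(A, b, x):
    return 0.5 * x @ A @ x + b @ x

def maximize_on_ball(A, b, tol=1e-8):
    """Approximate argmax of 0.5 x^T A x + b^T x over ||x||_2 <= 1.

    Candidates: interior critical points (A x = -b with ||x||_2 <= 1) and, for
    each real root mu of the secular equation, x = -(A + 2 mu I)^{-1} b on the
    boundary.  The coincident-eigenvalue ("hard") case is not handled here.
    """
    n = len(b)
    lam = np.linalg.eigvalsh(A)
    candidates = list(interior_candidates(A, b))
    for mu in boundary_multipliers(A, b):
        if np.min(np.abs(lam + 2.0 * mu)) < tol:      # A + 2 mu I singular: hard case, skip
            continue
        x = np.linalg.solve(A + 2.0 * mu * np.eye(n), -b)
        candidates.append(x / np.linalg.norm(x))      # renormalize against round-off
    return max(candidates, key=lambda x: objective(A, b, x))

# Example usage on a random indefinite instance:
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                   # symmetric, typically indefinite
b = rng.standard_normal(4)
print(maximize_on_ball(A, b))
```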

  • Solving $\lVert(A+2\mu I)^{-1}b\rVert_2^2=1$ isn't direct, though, is it? (2012-05-26)
  • It may be a pain, but I don't think you can avoid it. Still, it boils down to a polynomial equation in one real variable, so it's not *that* bad. (2012-05-26)
  • You mean it boils down to a polynomial equation in one real variable if we know the eigendecomposition of $A$? (2012-05-26)
  • @joriki: Or, you could just compute $(A + 2 \mu I)^{-1} b$ directly. But if you have the eigendecomposition of $A$, you could use it to find the inverse, of course. (2012-05-26)
  • I think I see what you mean. $(A+2\mu I)^{-1}$ is obviously not polynomial in $\mu$, but it is the ratio of the adjugate and the determinant of the matrix $A+2\mu I$, both of which are polynomial in $\mu$, so the equation $\lVert(A+2\mu I)^{-1}b\rVert_2^2=1$ can be manipulated into a polynomial form. Is that right? (2012-05-26)
  • Um, and why can't we have a solution with $\|x\|<1$? The OP did not say $\|x\|=1$. (2012-05-26)
  • @Thomas: You can: that was the first thing in my calculation. Any maximum with $\|x\| < 1$ will occur at a critical point of the objective function, and so must be a solution to $Ax = -b$. (2012-05-26)
  • Well, this solution is more complicated than I had hoped, but no more complicated than I had any right to expect. Accepted. (2012-05-30)