6

The problem is the following:

Given $v_1, \, v_2, \, \ldots, \, v_n \in \mathbb R^m$, find $\epsilon_1, \, \epsilon_2, \, \ldots, \, \epsilon_n \in \{0,1\}$ such that $\left\vert \sum_{i=1}^n (-1)^{\epsilon_i} v_i \right\vert$ is maximal, where $\vert \cdot \vert$ denotes the Euclidean norm.

I.e. we want to maximize the length of a sum of vectors, under the restriction that for every vector we must use either the vector itself or $(-1)$ times this vector.

This came up in an application some time ago, and I was asked whether I knew how to obtain a solution. I didn't see an efficient algorithm, and since the number of vectors was typically small, the problem could be solved by simply going through all $2^n$ sign patterns.
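For reference, a minimal sketch of that exhaustive search (the function name and the use of NumPy are my own choices):

```python
import itertools
import numpy as np

def best_signs_bruteforce(vs):
    """Exhaustive Theta(2^n) search over all sign patterns."""
    vs = np.asarray(vs, dtype=float)                    # shape (n, m)
    best_norm, best_signs = -1.0, None
    for signs in itertools.product((1.0, -1.0), repeat=len(vs)):
        norm = np.linalg.norm(np.asarray(signs) @ vs)   # |sum_i s_i v_i|
        if norm > best_norm:
            best_norm, best_signs = norm, signs
    return best_signs, best_norm

# e.g. best_signs_bruteforce([[3, 2], [-2, 3], [2, 3]])
# returns ((1.0, 1.0, 1.0), sqrt(73))
```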

I thought about it again today, and it really looks like there might be a nice efficient algorithm, or at least something better than $\Theta(2^n)$.

Still unable to find one, I wanted to ask whether someone here can figure out a neat solution!


Something that doesn't work:

  • Order the vectors by length: $|v_1|\ge |v_2|\ge \dots \ge |v_n|$
  • Having found a maximal vector-combination $v = \sum \pm v_i$ from $v_1, \, v_2, \, \ldots, \, v_k$, choose the sign of $v_{k+1}$ so that it maximizes the length of $v \pm v_{k+1}$.

A counterexample is given by $\begin{pmatrix} 3\\ 2\end{pmatrix},\, \begin{pmatrix} -2\\ 3\end{pmatrix},\, \begin{pmatrix} 2\\ 3\end{pmatrix}$. Combining the first two vectors we get $\begin{pmatrix} 1\\ 5\end{pmatrix} \quad \text{or}\quad \begin{pmatrix} 5\\ -1\end{pmatrix}$, both of which have the same length. However, if we choose the first combination we can get $\begin{pmatrix} 3\\ 8\end{pmatrix}$, whereas for the second combination $\begin{pmatrix} 7\\ 2\end{pmatrix}$ is the best we can do.
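A quick numerical check of this counterexample (a standalone sketch; the variable names are mine):

```python
import numpy as np

v1, v2, v3 = np.array([3., 2.]), np.array([-2., 3.]), np.array([2., 3.])

# Both greedy choices for the first two vectors have norm sqrt(26) ...
for base in (v1 + v2, v1 - v2):               # (1, 5) and (5, -1)
    best = max(base + v3, base - v3, key=np.linalg.norm)
    print(base, '->', best, np.linalg.norm(best))
# ... but (1, 5) extends to (3, 8) with norm sqrt(73) ~ 8.54,
# while (5, -1) only reaches (7, 2) with norm sqrt(53) ~ 7.28.
```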

Feel free to share your thoughts! Thanks. =)

  • @joriki: In order to enumerate the relevant half-spaces in $\mathbb{R}^m$, you can look at all choices of $m-1$ vectors from the set of $n$. This gives an algorithm that is $O(n^{f(m)})$, i.e., polynomial-time for fixed $m$, but with an exponent that grows with $m$. I don't think you can do better... (2012-01-13)
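To illustrate the comment's idea in the simplest case $m=2$, here is a hedged sketch (the optimality argument in the docstring and all names are my own): at an optimum $S = \sum \pm v_i$, flipping any single vector cannot increase the length, which forces each sign to agree with $\operatorname{sign}(v_i \cdot S)$; so it suffices to try the sign patterns induced by $O(n)$ candidate directions.

```python
import numpy as np

def best_signs_planar(vs):
    """Sketch for m = 2: the optimal sign pattern is induced by some
    direction w (take w = S itself), and the pattern only changes when w
    crosses one of the n lines w . v_i = 0.  Trying a direction just to
    each side of each such line covers all O(n) planar cells.
    Not robust to extreme scaling or degeneracies; illustration only."""
    vs = np.asarray(vs, dtype=float)          # shape (n, 2)
    best_norm, best_signs = -1.0, None
    for v in vs:
        perp = np.array([-v[1], v[0]])        # spans the line w . v = 0
        for w0 in (perp, -perp):
            for side in (1.0, -1.0):          # perturb across the boundary
                w = w0 + 1e-9 * side * v
                signs = np.where(vs @ w >= 0.0, 1.0, -1.0)
                norm = np.linalg.norm(signs @ vs)
                if norm > best_norm:
                    best_norm, best_signs = norm, signs
    return best_signs, best_norm
```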

3 Answers

2

A nice piece of terminology (not directly related to the answer): letting $A$ be the $m\times n$ matrix whose columns are the vectors $v_1, \ldots, v_n$, your problem is concisely stated as computing the subordinate $(\infty,2)$-norm of the matrix $A$.
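In symbols (spelling out the equivalence is my addition):
$$\|A\|_{\infty\to 2} = \max_{\|x\|_\infty \le 1} \|Ax\|_2 = \max_{x\in\{-1,1\}^n} \|Ax\|_2,$$
where the second equality holds because $x \mapsto \|Ax\|_2$ is convex, and a convex function attains its maximum over the cube $[-1,1]^n$ at a vertex; writing $x_i = (-1)^{\epsilon_i}$ recovers the original problem.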

This problem is NP-hard, which implies that there is no polynomial-time algorithm unless P=NP. I do not know what the best reference for it is, but one place where it is proved to be NP-hard is [GK89]. See the problem called ZonMax in [GK89], a slight variation of your problem that is easily seen to be equivalent to it. I think that part of the reduction in [GK89] can be simplified by using the NP-completeness of Subset Sum, as is done in [MK87].

[GK89] Peter Gritzmann and Victor Klee. On the 0-1 maximization of positive definite quadratic forms. IMA Preprint Series #522, April 1989. http://www.ima.umn.edu/preprints/Jan89Dec89/522.pdf

[MK87] Katta G. Murty and Santosh N. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Mathematical Programming, 39(2):117–129, June 1987. DOI: 10.1007/BF02592948. (I checked only the technical report version.)

  • Thank you for your answer! (2012-01-20)
1

I don't think there is a deterministic polynomial-time algorithm for this. I remember seeing problems like this in my randomized algorithms course, and I would recommend trying the conditional-expectation derandomization technique described here (and in other places).
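For what it's worth, here is a minimal sketch of that technique applied to this problem (my own reading of it; all names are mine). With independent uniform random signs $s_i \in \{-1,+1\}$, the cross terms cancel in expectation, so $E\left[\left\vert \sum_i s_i v_i \right\vert^2\right] = \sum_i \vert v_i \vert^2$; fixing the signs one at a time so that the conditional expectation never decreases then reaches a pattern with $\left\vert \sum_i s_i v_i \right\vert^2 \ge \sum_i \vert v_i \vert^2$. Note that this reproduces exactly the greedy heuristic from the question, so it certifies the lower bound but, as the counterexample above shows, not optimality.

```python
import numpy as np

def derandomized_signs(vs):
    """Method of conditional expectations for uniform random signs.
    After fixing s_1, ..., s_k with partial sum P, the conditional
    expectation of |sum_i s_i v_i|^2 is |P|^2 + sum_{i>k} |v_i|^2
    (the remaining cross terms vanish in expectation), so it suffices
    to pick each sign maximizing |P + s v|^2, i.e. s = sign(P . v)."""
    partial = np.zeros(len(vs[0]))
    signs = []
    for v in (np.asarray(v, dtype=float) for v in vs):
        s = 1.0 if partial @ v >= 0.0 else -1.0
        signs.append(s)
        partial = partial + s * v
    return signs, partial   # guarantees |partial|^2 >= sum_i |v_i|^2
```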

  • Thank you for your answer! I will make sure to have a look at it. (2012-01-12)
1

This can be formulated as a quadratically constrained quadratic program.

Maximising $\left\vert \sum_{i=1}^n (-1)^{\epsilon_i} v_i \right\vert$ is equivalent to maximising
$$\left\vert \sum_{i=1}^n (-1)^{\epsilon_i} v_i \right\vert^2 = \left( \sum_{i=1}^n (-1)^{\epsilon_i} v_i \right) \cdot \left( \sum_{j=1}^n (-1)^{\epsilon_j} v_j \right) = \sum_{i=1}^n \sum_{j=1}^n (1 - 2\epsilon_i)(1 - 2\epsilon_j) \, v_i \cdot v_j$$
(where the $(-1)^{\epsilon}$ to $(1-2\epsilon)$ transformation is valid because of the constraint $\epsilon \in \{0,1\}$), and since the constant term is irrelevant, this is in turn equivalent to maximising
$$\sum_{i=1}^n \sum_{j=1}^n (4 \epsilon_i \epsilon_j - 2 \epsilon_i - 2 \epsilon_j) \, v_i \cdot v_j.$$
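For concreteness, a small sketch collecting this objective into matrix form (the helper names and the NumPy usage are my own): with Gram matrix $G_{ij} = v_i \cdot v_j$, the objective equals $\epsilon^T (4G)\, \epsilon - 4\, \mathbf 1^T G \epsilon$, and the squared length itself is $\mathbf 1^T G \mathbf 1$ plus this quantity.

```python
import numpy as np

def binary_quadratic_form(vs):
    """Collect sum_ij (4 e_i e_j - 2 e_i - 2 e_j) v_i . v_j into
    e^T Q e + c^T e with Q = 4 G and c = -4 G 1, where G is the
    Gram matrix G_ij = v_i . v_j."""
    vs = np.asarray(vs, dtype=float)
    G = vs @ vs.T
    Q = 4.0 * G
    c = -4.0 * G.sum(axis=1)    # the -2 e_i and -2 e_j terms combined
    return Q, c

def squared_length(vs, eps):
    """Sanity check: |sum (-1)^eps_i v_i|^2, which should equal
    1^T G 1 + eps^T Q eps + c^T eps."""
    vs = np.asarray(vs, dtype=float)
    eps = np.asarray(eps, dtype=float)
    return np.linalg.norm((1.0 - 2.0 * eps) @ vs) ** 2
```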

This objective function is convex, as Sam points out in a comment. As mentioned in the Wikipedia article on quadratically constrained quadratic programming, $\epsilon\in\{0,1\}$ can be converted to the quadratic constraints $\epsilon(\epsilon - 1) \le 0$ and $\epsilon(\epsilon - 1) \ge 0$; the constraint function $\epsilon(\epsilon - 1)$ is a convex quadratic, although the $\ge 0$ side is not a convex constraint.

I'm not sure which algorithms for QCQP are applicable and efficient in this convex-objective, non-convex-constraint case, but it does look hopeful that this might not be NP-hard, and the reduction to a known problem gives a line of investigation.

  • @PeterTaylor: Minimization of a convex quadratic function over a polytope is solvable in polynomial time by using the ellipsoid method or an interior-point method. Therefore, probably the reduction you are thinking of does not work. (2012-01-17)