
Can we write

$$\min_x \; \sum_i \max(a_i^T x - b_i,\, 0)$$

as a constrained optimization problem with a differentiable objective function, assuming everything is real-valued? My understanding is that this is a well-known problem in optimization, but I haven't found a good description of how the reformulation works.

1 Answer


Here is an equivalent linear program: $P_{LP}: \ \min \{ \sum_k m_k | m_k \ge a_k^T x -b_k, m_k \ge 0 \}$.

Let $P_O$ denote the original problem.

Suppose $(x,m)$ solves $P_{LP}$. Then $m_k \ge 0$ and $m_k \ge a_k^T x - b_k$, and hence $m_k \ge \max(a_k^T x - b_k,\, 0)$. In fact, we must have $m_k = \max(a_k^T x - b_k,\, 0)$: otherwise some $m_k$ could be decreased without violating the constraints, which would reduce the objective. The two problems therefore have the same optimal value, and so $x$ solves $P_O$.

If $x$ solves $P_{O}$, let $m_k = \max(a_k^T x -b_k,0)$, then $(x,m)$ solves $P_{LP}$.
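As a sanity check on the equivalence above, here is a short sketch that solves the LP with `scipy.optimize.linprog` (the function name `hinge_sum_lp` and the test instance are my own, not from the answer):

```python
import numpy as np
from scipy.optimize import linprog

def hinge_sum_lp(A, b):
    """Solve min_x sum_k max(a_k^T x - b_k, 0) via the equivalent LP.

    Variables are z = (x, m) with m_k >= 0 and m_k >= a_k^T x - b_k,
    and the objective is sum_k m_k.
    """
    K, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(K)])   # minimize sum_k m_k
    # a_k^T x - m_k <= b_k  (rewrites m_k >= a_k^T x - b_k)
    A_ub = np.hstack([A, -np.eye(K)])
    bounds = [(None, None)] * n + [(0, None)] * K   # x free, m >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds, method="highs")
    return res.x[:n], res.fun

# Example: the inconsistent system x <= 0 and x >= 1 gives the
# hinge sum max(x, 0) + max(1 - x, 0), whose minimum value is 1.
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])
x_opt, val = hinge_sum_lp(A, b)
```

The LP optimum matches the value of the original nonsmooth objective at the returned $x$, which is exactly the equivalence the answer establishes.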

  • What does the vertical bar mean in this context? (2017-02-13)
  • It separates the objective from the constraints. Or you could view it as a set $\{ \sum_k m_k \mid m_k \ge a_k^T x - b_k,\ m_k \ge 0 \}$ that you are taking the minimum of. (2017-02-13)
  • Would you mind showing me, or pointing me to, a fairly easy-to-follow proof of this result? I am having trouble getting my mind around it. (2017-02-14)
  • @Cogitator: I added some hints above. (2017-02-14)
  • Why the downvote? (2017-02-18)