I am finding it difficult to solve the constrained problem below using the Lagrangian. It would be great if someone could help me with the steps.

$\min_C \sum_i \Psi(c_i)$
subject to $\sum_i c_i = 1$ and $c_i \geq 0$ for $i = 1, \dots, k$, where $C = [c_i]$.

Here $\Psi(x)$ is a concave function, e.g. $\Psi(x) = 2x - x^2$.

I tried the following steps:
1. Writing the Lagrangian $L = \sum_i \Psi(c_i) + \gamma (\sum_i c_i - 1) - \sum_i \alpha_i c_i$.
2. Differentiating the Lagrangian w.r.t. $c_i$ and equating to zero, i.e. $\Psi'(c_i) + \gamma - \alpha_i = 0$.
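For the concrete example $\Psi(x) = 2x - x^2$, the derivative in step 2 can be sanity-checked numerically (a minimal sketch; the multiplier values below are arbitrary illustrations, not solutions):

```python
# Finite-difference check of step 2: dL/dc_i = Psi'(c_i) + gamma - alpha_i,
# using the example Psi(x) = 2x - x**2 and arbitrary multiplier values.
def psi(x):
    return 2 * x - x ** 2

def lagrangian(c, gamma, alpha):
    return (sum(psi(ci) for ci in c)
            + gamma * (sum(c) - 1)
            - sum(a * ci for a, ci in zip(alpha, c)))

c, gamma, alpha = [0.2, 0.3, 0.5], 0.7, [0.1, 0.0, 0.4]
h = 1e-6
for i in range(len(c)):
    cp = c[:]; cp[i] += h
    cm = c[:]; cm[i] -= h
    num = (lagrangian(cp, gamma, alpha) - lagrangian(cm, gamma, alpha)) / (2 * h)
    analytic = (2 - 2 * c[i]) + gamma - alpha[i]   # Psi'(c_i) + gamma - alpha_i
    assert abs(num - analytic) < 1e-6
```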

I'm not sure how to proceed after this (what values to solve for, what conditions need to be satisfied, etc.).


1 Answer


This problem can be solved by using the following theorem (one of my favorites):

A convex function on a compact convex set attains its maximum at an extreme point of the set.

Note that $\min_C \sum_i \Psi(c_i)$ is equivalent to $\max_C -\sum_i \Psi(c_i)$ and that $-\sum_i \Psi(c_i)$ is a convex function (because the sum of convex/concave functions is convex/concave).

The extreme points of the compact convex set defined by $\sum_i c_i = 1$ and $c_i \geq 0$ are $C_j=[\delta_{ij}]$ with $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise. It's easy to see that each $C_j$ is a solution, and that the sought minimum is $\Psi(1)+(k-1)\Psi(0)$. (There might exist more solutions $C$ than these in case $\Psi$ is not strictly concave.)
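This is easy to spot-check numerically for the example $\Psi(x) = 2x - x^2$ (a sketch; $k = 3$ and the random sampling are illustrative choices):

```python
# Spot-check: for Psi(x) = 2x - x**2, no feasible point beats the vertices C_j.
import random

def psi(x):
    return 2 * x - x ** 2

k = 3
vertex_value = psi(1.0) + (k - 1) * psi(0.0)   # objective at any C_j; here 1.0

random.seed(0)
for _ in range(1000):
    # random feasible point: c_i >= 0, sum(c) = 1
    raw = [random.random() for _ in range(k)]
    c = [r / sum(raw) for r in raw]
    assert sum(psi(ci) for ci in c) >= vertex_value - 1e-12
```

For this $\Psi$ the objective is $\sum_i (2c_i - c_i^2) = 2 - \sum_i c_i^2$, and $\sum_i c_i^2 \leq 1$ on the simplex with equality exactly at the vertices, so the minimum $\Psi(1)+(k-1)\Psi(0) = 1$ is attained at each $C_j$.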


So let's see how we can find these solutions using the Lagrangian. The above solutions $C_j$ correspond to $\gamma = -\Psi'(c_j)$ and $\alpha_i = \Psi'(c_i) + \gamma$.
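For the example $\Psi(x) = 2x - x^2$ these multiplier formulas can be evaluated directly (a sketch; $k$ and the index $j$ are arbitrary illustrative choices):

```python
# Evaluate gamma = -Psi'(c_j) and alpha_i = Psi'(c_i) + gamma at the
# extreme point C_j, for the example Psi(x) = 2x - x**2.
def psi_prime(x):
    return 2.0 - 2.0 * x

k = 4
j = 0
c = [1.0 if i == j else 0.0 for i in range(k)]    # the extreme point C_j

gamma = -psi_prime(c[j])                          # here: -Psi'(1) = 0
alpha = [psi_prime(c[i]) + gamma for i in range(k)]

# Stationarity: Psi'(c_i) + gamma - alpha_i = 0 for every i.
assert all(abs(psi_prime(c[i]) + gamma - alpha[i]) < 1e-12 for i in range(k))
# Complementary slackness and dual feasibility: alpha_i * c_i = 0, alpha_i >= 0.
assert all(alpha[i] * c[i] == 0 and alpha[i] >= 0 for i in range(k))
```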

You might ask: "But what is so special about this solution, and why does it help me to solve the minimization problem?" The important point here is the "complementary slackness" condition (see the KKT conditions), which comes into play because we are dealing with inequalities. (The Lagrange multiplier technique is often introduced only for equality constraints, but generalizations like the KKT conditions allow it to be used for inequality constraints as well.)

Because of $\sum_i c_i=1$, there must be at least one $j$ with $c_j>0$. By the "complementary slackness" conditions, $c_j>0$ forces $\alpha_j=0$, which implies $\gamma = -\Psi'(c_j)$. For the other $c_i$, we can now distinguish the cases $\alpha_i=0$ and $\alpha_i\neq0$. By complementary slackness, $\alpha_i\neq0$ implies $c_i=0$. The case $\alpha_i=0$ leads to $c_i = \Psi'^{-1}(-\gamma)$. With this approach you would have to check $2^{k-1}$ candidate solutions, which is theoretically feasible, but not very practical. So you'd better make use of the special knowledge provided by "convexity"/"concavity" instead of relying only on the general Lagrangian technique. (Note, however, that minimizing a concave function over a convex domain is in general NP-hard...)
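The brute-force enumeration described above can be made concrete for the example $\Psi(x) = 2x - x^2$, where $\Psi'$ is strictly decreasing, so $\Psi'^{-1}$ is well defined and all entries on the support must coincide (an illustrative sketch for this particular $\Psi$ only):

```python
# Enumerate candidate supports for Psi(x) = 2x - x**2. Iterating over all
# nonempty supports gives 2**k - 1 cases (the 2**(k-1) count in the text
# fixes one index j with c_j > 0 in advance).
from itertools import combinations

def psi(x):
    return 2 * x - x ** 2

k = 4
candidates = []
for m in range(1, k + 1):
    for support in combinations(range(k), m):
        # alpha_i = 0 on the support forces Psi'(c_i) = -gamma there, and
        # since Psi' is injective all those c_i are equal, hence 1/|S|.
        c = [1.0 / m if i in support else 0.0 for i in range(k)]
        candidates.append(c)

best = min(candidates, key=lambda c: sum(psi(ci) for ci in c))
# The minimizer is a vertex C_j with objective value Psi(1) + (k-1)*Psi(0) = 1.
```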