
I am looking for a good strategy to tackle this non-linear, non-convex optimization problem:

Minimize $$\frac{c_1x + c_2 y}{x+y}$$

such that:

$x, y > 0$

$x + y \leq c_3$

$c_1, c_2, c_3$ are given

$c_3 > 0$

Does anyone have any suggestions?

I know this objective function is non-convex, but I was wondering if there were any smart ways to find (or at least approximate) the global optimum. If you suggest an approximation algorithm, please also share its approximation factor. Computational efficiency is not a major concern for me, but accuracy is.

Currently, my leading (though inelegant) idea is to fix the value of $x + y$ and iterate through possible combinations. In the context of my problem, it is a reasonable assumption that there are only finitely many meaningful values of $x + y$, so I could conceivably iterate through them, but this would be very inefficient.
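For concreteness, the fix-the-sum idea above can be sketched as a two-level grid search. The constants $c_1, c_2, c_3$ and the grid sizes below are assumed example values, not part of the original problem; a small `eps`-style grid offset keeps $x, y$ strictly positive.

```python
# Sketch of the "fix x + y, then search" idea with assumed constants.
# For a fixed sum s = x + y, the objective (c1*x + c2*y)/s is linear
# in x, so a one-dimensional scan over x suffices for each s.
c1, c2, c3 = 3.0, 5.0, 10.0   # hypothetical example constants

best = float("inf")
best_xy = None
for i in range(1, 100):            # candidate sums s in (0, c3]
    s = c3 * i / 99
    for j in range(1, 200):        # x strictly inside (0, s)
        x = s * j / 200
        y = s - x
        val = (c1 * x + c2 * y) / s
        if val < best:
            best, best_xy = val, (x, y)

print(best)   # close to min(c1, c2) = 3.0, but strictly above it
```

As the answers below make precise, the grid minimum only approaches $\min(c_1, c_2)$ from above, since the strict constraints $x, y > 0$ keep the bound unattained.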

In case it helps, there is the option of adding the constraint $c_1, c_2 > 0$, but I would prefer to retain the freedom for them to be either positive or negative.

Thanks in advance for your help.

  • I edited your post. Is that what you mean? (2017-01-28)
  • Yes, thank you for improving its clarity! :) (2017-01-28)

3 Answers

1

Hint: for all $x, y \gt 0$: $$\min(c_1,c_2) \le \frac{c_1x + c_2 y}{x+y} \le \max(c_1,c_2)$$ The bounds are asymptotically approached as $y \to 0$ and $x \to 0$ respectively, but never attained unless $c_1=c_2\,$.
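The bound can be sanity-checked numerically. The constants and sample ranges below are assumed for illustration; note that $c_1, c_2$ may be negative, in line with the question.

```python
# Numerical check that the objective (a weighted average of c1 and c2)
# stays within [min(c1, c2), max(c1, c2)] for strictly positive x, y.
import random

c1, c2 = -2.0, 7.0            # assumed constants; signs may differ
lo, hi = min(c1, c2), max(c1, c2)

random.seed(0)
for _ in range(10_000):
    x = random.uniform(1e-9, 100.0)
    y = random.uniform(1e-9, 100.0)
    val = (c1 * x + c2 * y) / (x + y)
    assert lo <= val <= hi    # the weighted average never escapes the bounds
print("all samples inside [min, max]")
```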

  • Thanks for this hint. This is helpful for me to understand that approximating the objective function as either $c_1$ or $c_2$ will give me a solution bounded by this error. However, if there is more to your hint, I'm afraid I've missed it. Is there a way to proceed with solving it? (2017-01-28)
  • @user89413 Assuming WLOG $c_1 \le c_2$, the global minimum of the objective function is $c_1$, attained at $y=0$, while the global maximum is $c_2$, attained at $x=0$. Since you require $x, y$ to be *strictly* positive, those extrema cannot actually be attained, yet you can get arbitrarily close to either bound by choosing small enough values of $y$ or $x$ respectively. (2017-01-28)
  • OK, thanks for this clarification. I think I understand now. Does this property also hold for an arbitrary number of variables? I.e., does the following hold: $$\min(c_1,c_2,\dots,c_n) \le \frac{\sum_{i=1}^n c_i x_i}{\sum_{i=1}^n x_i} \le \max(c_1,c_2,\dots,c_n)$$ (2017-02-05)
  • @user89413 Yes, it does, as long as $x_i \gt 0$. That follows because $m \le c_i \le M \implies m x_i \le c_i x_i \le M x_i \implies m \sum x_i \le \sum c_i x_i \le M \sum x_i\,$. (2017-02-05)
  • Ah, yes, that makes sense. Thank you! (2017-02-05)
1

If $x=0$ or $y=0$ you know the value. Otherwise, divide through by $y,$ define $$ r = x/y, $$ and optimize $$ \frac{c_1 r + c_2}{r+1} = \frac{c_1 r + c_1}{r+1} + \frac{c_2 - c_1}{r+1} = c_1 + \frac{c_2 - c_1}{r+1} $$ The other order has $s = y/x $ and $$ \frac{c_1 + c_2 s}{1+s} = \frac{c_2 + c_2 s}{1+s} + \frac{c_1 - c_2}{1+s} = c_2 + \frac{c_1 - c_2}{1+s} $$ Either $r$ or $s$ is allowed to get arbitrarily large without violating the rules.
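The reduction above turns the problem into a one-variable calculation. A minimal sketch, with assumed constants satisfying $c_1 < c_2$, showing that the rewritten objective $c_1 + (c_2 - c_1)/(r+1)$ matches the original form and decreases toward $c_1$ as $r = x/y$ grows:

```python
# One-variable form of the objective after substituting r = x/y.
# c1, c2 are assumed example constants with c1 < c2.
c1, c2 = 3.0, 5.0

def objective(r):
    # rewritten form: c1 + (c2 - c1)/(r + 1)
    return c1 + (c2 - c1) / (r + 1.0)

for r in [1.0, 10.0, 1e3, 1e6]:
    direct = (c1 * r + c2) / (r + 1.0)      # original form in r
    assert abs(direct - objective(r)) < 1e-12
    print(r, objective(r))                   # decreases toward c1
```

Since the correction term $(c_2 - c_1)/(r+1)$ is positive and shrinks monotonically, letting $r$ grow without bound (i.e. $y \to 0$) drives the value arbitrarily close to $c_1$, as the answer states.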

  • Thanks for this suggestion. I am confused because the constraints on $r$ and $s$ are non-convex and I don't know how to accommodate the non-convexity when solving. Can you please explain further? (2017-01-28)
  • @user89413 I ignored issues of convexity by making the whole thing a calculation in one variable. If $c_1 < c_2,$ take the first version and let $r$ be as large as possible. In effect, take $x = c_3, y=0.$ As pointed out in the other answer, the optimum cannot actually be achieved unless the rules are modified to $x,y \geq 0.$ (2017-01-28)
  • OK, thanks for the clarification. That is a clever idea. (2017-02-05)
-1

\begin{eqnarray} \inf_{x>0,y>0, x+y \le c_3} { c_1 x + c_2 y \over x+y } &=& \inf_{r \in (0,c_3]} \inf_{x>0,y>0, x+y = r} { c_1 x + c_2 y \over x+y } \\ &=& \inf_{r \in (0,c_3]} \inf_{x>0,y>0, x+y = r} { c_1 x + c_2 y \over r } \\ &=& \inf_{r \in (0,c_3]} \min_{x\ge 0,y \ge 0, x+y = r} { c_1 x + c_2 y \over r } \\ &=& \inf_{r \in (0,c_3]} \min(c_1,c_2) \\ &=& \min(c_1,c_2) \\ \end{eqnarray} The fourth line follows because the minimisation of a linear functional over the convex hull of a finite number of points (in this case, $(r,0), (0,r)$) can be replaced by the minimisation over those points.
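The step from the inner infimum to $\min(c_1, c_2)$ can be checked numerically: for each fixed sum $r$, the linear function attains its segment minimum at an endpoint, independently of $r$. Constants below are assumed example values.

```python
# Check: for fixed r, the minimum of (c1*x + c2*y)/r over the segment
# x + y = r, x,y >= 0 equals min(c1, c2), independently of r.
c1, c2 = -1.5, 4.0   # assumed example constants (signs may differ)

for r in [0.5, 1.0, 7.0]:
    # grid minimum of the linear functional over the segment x + y = r
    grid_min = min((c1 * x + c2 * (r - x)) / r
                   for x in [r * k / 1000 for k in range(1001)])
    assert abs(grid_min - min(c1, c2)) < 1e-12
print("segment minimum equals min(c1, c2) for every r")
```

This mirrors the argument in the answer: a linear functional over the convex hull of $(r, 0)$ and $(0, r)$ is minimised at one of those two vertices.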

  • Why the downvote? (2017-01-30)
  • Thanks for your suggestion. I don't know who downvoted or why. I think I follow your logic. (2017-02-05)
  • @user89413: The problem is 'almost' convex. (As an aside, I believe I know who the downvoter was, someone who was annoyed by something I wrote.) (2017-02-05)
  • Could you please explain further what you mean by 'almost' convex? I haven't heard of this concept before. In the abstract, I can understand why something being close to convex would be helpful, because you could use the convex hull without introducing much error. But I haven't heard the term 'almost' convex before, so I don't know if it refers to specific criteria. (2017-02-05)
  • Sorry, I was speaking very loosely. In the above, once I add in the $r$ variable, the resulting inner problem becomes a convex problem which is easy to solve. As it turns out, the inner problem is independent of $r$, which makes it even easier. However, the objective is pseudoconvex, which is weaker than convexity but shares many of its nice properties. Look at linear fractional programming, for example. (2017-02-05)
  • Thanks for the clarification and suggestions. I will look further into those subjects. (2017-02-05)
  • I am trying to learn more about pseudoconvex optimization. Do you have any favorite reference you'd recommend? In particular, I'm interested in general optimization methods. E.g., when can a gradient-based method like interior-point find the global solution for a nonlinear fractional program? Thanks to your suggestion to look at linear fractional programming, I understand pretty well how to approach those problems, but I don't quite understand nonlinear fractional programming. (2017-02-05)
  • Sorry, I don't have an adequate reference for pseudoconvex optimization; it's been a few decades :-(. (2017-02-05)