
Motivation: Example. To find the maximum of a product of $n$ positive real variables subject to an equality constraint on their sum $S=100$, I used the Lagrange multipliers method (which can be strengthened by checking the appropriate second-order conditions). Doing so I got a "solution" $x^*=(x_1,x_2,\dots ,x_n)\in \mathbb{R}^{n}$ with $x_1=x_2=\dots=x_n=100/n$. I proceeded by maximizing the single-variable function $u(t)=(100/t)^t$: its maximum occurs at some $t^*$ with $36\lt t^*\lt 37$. Finally I computed $u(36)\lt u(37)$; hence the optimum occurs at $n=37$.
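The computation above can be checked numerically; this is a quick sketch (the helper name `log_u` and the use of logarithms to avoid overflow are mine):

```python
import math

# Split S = 100 into t equal parts, so the product is u(t) = (100/t)^t.
# Work with log u(t) = t * log(100/t) to avoid overflow for large t.
S = 100.0

def log_u(t: float) -> float:
    """Logarithm of (S/t)^t."""
    return t * math.log(S / t)

# Setting d/dt [t log(S/t)] = log(S/t) - 1 = 0 gives the continuous
# maximizer t* = S/e, which indeed lies between 36 and 37.
t_star = S / math.e
print(t_star)  # ~36.79

# Comparing the two neighboring integers shows n = 37 wins.
print(log_u(36) < log_u(37))  # True
```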

Despite arriving at the correct answer to the problem, I was told that such an approach does not guarantee the correct solution in the general case.

Question: To see the limitations of treating optimization problems as in the example above, but in greater generality, I would appreciate a few examples in which this method fails.

Remark: I hope this edited text improves the question.


EDIT: The case I have in mind is that of finding the maximum/minimum of an objective function $f(x_1,x_2,\dots,x_n)$ subject to at least one constraint $g(x_1,x_2,\dots,x_n)=0$.

This is a kind of generalization of the linear programming simplex problem, two differences being that the optimization is nonlinear and that one variable is restricted to the integers.

  • 0
    @Qiaochu Yuan: I edited the text extensively, trying to make the question clear. (2010-08-29)

2 Answers


The problem, if I understand it correctly, is best appreciated geometrically by drawing a picture of the level sets of the objective function and the lattice points satisfying the constraints: the optimum (over the reals) can occur at one location, but the nearby lattice points will necessarily lie on inferior contours. Extending those contours identifies a border of almost-optimal locations where the objective is greater than at the lattice point you chose.

All you need for a spectacular failure to occur is for there to exist a distant lattice point somewhere within this marginal area: this would correspond to a dramatically different solution satisfying all the constraints and having a better objective value than yours. (Your solution can have arbitrarily bad objective values, too, depending on the gradient of the objective function between the Lagrange multiplier solution and your rounded approximation to it.)

I have encountered exactly this difficulty with certain problems in statistics where the variables count things (and therefore must be integral) and the objective measures the precision of an estimate (and is to be minimized, but the change from maximization to minimization changes nothing essential).
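To make the geometric picture concrete, here is a small integer program of my own construction (the coefficients are chosen only to exhibit the failure mode described above): the optimum of the continuous relaxation sits near $(1.95,\,4.92)$, every nearby feasible lattice point is markedly inferior, and the true integer optimum lies far away at $(5,\,0)$.

```python
# Maximize x + 0.64*y subject to
#   50x + 31y <= 250,   3x - 2y >= -4,   x, y >= 0 integers.
# The relaxed (real-valued) optimum is approximately (1.95, 4.92).

def feasible(x, y):
    return 50 * x + 31 * y <= 250 and 3 * x - 2 * y >= -4 and x >= 0 and y >= 0

def objective(x, y):
    return x + 0.64 * y

# Brute-force the integer optimum over the (small) feasible box.
best = max(
    ((x, y) for x in range(6) for y in range(9) if feasible(x, y)),
    key=lambda p: objective(*p),
)
print(best)            # (5, 0) -- objective value 5.0

# Rounding the relaxed optimum to (2, 5) is infeasible, and the nearest
# feasible rounding (2, 4) scores only 4.56, well below the true optimum.
print(feasible(2, 5))  # False
print(objective(2, 4)) # 4.56
```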

  • 1
    @J. Mangaldan, the objective function can even be linear (and very few things are nicer than that!) and the approach can still fail. (2010-08-29)

Here is a contrived example of the failure of the method.

Say you are given the function $f$ defined for $0 \leq x \leq 50$ as follows:

$\displaystyle f(x) = \begin{cases} 100 \sin \pi x, & 0 \le x \le 1 \\ x, & 1 \lt x \le 50. \end{cases}$

Your task is to find the integer $n$ for which $f(n)$ is maximized. Your method gives $n=0$ or $n=1$ (the maximum of the function occurs at $x = 1/2$), while the correct value is $n=50$.
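A brute-force check of this counterexample (a quick sketch in Python):

```python
import math

# The contrived objective from the answer above.
def f(x: float) -> float:
    if 0 <= x <= 1:
        return 100 * math.sin(math.pi * x)
    return x  # for 1 < x <= 50

# The continuous maximum is f(1/2) = 100, which would suggest rounding
# to n = 0 or n = 1 -- but both integers give f(n) = 0 (up to rounding).
print(f(0.5))  # 100.0

# Brute force over the integers shows the true integer maximizer is n = 50.
n_best = max(range(51), key=f)
print(n_best, f(n_best))  # 50 50
```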

The reason your method works for the example you have is that the function $\displaystyle S(t) = (100/t)^t$ is strictly increasing for $0 < t < 100/e$ and strictly decreasing for $t > 100/e$ (with $100/e \approx 36.79$), so the integer optimum is guaranteed to be one of the two integers adjacent to $100/e$.

  • 0
    @Mariano: I see. Thanks. (2010-08-30)