0

I want to find the maximum of $$f(x_1, x_2)=-(x_1+3)^2-(x_2-2)^2$$ under the constraints $x_1, x_2\geq 0$.

I thought to calculate the extrema as if there were no constraints and then pick only the nonnegative $x_i$'s.

Is this correct?

  • 1
    You maximize by using the derivatives to find critical points but you also have to maximize the function along the boundaries, for example along the $x_1$ axis the function looks like $f(x_1, x_2)\mid_{x_2=0} = -(x_1+3)^2 - 4$2017-01-18
  • 0
    From the derivatives we have that $$-2(x_1+3)=0 \Rightarrow x_1+3=0 \Rightarrow x_1 =-3$$ This doesn't satisfy the constraint. $$-2(x_2-2)=0 \Rightarrow x_2-2=0 \Rightarrow x_2=2\geq 0$$ This satisfies the constraint, but it is not a root of the other partial derivative. Is this a problem? Then do we also find the maximum points on the boundary and take the largest of all? @Triatticus2017-01-18
  • 1
    You should compute the partial derivatives of the Lagrangian function (see below in the comment of my answer). Not only of the objective.2017-01-18
  • 0
    Essentially it depends on what you were taught and when. If you did Lagrange's method then the answers make sense; if not, it will be new information for you. When I learned multivariable calculus I was taught optimization as I mentioned first, and Lagrange's method later.2017-01-18
  • 0
    So, without Lagrange what exactly do we have to do? @Triatticus2017-01-18
  • 1
    If you didn't resort to those methods then follow the reasoning of the comment you asked about; I wanted to mention that your intuition is correct, and as you said, just find the maximum value of the function on the boundaries, which leads to the same derived answer as the given ones.2017-01-18
  • 0
    We have that $$f_{x_1}=-2(x_1+3), \quad f_{x_2}=-2(x_2-2)$$ Since they don't have common roots we look at the boundaries, i.e. $x_1=0$ and $x_2=0$. For $x_1=0$, we have $g(x_2)=f(0, x_2)=-3^2-(x_2-2)^2=-9-(x_2-2)^2$. The derivative is $g'(x_2)=-2(x_2-2)$ and its root is $x_2=2$. Since $g''(x_2)=-2<0$, $g$ has a maximum at $x_2=2$, so $f$ has a maximum at $(0,2)$. For $x_2=0$, we have that $g(x_1)=f(x_1, 0)=-(x_1+3)^2-(-2)^2=-(x_1+3)^2-4$. Then $g'(x_1)=-2(x_1+3)$ and its root is $x_1=-3$, which does not satisfy the constraint. Is this correct? @Triatticus2017-01-18
  • 1
    The last part means that only $x_1=0$ works for that one (this is what the other comments meant by projection onto zero for $x_1$), and upon substitution into the equation yields a smaller value than when you substitute $(0,2)$, hence $(0,2)$ is the maximum.2017-01-18
  • 0
    Ah ok. If instead of the constraints $x_1, x_2\geq 0$ we have the constraint $2=\min \{x_1, x_2\}$, what would we do? Do we have to check the boundaries again? Are they now $x_1=2$ and $x_2=2$? @Triatticus2017-01-18
  • 1
    A little harder of course, and likely a subject for additional questions. Read over the given answers, decide which makes more sense to you, and you can ask another question for different choices of the constraint, just to avoid extended conversation in comments.2017-01-18
  • 0
    Ok!! Thank you!! :-) @Triatticus2017-01-20
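The boundary search worked out in the comments above can be reproduced with a short sketch (plain Python; the candidate points are taken from the derivation above):

```python
# Sketch of the boundary check discussed in the comments above.
# The interior critical point of f is (-3, 2), which violates x1 >= 0,
# so the maximum over x1, x2 >= 0 must lie on a boundary.

def f(x1, x2):
    return -(x1 + 3) ** 2 - (x2 - 2) ** 2

# Boundary x1 = 0: g(x2) = f(0, x2) is maximized at x2 = 2.
# Boundary x2 = 0: the unconstrained maximizer x1 = -3 is infeasible,
# so on x1 >= 0 the best feasible point on that edge is x1 = 0.
candidates = [(0.0, 2.0), (0.0, 0.0)]
best = max(candidates, key=lambda p: f(*p))
print(best, f(*best))  # (0.0, 2.0) -9.0
```

Comparing the candidate values confirms that $(0,2)$, with $f(0,2)=-9$, beats the corner $(0,0)$, with $f(0,0)=-13$.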

2 Answers

2

For this problem (due to convexity, as pointed out by Alex) the KKT conditions are necessary and sufficient. The Lagrangian is: $$L(x,y) = -(x_1+3)^2-(x_2-2)^2+y_1 x_1 + y_2 x_2$$ The KKT conditions are therefore: $$-2(x_1+3) + y_1 = 0$$ $$- 2(x_2-2) + y_2 = 0$$ $$x_1 y_1 = 0$$ $$x_2 y_2 = 0$$ $$x,y \geq 0$$ If $y_1=0$, then $x_1 = -3<0$, which contradicts $x_1\geq 0$; therefore $y_1 > 0$, so $x_1=0$ and $y_1=6$. If $x_2=0$, then $y_2=-4<0$, which is a contradiction. Therefore $x_2 > 0$, $y_2=0$, and $x_2 = 2$. So $x=(0,2)$ is the optimal solution.
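The KKT system above can be verified directly at the claimed point; a minimal sketch, plugging in $x=(0,2)$ and $y=(6,0)$ from the derivation:

```python
# Sketch: checking the KKT conditions at the claimed optimum x = (0, 2)
# with multipliers y = (6, 0), as derived above.

x1, x2 = 0.0, 2.0
y1, y2 = 6.0, 0.0

# Stationarity of L(x, y) = -(x1+3)^2 - (x2-2)^2 + y1*x1 + y2*x2
assert -2 * (x1 + 3) + y1 == 0
assert -2 * (x2 - 2) + y2 == 0
# Complementary slackness
assert x1 * y1 == 0 and x2 * y2 == 0
# Primal and dual feasibility
assert min(x1, x2, y1, y2) >= 0
print("KKT conditions hold at (0, 2)")
```

All five conditions hold, which by the convexity argument above certifies optimality.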

  • 0
    The third and fourth conditions are not the derivatives with respect to the Lagrange multipliers, right? So we take only the derivatives with respect to $x_1$ and $x_2$?2017-01-18
  • 1
    Correct. The third and fourth conditions are complementary slackness conditions, see [wikipedia](https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions).2017-01-18
0

Let $g(x_1,x_2) = -f(x_1,x_2) = (x_1 +3)^2 +(x_2 -2)^2$. Maximizing $f(x_1,x_2)$ is equivalent to minimizing $g(x_1,x_2)$.

Notice that $g(x_1,x_2) \geq 0, \forall x_1,x_2 \in \mathbb{R}.$ Because $g$ is strictly convex, you can solve the unconstrained problem and project the negative solutions to the zero value. The minimum of $g$ is zero when $x_1 = -3$ and $x_2 = 2$.

Thus, the maximum value of $f$ under the constraints $x_1,x_2 \geq 0$ is attained when $x_1 = 0$ and $x_2 = 2.$
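The project-to-zero argument can be sketched in a few lines (plain Python; the step relies on $g$ being a separable strictly convex quadratic, as stated above):

```python
# Sketch of the projection argument: take the unconstrained minimizer
# of g, then clip negative coordinates to 0 (valid here because g is a
# separable strictly convex quadratic).

def g(x1, x2):
    return (x1 + 3) ** 2 + (x2 - 2) ** 2

unconstrained = (-3.0, 2.0)                       # minimizer of g over R^2
projected = tuple(max(v, 0.0) for v in unconstrained)
print(projected, -g(*projected))  # (0.0, 2.0) -9.0
```

The projected point is $(0,2)$, and the maximum value of $f$ there is $-g(0,2)=-9$.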

  • 0
    Shouldn't the point $x_2$ be a root of both partial derivatives, so that it can be an extremum?2017-01-18
  • 0
    Or isn't this necessary?2017-01-18
  • 1
    Do you have a source for the nonconvexity/projection? Is it only for polynomials?2017-01-18
  • 1
    Why is nonnegativity relevant? Any function bounded from below can be made nonnegative by adding a constant without changing the location of the optimum.2017-01-18
  • 0
    We compute the partial derivatives just of $x_1$ and $x_2$ or also of $\lambda_1$ and $\lambda_2$ ? @AlexSilva2017-01-18
  • 0
    But in LinAlg's answer, the third and fourth equations in the system are not the derivatives with respect to the lambdas, are they? They are the lambdas multiplied by the constraints, or not? @AlexSilva2017-01-18
  • 1
    What if $g(x_1,x_2) = \max\{0, x_1 - x_2 + 1\}$ with $x \geq 0$? An unconstrained minimizer is $(-1,0)$. Projecting onto the nonnegative orthant we obtain $g(0,0) = 1$ while $g(0,1) = 0$.2017-01-18