
I have a fundamental question about Lagrange multipliers. Here it is:

I have a function to maximize with respect to a parameter, say $\theta$, subject to two constraints. Let's assume that the first constraint is multiplied by the Lagrange multiplier $\alpha_1$ and the second constraint by the Lagrange multiplier $\alpha_2$. The function to be maximized is concave, and the constraints are either convex or concave in $\theta$.

Question: When we obtain two equations for $\alpha_1$ and $\alpha_2$, can we say that there is a unique solution for these two multipliers? That is, will the parameters be monotone in the equations?

  • By "these two parameters", you mean the multipliers? How can you maximize a function with respect to one parameter subject to two constraints? Unless you have some degenerate situation, the number of constraints shouldn't be more than the number of degrees of freedom. Also, what do you mean by "two equations for $\alpha_1$ and $\alpha_2$"? You should get three equations, from differentiating the modified objective function with respect to $\theta$, $\alpha_1$ and $\alpha_2$, and you need to solve them for those three -- in which sense do you have two equations for two of them? (2012-10-10)
  • Yes, the $\alpha$ are Lagrange multipliers. I will answer the rest when I am back home. (2012-10-10)
  • @joriki I edited the question. Please have a look at it. When I solve this Lagrangian, I find one equation of the type $y=f(\alpha_0,\alpha_1,x)$, where $f$ is some function, and two other equations with the two unknowns $\alpha_0$ and $\alpha_1$, say $f_1(\alpha_0,\alpha_1,x)=k_1$ and $f_2(\alpha_0,\alpha_1,x)=k_2$, where $k_1$ and $k_2$ are some numbers. As I have two unknowns and two equations, I can solve them. However, I have another Lagrangian of a similar type, which also has two Lagrange multipliers, and in the end these equations are coupled. So finally I have 4 equations with 4 unknowns. (2012-10-10)
  • I still don't understand. Those three equations are all of the same type; they all have $\alpha_0$, $\alpha_1$ and $x$ as variables; I don't see why you're singling out two of them and calling them two equations for two unknowns. I also don't understand what you mean by "two unknowns for $\alpha_0$ and $\alpha_1$", since $\alpha_0$ and $\alpha_1$ *are* the unknowns. (2012-10-10)
  • @joriki, yes, you are right. I meant $\alpha_1$ and $\alpha_2$ as the variables of the two equations. I had better write them out here. Please wait. (2012-10-10)
  • @joriki done. It should be clear now, I hope. (2012-10-10)
  • I'm afraid I can't help you with this one -- but Per seems knowledgeable enough :-). (2012-10-10)
  • @joriki ahahaha, funny. Yes, @Per Manne seems knowledgeable. I hope he can elaborate on this matter a bit more. (2012-10-11)

1 Answer


The number of constraints does not matter here, as long as there are fewer constraints than variables. If the objective function is strictly concave and the constraints are strictly convex in the variable $\theta$, then there is an easy uniqueness result. (There is also a more complicated result, where you only look at the behavior of these functions along the admissible set, and which involves what is called the bordered Hessian.)
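
For reference, with two constraints the bordered Hessian mentioned above takes the block form (a sketch of the standard construction; the notation is mine) $$\bar H(\theta,\alpha)=\begin{pmatrix} 0 & 0 & \nabla g_1(\theta)^{\mathsf T} \\ 0 & 0 & \nabla g_2(\theta)^{\mathsf T} \\ \nabla g_1(\theta) & \nabla g_2(\theta) & \nabla^2_{\theta\theta} L(\theta,\alpha)\end{pmatrix},$$ and the sign pattern of its last $n-2$ leading principal minors is what replaces the plain concavity/convexity check when you only look along the admissible set.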

Let $f(\theta)$ be the function you want to maximize, where $\theta$ has the components $\theta=(\theta_1,\dots,\theta_n)$, and let the constraints be $g_1(\theta)=b_1$ and $g_2(\theta)=b_2$. Introduce the Lagrangian function $$L=L(\theta,\alpha)=f(\theta)-\alpha_1(g_1(\theta)-b_1)-\alpha_2(g_2(\theta)-b_2).$$ The Lagrangian conditions are $${\partial L \over \partial \theta_i}=0,\qquad i=1,\dots,n, \\ {\partial L\over \partial \alpha_j}=0,\qquad j=1,2,$$ which means that the optimal solution $(\theta^*,\alpha^*)$ should be a stationary point of $L$.
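
For concreteness, here is a minimal SymPy sketch of this setup on a toy instance of my own (strictly concave $f$, $n=3$, two linear and hence convex constraints; none of these choices come from the original question), solving the full stationarity system in $(\theta,\alpha)$:

```python
import sympy as sp

# Toy instance (my own, for illustration): strictly concave objective,
# n = 3 components of theta, and two linear (hence convex) constraints.
th1, th2, th3, a1, a2 = sp.symbols('theta1 theta2 theta3 alpha1 alpha2')

f  = -(th1**2 + th2**2 + th3**2)   # strictly concave
g1 = th1 + th2 - 1                 # constraint g1(theta) = b1, written as g1 - b1
g2 = th2 + th3 - 1                 # constraint g2(theta) = b2, written as g2 - b2

# The Lagrangian from the answer: L = f - alpha1*(g1 - b1) - alpha2*(g2 - b2)
L = f - a1*g1 - a2*g2

# Stationarity in all n + 2 variables: dL/dtheta_i = 0 and dL/dalpha_j = 0
eqs = [sp.diff(L, v) for v in (th1, th2, th3, a1, a2)]
print(sp.solve(eqs, [th1, th2, th3, a1, a2], dict=True))
# -> [{theta1: 1/3, theta2: 2/3, theta3: 1/3, alpha1: -2/3, alpha2: -2/3}]
```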

Cases where the gradient vectors $\nabla g_1(\theta)$ and $\nabla g_2(\theta)$ are linearly dependent at some admissible point have to be checked separately, since Lagrange's method is not guaranteed to find the answer there.

If $\nabla g_1(\theta)$ and $\nabla g_2(\theta)$ are linearly independent at all admissible points (this is often the case), then the Lagrange conditions above are necessary at an optimal point, and $\alpha^*$ will be uniquely determined.
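
One way to see why $\alpha^*$ is pinned down: the first block of Lagrange conditions says $\nabla f(\theta^*)=\alpha_1\nabla g_1(\theta^*)+\alpha_2\nabla g_2(\theta^*)$, a linear system in $(\alpha_1,\alpha_2)$ whose coefficient matrix has full column rank exactly when the two gradients are independent. A small numerical check, reusing the toy instance above (a sketch, not part of the original answer):

```python
import numpy as np

# At the stationary point theta* = (1/3, 2/3, 1/3) from the SymPy sketch,
# the multipliers solve grad f = alpha1 * grad g1 + alpha2 * grad g2.
theta   = np.array([1/3, 2/3, 1/3])
grad_f  = -2 * theta                   # gradient of f = -||theta||^2
grad_g1 = np.array([1.0, 1.0, 0.0])    # gradient of g1 = theta1 + theta2
grad_g2 = np.array([0.0, 1.0, 1.0])    # gradient of g2 = theta2 + theta3

A = np.column_stack([grad_g1, grad_g2])   # 3x2, full column rank => unique alpha
alpha, *_ = np.linalg.lstsq(A, grad_f, rcond=None)
print(alpha)   # [-2/3, -2/3], matching the symbolic solution
```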

Now, if $f(\theta)$ is strictly concave and each term $-\alpha_j^*\,g_j(\theta)$ is concave (this holds when $g_j$ is strictly convex and $\alpha_j^*\geq 0$, and also when $g_j$ is linear, regardless of the sign of $\alpha_j^*$), then $L(\cdot,\alpha^*)$ will be strictly concave in $\theta$, which means that it can have at most one stationary point $\theta^*$. Hence the solution, if it exists, is unique.
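
As a quick sanity check of this uniqueness argument (again on the toy instance above, where the $g_j$ are linear, so $L(\cdot,\alpha^*)$ is strictly concave even though $\alpha^*<0$), one can minimize $-L(\cdot,\alpha^*)$ from several random starting points and observe that they all land on the same $\theta^*$:

```python
import numpy as np
from scipy.optimize import minimize

# Fix alpha at the multipliers found above and check that L(., alpha*)
# has a single stationary point by multi-start minimization of -L.
a1 = a2 = -2/3

def neg_L(th):
    t1, t2, t3 = th
    f  = -(t1**2 + t2**2 + t3**2)
    g1 = t1 + t2 - 1
    g2 = t2 + t3 - 1
    return -(f - a1*g1 - a2*g2)

rng = np.random.default_rng(0)
for _ in range(3):
    res = minimize(neg_L, rng.normal(size=3))
    print(np.round(res.x, 6))   # always (1/3, 2/3, 1/3)
```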

  • Thanks again. I will ask it in a separate question, as it changes this question a lot. (2012-10-11)
  • @Per Manne $g_j(\theta)$ doesn't necessarily need to be strictly convex, for $L(\cdot, \alpha^*)$ to be strictly concave, does it? For example, what happens if $f(\theta)$ is strictly concave and $g_j(\theta)$ are both linear? (2016-01-27)