
$\mathbf{Q}$: We consider a standard form linear programming problem, where $\mathbf{A} \in \mathbb{R}^{m \times n}$, $\mathbf{c} \in \mathbb{R}^{n}$, $\mathbf{b} \in \mathbb{R}^{m}$, and the decision variable $\mathbf{x}$ is in $\mathbb{R}^n$. Prove or disprove:

If there are several optimal solutions, then the set of optimal solutions lies in the convex hull of all the basic feasible solutions.

$\mathbf{My\ answer}$: True.

I chose this answer because it seemed the most intuitive, and I thought of proving it by contradiction, but unfortunately I am unsure how to go about it. Some help would be deeply appreciated.

2 Answers


You haven't told us what your standard form is. I'll assume that it's

$\min c^{T}x$

$Ax=b$

$x \geq 0$

This is false, for the simple reason that the set of optimal solutions can be unbounded, and an unbounded set cannot be contained in the convex hull of finitely many points (the convex hull of a finite set is always bounded).

For example, try

$\min x_{1}-x_{2}$

$x_{1}-x_{2}=0$

$x \geq 0$

Here, the only BFS is $x=0$, but there are many other optimal solutions such as $x_{1}=1$, $x_{2}=1$.
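A quick numeric sanity check of this counterexample, in plain Python with no solver; the helper names (`objective`, `is_feasible`) are just for illustration:

```python
# Counterexample: minimize x1 - x2  subject to  x1 - x2 = 0,  x >= 0.
# Every point (t, t) with t >= 0 is feasible with objective value 0,
# so the optimal set is the unbounded ray {(t, t) : t >= 0},
# even though the only basic feasible solution is x = 0.

def objective(x):
    # c^T x with c = (1, -1)
    return x[0] - x[1]

def is_feasible(x, tol=1e-9):
    # Check the equality constraint x1 - x2 = 0 and nonnegativity.
    return abs(x[0] - x[1]) <= tol and x[0] >= -tol and x[1] >= -tol

optimal_value = objective((0.0, 0.0))  # value at the only BFS, x = 0

for t in [0.0, 1.0, 10.0, 1e6]:
    x = (t, t)
    assert is_feasible(x)                   # (t, t) satisfies all constraints
    assert objective(x) == optimal_value    # and attains the same optimal value
```

Since the optimal set is an unbounded ray, no convex hull of finitely many points (in particular, of the basic feasible solutions) can contain it.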

  • Ah yes, this makes things quite a bit clearer. Am I right to say that, in precise terms, an unbounded set of optimal solutions contains infinitely many optimal solutions, and such a set cannot be contained in the convex hull of finitely many points? Do correct me if I am wrong. Thanks :) 2017-02-25
  • You can certainly have a bounded but infinite collection of optimal solutions. Consider $\max x_{1}+x_{2}$, $x_{1}+x_{2}=1$, $x \geq 0$. In that example there are only two BFS's, but all of the optimal solutions can be written as convex combinations of the two BFS's. 2017-02-25
  • Alright, I get the idea now. Thanks for the insight and clarification! :) 2017-02-26

For the linear programming problem

$${\begin{aligned}&{\text{maximize}}&&\mathbf {c} ^{\mathrm {T} }\mathbf {x} \\&{\text{subject to}}&&A\mathbf {x} \leq \mathbf {b} \\&{\text{and}}&&\mathbf {x} \geq \mathbf {0} \end{aligned}}$$ if we have feasible solutions $\mathbf {x}_1$ and $\mathbf {x}_2$ providing the maximum of $\mathbf {c} ^{\mathrm {T} }\mathbf {x} $, then a convex combination $\lambda \mathbf{x}_1 + (1 - \lambda)\mathbf{x}_2$ is also feasible (because the constraints are linear). Moreover, as soon as $\mathbf {c} ^{\mathrm {T} }\mathbf {x}_1 = \mathbf {c} ^{\mathrm {T} }\mathbf {x}_2$ we have

$$\mathbf {c} ^{\mathrm {T} }\left(\lambda \mathbf{x}_1 + (1 - \lambda)\mathbf{x}_2\right) = \mathbf {c} ^{\mathrm {T} }\mathbf {x}_1 = \mathbf {c} ^{\mathrm {T} }\mathbf {x}_2.$$ So the vector $\lambda \mathbf{x}_1 + (1 - \lambda)\mathbf{x}_2$ solves the problem as well.

This reasoning extends straightforwardly to more than two optimal solutions $\mathbf{x}_1,\dots,\mathbf{x}_k$: any convex combination of optimal solutions is feasible and attains the same objective value, hence is also optimal.
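The convexity argument can be checked numerically. Below is a small sketch in plain Python, using the illustrative LP $\max x_1 + x_2$ subject to $x_1 + x_2 \leq 1$, $x \geq 0$ (chosen here as an example; the helper names are hypothetical):

```python
# Verify that convex combinations of two optimal solutions stay feasible
# and keep the same objective value, for: max x1 + x2 s.t. x1 + x2 <= 1, x >= 0.

def dot(c, x):
    # c^T x
    return sum(ci * xi for ci, xi in zip(c, x))

def combo(lam, x1, x2):
    # Convex combination lam * x1 + (1 - lam) * x2
    return tuple(lam * a + (1 - lam) * b for a, b in zip(x1, x2))

c = (1.0, 1.0)
x1, x2 = (1.0, 0.0), (0.0, 1.0)   # two optimal BFS's, both with c^T x = 1

for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x = combo(lam, x1, x2)
    assert abs(dot(c, x) - 1.0) < 1e-12     # objective value is preserved
    assert x[0] >= 0 and x[1] >= 0          # nonnegativity is preserved
    assert abs(x[0] + x[1] - 1.0) < 1e-12   # the linear constraint still holds
```

Because the constraints are linear, feasibility is preserved under convex combinations, and linearity of the objective preserves the optimal value, exactly as argued above.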

  • Hmm, I get the idea. But what about the case where there is only one basic feasible solution that is optimal, yet there are infinitely many optimal solutions? Note that the question uses a standard form LP, whose feasible region is a polyhedron. 2017-02-25
  • The counterexample I thought of is as follows: min $x_2$ s.t. $x_2 = 0$, $x_1, x_2 \geq 0$. Then the only basic feasible solution is $(1,0)$. 2017-02-25
  • @Stoner, I could be wrong, but all vectors $(\lambda, 0)$ are basic feasible solutions for your example, aren't they? And this set is equal to the convex hull of itself. 2017-02-25
  • The example I gave has the set of optimal solutions $[0, \infty) \times \{1\}$, which is unbounded. In particular, at $(1,0)$ there are 2 active constraints and the rank is 2, so $(1,0)$ is a basic feasible solution (and the only one) that is optimal. 2017-02-25
  • Ahh, hold on, I think I get your point now. Am I right to say that, since the convex hull of all the basic feasible solutions in my example has only one point, $(1,0)$, all the optimal solutions also lie within this convex hull? Sorry for the confusion; I am still relatively new to this topic. 2017-02-25
  • @Stoner, I'm also new to this topic and will try to elaborate on that :) As I understand it, the vectors $(\lambda, 0)$ are also basic solutions, as they have 2 active constraints. Could you clarify why $(1,0)$ is the only basic solution? Do you mean that basic solutions must be linearly independent? I've not seen this condition in the literature. 2017-02-25
  • Oh my, alright, this is embarrassing. I made an error in the counterexample above. It should be: min $x_2$ s.t. $x_2 = 1$, $x_1, x_2 \geq 0$. So the only BFS is $(0,1)$, and there are infinitely many optimal solutions such as $(1,1)$. I hope things are clearer now. I will also add that the question asks about standard form LPs, which have only equality constraints, not the inequality constraints $Ax \leq b$ that you have used. 2017-02-25