How do you enforce positivity constraints in non-linear optimization (e.g. a constraint $x > 0$)? I remember there being a good reason why most models use non-negativity constraints instead.
Positivity constraints in optimization
1
optimization
-
How about the KKT conditions? http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions – 2012-01-18
-
@matt: They handle non-negativity and equality constraints. – 2012-01-18
-
You could use "log barrier methods". I will post an answer with more detail. – 2012-01-19
1 Answer
3
Consider the following: $$\begin{array}{rll} \min_x& f(x)&\\ \text{subject to}& g_i(x)\leq0 &\text{for each }i\\ & h_j(x)=0 &\text{for each }j \\ \end{array}$$
For $\alpha>0$ we define the log barrier penalty function, $P_\alpha$, to be:
$$ P_\alpha(x)=f(x)-\frac1\alpha\sum_i\log(-g_i(x))+\alpha\sum_jh_j(x)^2 $$
where $x$ must be strictly feasible, i.e. $g_i(x)<0$ for each $i$, in order for the log term to be defined.
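Specialising to the positivity constraint from the question (taking $g(x)=-x$, an instantiation not spelled out above): $$P_\alpha(x)=f(x)-\frac1\alpha\log(x),$$ which is defined precisely on the strictly feasible set $x>0$, and blows up as $x\to0^+$.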
We seek to minimise $P_\alpha(x)$. The idea is that the boundary of the feasible region (i.e. where some $g_i(x)=0$) acts as a barrier: as $g_i(x)\to0^-$ the term $-\log(-g_i(x))$ tends to $+\infty$, so the minimiser is pushed away from the boundary. As $\alpha\to\infty$ the barrier weakens and the minimisers of $P_\alpha$ approach the constrained optimum.
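Here is a minimal 1-D sketch of the scheme above for the constraint $x>0$ (i.e. $g(x)=-x$). The objective $f(x)=(x+1)^2$ and the bisection helper `barrier_argmin` are illustrative choices of mine, not part of the answer; bisection works here because $P_\alpha$ is strictly convex on $x>0$, so its derivative is monotone increasing.

```python
import math

def barrier_argmin(dP, lo=1e-12, hi=1e6, iters=200):
    """Bisection root-finder for the (monotone increasing) derivative
    of a strictly convex barrier function P_alpha on (lo, hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dP(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example objective: f(x) = (x + 1)^2, constrained to x > 0.
# Its unconstrained minimum is at x = -1, so the constrained optimum
# sits on the boundary x = 0.
df = lambda x: 2.0 * (x + 1.0)

for alpha in (1.0, 10.0, 100.0, 1000.0):
    # With g(x) = -x, d/dx P_alpha(x) = f'(x) - 1/(alpha * x).
    dP = lambda x, a=alpha: df(x) - 1.0 / (a * x)
    x_star = barrier_argmin(dP)
    # x_star stays strictly feasible but approaches the boundary 0
    # from inside as alpha grows.
    print(f"alpha={alpha:7.1f}  x*={x_star:.6f}")
```

In practice one solves a sequence of such unconstrained problems for increasing $\alpha$, warm-starting each from the previous minimiser.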
-
1Should that be $\log(-g_i(x))$? I think you have an extra left parenthesis. – 2014-11-17