As written above, the nice thing about the KKT conditions is that they let you verify any candidate solution as if it came from a black box, without a reference solver, just by validating its $ \lambda $ against the conditions.
While there are many solvers for this problem, I'll add another approach: Projected Gradient Descent.
The Problem Statement
$ \begin{align*} \arg \min_{x} & \quad & \frac{1}{2} \left\| A x - b \right\|_{2}^{2} \\ \text{subject to} & \quad & C x \leq d \end{align*} $
The Solution
Now, if we had a closed form projection onto the set defined by the linear inequalities (a Convex Polytope / Convex Polyhedron), which is a Linear Inequality Constrained Least Squares problem by itself, using Projected Gradient Descent would be easy:
1. Gradient Descent step.
2. Project the solution onto the inequality set.
3. Unless converged, go to (1).
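To make the loop above concrete, here is a minimal NumPy sketch. Since the half-space projection hasn't been derived yet at this point, it uses the nonnegative orthant $ x \geq 0 $ as a stand-in constraint set, whose projection is a simple closed form (clipping); the data and variable names are my own, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
step = 1.0 / L

x = np.zeros(5)
for _ in range(500):
    grad = A.T @ (A @ x - b)     # (1) Gradient step on 0.5 * ||A x - b||_2^2
    x = x - step * grad
    x = np.maximum(x, 0.0)       # (2) Projection onto {x : x >= 0}
```

With a step size of $ 1 / L $ the objective decreases monotonically, and every iterate stays feasible because the projection is applied after each gradient step.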
Yet another way is to use Alternating Projections.
Since the set:
$ \mathcal{S} = \left\{ x \mid C x \leq d \right\} $
Can be decomposed into:
$ \mathcal{S} = \bigcap_{i = 1}^{l} {\mathcal{S}}_{i} = \bigcap_{i = 1}^{l} \left\{ x \mid {c}_{i}^{T} x \leq {d}_{i} \right\} $
Where $ {c}_{i} $ is the $ i $ -th row of $ C $.
Projection onto Half Space
In the above, each $ {\mathcal{S}}_{i} $ is a half space, and the projection onto a half space is known in closed form:
$ \begin{align*} \arg \min_{x} & \quad & \frac{1}{2} \left\| x - y \right\|_{2}^{2} \\ \text{subject to} & \quad & {c}^{T} x \leq d \end{align*} $
The solution is given by:
$ \hat{x} = y - \lambda c, \quad \lambda = \max \left\{ \frac{ {c}^{T} y - d }{ \left\| c \right\|_{2}^{2} }, 0 \right\} $
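As a quick sanity check of this formula, here is a small NumPy sketch (the function name and numbers are my own). For a point outside the half space, the projection lands exactly on the boundary hyperplane $ {c}^{T} x = d $; a point already inside is left unchanged:

```python
import numpy as np

def project_halfspace(y, c, d):
    # lambda = max((c^T y - d) / ||c||_2^2, 0)
    lam = max((c @ y - d) / (c @ c), 0.0)
    return y - lam * c

c = np.array([1.0, 2.0])
d = 1.0
y = np.array([3.0, 1.0])          # c^T y = 5 > d, so y is outside the half space
x_hat = project_halfspace(y, c, d)  # lambda = 4 / 5, x_hat = [2.2, -0.6]
```

Note that $ \lambda $ here is exactly the KKT multiplier of the single inequality constraint: it is zero when the constraint is inactive and positive when the constraint is active.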
Now the solution becomes:
1. Gradient Descent step.
2. Project the solution onto the inequality set by alternating projections:
   - Project onto the 1st half space.
   - Project onto the 2nd half space.
   - ...
   - Project onto the $ l $ -th half space.
3. Unless converged, go to (1).
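The full scheme above can be sketched in NumPy as follows (a sketch under my own variable names, data, and tolerances, mirroring the structure of the MATLAB code below):

```python
import numpy as np

def project_polyhedron(y, C, d, tol=1e-8, max_sweeps=1000):
    # Cyclic alternating projections onto the half spaces {x : C[i] x <= d[i]}
    x = y.copy()
    row_norm_sq = np.sum(C ** 2, axis=1)
    for _ in range(max_sweeps):
        if np.max(C @ x - d) <= tol:
            break
        for i in range(C.shape[0]):
            lam = max((C[i] @ x - d[i]) / row_norm_sq[i], 0.0)
            x = x - lam * C[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
b = rng.standard_normal(30)
C = rng.standard_normal((6, 4))
d = rng.random(6)                        # d > 0, so x = 0 is feasible

step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1 / L
x = np.zeros(4)
for _ in range(300):
    x = x - step * (A.T @ (A @ x - b))   # (1) Gradient step
    x = project_polyhedron(x, C, d)      # (2) Projection by alternating projections
```

One caveat worth keeping in mind: cyclic alternating projections converge to *some* point of the intersection, not necessarily the nearest one (Dykstra's algorithm would give the exact projection), so the inner loop returns a feasible point rather than the exact Euclidean projection. In practice this is usually good enough for the outer iteration.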
The code for the Gradient Descent is given by:
mAA = mA.' * mA;
vAb = mA.' * vB;
vX  = zeros([numCols, 1]);

vObjVal(1) = hObjFun(vX);
for ii = 2:numIterations
    vG = (mAA * vX) - vAb; % Gradient of 0.5 * || A x - b ||_2^2
    vX = vX - (stepSize * vG); % Gradient Descent step
    vX = ProjectOntoLinearInequality(vX, mC, vD, stopThr); % Projection onto the feasible set
    vObjVal(ii) = hObjFun(vX);
end
The Projection (Alternating Projection):
function [ vX ] = ProjectOntoLinearInequality( vY, mC, vD, stopThr )

numConst = size(mC, 1);
vCNorm   = sum(mC .^ 2, 2); % Squared norm of each row of C

vX     = vY;
vRes   = (mC * vX) - vD;
maxRes = max(vRes);

while(maxRes > stopThr)
    for ii = 1:numConst
        % Projection onto the ii-th half space
        paramLambda = max(((mC(ii, :) * vX) - vD(ii)) / vCNorm(ii), 0);
        vX = vX - (paramLambda * mC(ii, :).');
    end
    vRes   = (mC * vX) - vD;
    maxRes = max(vRes);
end

end
The result:

The full code (including validation against CVX) is available in my StackExchange Math Q73712 GitHub Repository.