A problem that often comes up is minimizing a function $f(x_1,\ldots,x_n)$ subject to a constraint $g(x_1,\ldots,x_n)=0$. In general this problem is very hard. When $f$ is convex and $g$ is affine, there are well-known algorithms to solve it, but in many cases $g$ is not affine. For general $g$ the problem is hopelessly hard, but what if the constraint is easy to solve on its own? In particular, suppose that given $x_1,\ldots,x_{n-1}$, Newton's method on the constraint $g(x_1,\ldots,x_n)=0$ easily finds $x_n$. Are there effective algorithms for the constrained optimization problem in that case?
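For concreteness, here is a minimal sketch of the setting I have in mind, with a hypothetical spherical constraint (the names `g`, `dg_dxn`, and `solve_constraint_for_xn` are just illustrative, not from any library):

```python
import numpy as np

def g(x):
    # Hypothetical non-affine constraint: the point lies on the unit sphere.
    return np.sum(x**2) - 1.0

def dg_dxn(x):
    # Partial derivative of g with respect to the last coordinate x_n.
    return 2.0 * x[-1]

def solve_constraint_for_xn(x_rest, xn0=1.0, tol=1e-12, max_iter=50):
    # Newton's method in the single variable x_n,
    # holding x_1, ..., x_{n-1} fixed.
    x = np.append(x_rest, xn0)
    for _ in range(max_iter):
        step = g(x) / dg_dxn(x)
        x[-1] -= step
        if abs(step) < tol:
            break
    return x[-1]

print(solve_constraint_for_xn(np.array([0.3, 0.4])))  # ≈ sqrt(1 - 0.25)
```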
To solve these kinds of problems I have tried forming the Lagrangian and applying Newton's method directly to the resulting (nonlinear) first-order conditions, but this does not converge. Something that does work is to add a penalty term for violating the constraint to the objective function, similar to how the barrier method handles inequality constraints. Unfortunately (but as expected) this is not very fast at reaching an accurate answer.
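To illustrate what I mean by the penalty approach, here is a rough sketch with placeholder choices of $f$ and $g$ (not my actual problem), using `scipy.optimize.minimize` as the inner unconstrained solver and a geometrically increasing penalty weight $\mu$:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Placeholder objective: squared distance to a point outside the sphere.
    return np.sum((x - np.array([2.0, 0.0, 0.0]))**2)

def g(x):
    # Placeholder non-affine constraint: x on the unit sphere.
    return np.sum(x**2) - 1.0

x = np.full(3, 0.1)
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Minimize f plus a quadratic penalty on constraint violation,
    # warm-starting each solve from the previous solution.
    penalized = lambda x, mu=mu: f(x) + mu * g(x)**2
    x = minimize(penalized, x).x
print(x, g(x))  # x drifts toward the constrained minimizer; g(x) -> 0 only slowly
```

The constraint violation shrinks roughly like $1/\mu$, which is exactly the slow convergence to an accurate answer that I am complaining about.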