Suppose I have a function $f : \mathbb R^n \to \mathbb R$ that is a sum of copies of a given strictly convex function $g : \mathbb R \to \mathbb R$ of a single variable, i.e. $f(x) = g(x_1) + g(x_2) + \dots + g(x_n)$, with the same $g$ applied to every coordinate.
Since $f$ is in turn strictly convex, its unique minimum can be found using any standard method for convex programming. However, because $f$ is separable, the minimum can also be found coordinate-wise: each $x_i$ is set to the minimizer of $g$, which can be computed by any single-variable method (e.g. bisection applied to the root of $g'$).
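As a concrete sketch of the coordinate-wise approach (using an assumed example $g(t) = t^2 + e^t$, since the question leaves $g$ abstract), one bisection on $g'$ suffices, and the unconstrained minimizer of $f$ simply repeats that value in every coordinate:

```python
import math

def g(t):
    # assumed strictly convex example, not from the question
    return t * t + math.exp(t)

def g_prime(t):
    return 2.0 * t + math.exp(t)

def bisect_root(fn, lo, hi, tol=1e-12):
    """Root of fn on [lo, hi] by bisection; fn(lo) and fn(hi) must differ in sign."""
    flo = fn(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = fn(mid)
        if (fmid > 0) == (flo > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Minimize g once; the separable structure repeats it across coordinates.
t_star = bisect_root(g_prime, -10.0, 10.0)
n = 5
x_star = [t_star] * n  # unconstrained minimizer of f
```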
Now, I believe it is true that if I want to find the minimum of $f$ constrained to an axis-aligned "hyperrectangle" (in other words, with given lower and upper bounds $l$ and $u$ on $x$), I can simply find the unconstrained minimum as above and then project it, component-wise, onto the rectangle (i.e., onto the intervals $[l_i, u_i]$). This works because the box constraint is itself separable, so the problem still decomposes into $n$ independent one-dimensional problems.
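This box-projection claim is easy to check numerically. A minimal sketch (again with an assumed $g(t) = t^2 + e^t$): for each interval, compare the clipped unconstrained minimizer against a direct minimization of $g$ over that interval by ternary search:

```python
import math

def g(t):
    # assumed strictly convex example, not from the question
    return t * t + math.exp(t)

def ternary_min(fn, lo, hi, tol=1e-10):
    """Minimizer of a strictly convex fn on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if fn(m1) < fn(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def clip(t, lo, hi):
    # component-wise projection onto the interval [lo, hi]
    return min(max(t, lo), hi)

t_star = ternary_min(g, -10.0, 10.0)  # unconstrained minimizer of g

# Intervals containing, right of, and left of t_star: in each case the
# box-constrained minimizer of g equals the clipped unconstrained one.
for lo, hi in [(-1.0, 1.0), (0.5, 2.0), (-3.0, -2.0)]:
    assert abs(clip(t_star, lo, hi) - ternary_min(g, lo, hi)) < 1e-6
```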
My question is: what happens in the case of more general linear constraints (polytopes)? I believe that for general convex functions $f$, not much can be said. In particular, simply projecting the unconstrained minimum onto the polytope doesn't work in general (see a conjecture on norms and convex functions over polytopes).
However, when $f$ has the special form above, one natural guess would be that the $\ell_1$ projection of the unconstrained minimum onto the polytope yields the constrained minimizer. Is this actually true?