This is taken from Nocedal & Wright's *Numerical Optimization*, 2nd edition, p. 279, Theorem 11.3:
Suppose $r:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is continuously differentiable in a convex set $D\subset \mathbb{R}^n$, and let $x^*\in D$ be a nondegenerate solution of the equation $r(x)=0$. I understand that, since $r(x^*)=0$, we can write
$r(x)=J(x)(x-x^*)+\int_0^1[J(x+t(x^*-x))-J(x)](x-x^*) \, dt$
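(To spell out the step I do follow: the fundamental theorem of calculus applied to $t\mapsto r(x+t(x^*-x))$ gives $r(x^*)-r(x)=\int_0^1 J(x+t(x^*-x))(x^*-x) \, dt$; since $r(x^*)=0$, this rearranges to $r(x)=\int_0^1 J(x+t(x^*-x))(x-x^*) \, dt$, and adding and subtracting $J(x)(x-x^*)$ gives the expression above.)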
The proof I'm following states that, for some ball $B(x^*,\delta)$ centered at $x^*$, we can obtain the following bound:
$||r(x)||\le 2||J(x^*)||\cdot||x-x^*|| + o(||x-x^*||)$.
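To rule out a misreading of the identity itself before asking, here is a quick numerical sanity check; the toy map $r$, its root at the origin, and the midpoint-rule quadrature are my own choices, purely for illustration:

```python
# Sanity check of the identity
#   r(x) = J(x)(x - x*) + \int_0^1 [J(x + t(x* - x)) - J(x)](x - x*) dt
# and of the bound ||r(x)|| <= 2||J(x*)|| ||x - x*||, on a toy example.
import numpy as np

def r(x):
    return np.array([np.exp(x[0]) - 1.0,
                     x[0] + x[1]**3 + x[1]])

def J(x):
    # Jacobian of r; J(x*) = [[1, 0], [1, 1]] is nonsingular,
    # so x* = (0, 0) is a nondegenerate root.
    return np.array([[np.exp(x[0]), 0.0],
                     [1.0, 3.0 * x[1]**2 + 1.0]])

x_star = np.zeros(2)
rng = np.random.default_rng(0)
n = 4000  # midpoint-rule nodes for the integral term

for radius in (1e-1, 1e-2, 1e-3):
    x = x_star + radius * rng.standard_normal(2)
    ts = (np.arange(n) + 0.5) / n  # midpoints of n subintervals of [0, 1]
    integral = sum((J(x + t * (x_star - x)) - J(x)) @ (x - x_star) for t in ts) / n
    identity_residual = np.linalg.norm(r(x) - (J(x) @ (x - x_star) + integral))
    bound = 2.0 * np.linalg.norm(J(x_star), 2) * np.linalg.norm(x - x_star)
    print(f"radius={radius:.0e}  identity residual={identity_residual:.1e}  "
          f"||r(x)||={np.linalg.norm(r(x)):.3e}  2||J(x*)||·||x-x*||={bound:.3e}")
```

For this toy example the two sides of the identity agree up to quadrature error, and the stated bound holds at each radius, so my difficulty is purely with the derivation.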
I'm trying to derive this error estimate, but I'm struggling to arrive at these two terms (my partial attempts at each are given after the two points below).
1. I can easily see that
$\left\|\int_0^1[J(x+t(x^*-x))-J(x)](x-x^*) \, dt\right\|\le ||x-x^*||\int_0^1||J(x+t(x^*-x))-J(x)|| \, dt$
However, I can't quite see how it (immediately?) follows that this is $o(||x-x^*||)$.
2. If I apply the triangle inequality to the expression for $||r(x)||$, I obtain the term $||J(x)(x-x^*)||$, which I can bound as $||J(x)(x-x^*)||\le||J(x)||\cdot||x-x^*||$, not as $||J(x^*)||\cdot||x-x^*||$. In other words, I can't see how to bound this by the Jacobian evaluated at the root $x^*$.
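For what it's worth, here is how far I get on each point; I'd appreciate confirmation or correction.

On 1.: for $t\in[0,1]$ we have $||x+t(x^*-x)-x^*||=(1-t)||x-x^*||\le||x-x^*||$, so both $x+t(x^*-x)$ and $x$ stay in $B(x^*,\delta)$ and tend to $x^*$ as $x\to x^*$. Is the claim then just continuity of $J$ at $x^*$, which would force $\int_0^1||J(x+t(x^*-x))-J(x)|| \, dt\to 0$?

On 2.: by nondegeneracy $J(x^*)$ is nonsingular, so $||J(x^*)||>0$, and by continuity I can shrink $\delta$ so that $||J(x)-J(x^*)||\le||J(x^*)||$ on $B(x^*,\delta)$; then $||J(x)||\le||J(x^*)||+||J(x)-J(x^*)||\le 2||J(x^*)||$. Is this where the factor of $2$ comes from?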
Any help with this would be greatly appreciated! :)