Newton's method of root-finding uses the derivative at the current iterate $x_i$ to find the value of $x_{i+1}$ on the next iteration. But I have read that a function can still have a root $\tau$ where $f'(\tau) = 0$. Is there something special about the root in this case? I can't see how Newton's method can work if the derivative is $0$ at $\tau$.
Newton's method and derivative is 0 at root?
3
numerical-methods
-
It usually works, but very inefficiently (i.e., with very slow convergence). – 2012-10-11
-
Is there something special about the root at $\tau$? Does it have multiplicity > 1? – 2012-10-11
-
Well, yes, if $f(\tau)=f'(\tau)=0$ (and $f$ is analytic at $\tau$) then by definition, the zero has multiplicity greater than 1. – 2012-10-11
-
Try thinking about the [graphical interpretation](http://en.wikipedia.org/wiki/File:NewtonIteration_Ani.gif) of the situation for say $f(x)=x^2$ (for which you have $0=f(0)=f'(0)$). – 2012-10-11
-
What theorem is that based on? – 2012-10-11
-
Usually the multiplicity of a root of a sufficiently differentiable function is *defined* by how many of it and its derivatives vanish at a point - it isn't based on anything else like an outside theorem. If you're talking about polynomials, then the compatibility of this definition can be derived by setting $p(x)=(x-p)^e g(x)$, where $g(p)\ne0$, and then iteratively differentiating and evaluating with standard rules. – 2012-10-11
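The polynomial case mentioned in the last comment can be checked directly. Here is a minimal Python sketch (the helper names `poly_mul`, `poly_deriv`, `poly_eval` and the example $p(x)=(x-2)^3(x+1)$ are my own choices, not from the thread) verifying that for a root of multiplicity $3$, the polynomial and its first two derivatives vanish there while the third does not:

```python
# Sketch: checking the multiplicity definition on p(x) = (x - 2)^3 (x + 1),
# which has a root of multiplicity 3 at x = 2 (and g(x) = x + 1, g(2) != 0).
# Polynomials are coefficient lists, lowest degree first.

def poly_mul(a, b):
    """Multiply two coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_deriv(a):
    """Differentiate a coefficient list once."""
    return [i * ai for i, ai in enumerate(a)][1:]

def poly_eval(a, x):
    """Evaluate a coefficient list at x."""
    return sum(ai * x**i for i, ai in enumerate(a))

p = [1.0]
for _ in range(3):
    p = poly_mul(p, [-2.0, 1.0])   # multiply by (x - 2) three times
p = poly_mul(p, [1.0, 1.0])        # multiply by g(x) = x + 1

# vals[k] = value of the k-th derivative of p at x = 2
vals = []
for _ in range(4):
    vals.append(poly_eval(p, 2.0))
    p = poly_deriv(p)
```

As expected, `vals[0]`, `vals[1]`, and `vals[2]` are all zero, and only the third derivative is nonzero at the root.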
1 Answer
4
To see how Newton's method typically handles a root of multiplicity $n$, look at the toy example $f(x)=x^n$: Then $$x_{k+1}=x_k-\frac{x_k^n}{nx_k^{n-1}}=\Bigl(1-\frac1n\Bigr)x_k,$$ so that $$x_k=\Bigl(1-\frac1n\Bigr)^k x_0,$$ which converges only linearly, much more slowly than the quadratic convergence of Newton's method when applied to a simple zero.
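This linear convergence is easy to see numerically. A minimal Python sketch (the names `newton`, `f`, and `fp` are my own, not from the answer), running the iteration on $f(x)=x^3$ from $x_0=1$:

```python
# Sketch: Newton's method on f(x) = x^n with n = 3, whose root at 0 has
# multiplicity 3. Each step shrinks the iterate by roughly the constant
# factor (1 - 1/n) = 2/3: linear convergence, not quadratic.

def newton(f, fprime, x0, steps):
    """Run a fixed number of Newton iterations starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

n = 3
f = lambda x: x**n
fp = lambda x: n * x**(n - 1)

x = 1.0
for k in range(1, 6):
    x = newton(f, fp, x, 1)
    # After k steps the iterate is (1 - 1/n)^k * x_0, up to rounding.
    print(k, x)
```

After 10 steps the iterate is still only about $(2/3)^{10}\approx 0.017$ away from the root, whereas at a simple zero the number of correct digits would roughly double each step.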