A numerical analysis book I'm reading says that, using the Newton error formula, we can find an expression for the number of correct digits in an approximation produced by Newton's method.
Here's the derivation. Starting with the Newton error formula -
$|\alpha - x_{n+1}| = \frac{1}{2}(\alpha - x_n)^2\left|\frac{f''(\epsilon_n)}{f'(x_n)}\right|$
taking $\log_{10}$ of both sides -
$\log_{10}|\alpha - x_{n+1}| = 2\log_{10}|\alpha - x_n| + \log_{10}\left|\frac{f''(\epsilon_n)}{2f'(x_n)}\right| = 2\log_{10}|\alpha - x_n| + b_n$
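As I understand it, the step here just uses $\log_{10}(uv) = \log_{10}u + \log_{10}v$ and $\log_{10}(u^2) = 2\log_{10}u$:

$$\log_{10}|\alpha - x_{n+1}| = \log_{10}\left(|\alpha - x_n|^2\left|\frac{f''(\epsilon_n)}{2f'(x_n)}\right|\right) = 2\log_{10}|\alpha - x_n| + \log_{10}\left|\frac{f''(\epsilon_n)}{2f'(x_n)}\right|,$$

so $b_n = \log_{10}\left|\frac{f''(\epsilon_n)}{2f'(x_n)}\right|$.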
It then states that $d_n = \log_{10}|\alpha - x_n|$ can be interpreted as the number of correct digits in the approximation (so long as the error is less than 1), and that the middle equation shows that (ignoring the $b_n$ term) this doubles each iteration.
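For instance, if $|\alpha - x_n| = 10^{-4}$, so that $x_n$ agrees with $\alpha$ to about four decimal places, then $d_n = \log_{10}|\alpha - x_n| = -4$, so presumably it is the magnitude of $d_n$ that counts, but I don't see how to make this precise.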
Unfortunately, statements involving logarithms often don't feel naturally intuitive to me, so can someone explain -
- Why $d_n = \log_{10}|\alpha - x_n|$ gives the number of correct decimal digits.
- How the middle equation shows that this doubles on each iteration (see the quick numerical check below).
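Here is a quick numerical check I ran (just a sketch, using the arbitrarily chosen $f(x) = x^2 - 2$, whose positive root is $\alpha = \sqrt{2}$):

```python
# Track log10 of the Newton error for f(x) = x^2 - 2,
# whose positive root is alpha = sqrt(2). (f chosen arbitrarily.)
import math

def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

alpha = math.sqrt(2.0)
x = 1.0  # initial guess

for n in range(6):
    err = abs(alpha - x)
    if err == 0.0:
        break  # converged to machine precision
    print(f"n = {n}:  log10|alpha - x_n| = {math.log10(err):8.3f}")
    x = x - f(x) / fprime(x)  # Newton step
```

The magnitude of $\log_{10}|\alpha - x_n|$ does roughly double from one iteration to the next until it reaches machine precision, which matches the book's claim, but I'd like to see why that follows from the formula.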