5

I have some trouble solving a problem in my textbook:

Given the following function: $f(x) = x^{-1} - R$. Assume $R > 0$. Write a short algorithm to find $1/R$ by Newton's method applied to $f$. Do not use division or exponentiation in the algorithm. For positive $R$, what starting points are suitable?

OK, so I've managed to solve the first part of the problem. Using Newton's method and rearranging terms I have gotten:

$x_{n+1} = x_{n}(2 - Rx_{n})$

This is correct according to the book, and I can just use my standard algorithm for Newton's method with this expression. So far so good.
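As a sanity check, here is a minimal Python sketch of that division-free iteration (the function name, starting point, and step count are my own choices, not from the book):

```python
def reciprocal(R, x0, steps=30):
    """Approximate 1/R with Newton's method on f(x) = 1/x - R.

    The update x <- x * (2 - R * x) uses only multiplication
    and subtraction, as the problem requires.
    """
    x = x0
    for _ in range(steps):
        x = x * (2 - R * x)
    return x

print(reciprocal(4.0, 0.1))  # approaches 0.25 = 1/4
```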

The problem is the second part, where I am supposed to find suitable starting points. I figured that if $x_{1} = -x_{0}$, then the iterations cycle. So then I get:

$\begin{align*} -x_{0} &= x_{0}(2 - Rx_{0})\\ -x_{0} &= 2x_{0} - Rx_{0}^2\\ -3x_{0} &= - Rx_{0}^2\\ -3 &= -Rx_{0}\\ x_{0} &= 3/R \end{align*}$

Thus my answer would be that we must have $x_{0} < 3/R$. My book, however, says:

If $R > 1$ then the starting point $x_{0}$ should be close to $0$ but smaller than $1/R$.

So what is wrong with my reasoning here? If anyone can help me out, I would really appreciate it!

  • 0
    See the discussion [here](http://books.google.com/books?hl=en&id=Mp8-z5mHptcC&pg=PA104) as well. (2012-08-05)

4 Answers

2

The function $f(x) = x^{-1}$ is monotonically decreasing for $x > 0$. Newton's method works by following the tangent line of the function at the current point down to the $x$-axis, taking the zero-crossing of that tangent line as the new value, and repeating the process there.

What happens if you pick a large initial value for $x_0$, say $x_0 > 1/R$? Draw that tangent line and follow it to the $x$-axis: its zero-crossing may not lie at any point $x > 0$. That puts you in a whole different region of the function $1/x$, and you may never converge to your root.

  • 0
    In other words: how can I prove that $x_0$ must be less than $1/R$? (2012-08-04)
5

I presume the restriction on division applies to choosing a starting point as well?

The function $f$ is strictly decreasing on the domain $(0,\infty)$, $\lim_{x \downarrow 0} f(x) = \infty$, $\lim_{x\to \infty} f(x) = -R$, hence there exists a unique $x^*$ such that $f(x^*) = 0$. The picture below shows the behavior for $R=10$.

(Figure: graph of $f(x) = x^{-1} - R$ for $R = 10$, with its single zero at $x^* = 1/10$.)

Furthermore, $f$ is strictly convex on its domain, which means, in this case, that if an iterate satisfies $0 < x_n < x^*$, then it is easy to show that $x_n < x_{n+1} \leq x^*$. Furthermore, if $x_n>x^*$, and if $x_{n+1}$ lies in the domain of $f$, then $x_{n+1} \leq x^*$.

So, the only real restriction on a starting point is to ensure that $x_1 \in (0, \infty)$ so that subsequent iterations are well defined. In this case, you have $x_1 = x_0(2-R x_0)$, giving $x_1>0$ iff $x_0 < \frac{2}{R}$.

So the answer is that Newton's method will converge iff you start with $0 < x_0 < \frac{2}{R}$, or, since you are not allowed division, choose $x_0>0$ such that $R x_0 < 2$.

It is instructive to look at the Newton iteration itself. In this case, $\phi(x) = x(2-Rx)$ defines the iteration scheme (i.e., $x_{n+1} = \phi(x_n)$). We know the solution is a fixed point of $\phi$, that is $\phi(\frac{1}{R}) = \frac{1}{R}$, which gives $\phi(x) - \phi(\frac{1}{R}) = - R (x-\frac{1}{R})^2$. So we have $|\phi(x) - \phi(\frac{1}{R})| = (R |x-\frac{1}{R}|) |x-\frac{1}{R}|$. This is a contraction whenever $x \in (0, \frac{2}{R})$. However, as $x$ gets closer to $\frac{1}{R}$, the 'error' term (i.e., the distance between $x_n$ and the solution $\frac{1}{R}$) drops with the square of the previous error (ignoring the factor $R$ for simplicity), which gives Newton's method its so-called quadratic convergence rate.
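The squaring of the error is easy to observe numerically. In this Python sketch (with an arbitrary $R$ and a starting point inside $(0, \frac{2}{R})$, both my own choices), each error is exactly $R$ times the square of the previous one:

```python
# Track the error e_n = |x_n - 1/R| through the iteration x <- x(2 - Rx).
R, x = 10.0, 0.08
errors = []
for _ in range(5):
    errors.append(abs(x - 1 / R))
    x = x * (2 - R * x)

# The identity phi(x) - 1/R = -R (x - 1/R)^2 gives e_{n+1} = R * e_n^2.
for e_prev, e_next in zip(errors, errors[1:]):
    print(e_next, R * e_prev**2)
```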

  • 0
    Great! Thanks a lot. This was an excellent explanation! Much appreciated! (2012-08-04)
4

The points you can converge to are where

$x=x(2-Rx) \implies x=0\ \text{ or }\ x=\frac 1 R$

What you want are points where applying the function will get you closer to $\frac 1 R$ than you were previously. So what you want is where:

$|f(x+\epsilon)-f(x)|<|\epsilon|$

So if you move $\epsilon$ away from $x$ (here $x=\frac 1 R$), applying the function takes you closer. For this to work in a stable way, it should work for arbitrarily small $\epsilon$. Applying this to your recurrence relation gives:

$f(x)=x(2-Rx)$

$f(x+\epsilon)-f(x) = (x+\epsilon)(2-Rx-R\epsilon)-x(2-Rx) = 2\epsilon-2Rx\epsilon-R\epsilon^2$

Plugging in $x=\frac 1 R$, the right-hand side becomes $2\epsilon - 2\epsilon - R\epsilon^2 = -R\epsilon^2$, so the condition $|f(\frac 1 R+\epsilon)-f(\frac 1 R)|<|\epsilon|$ reduces to $R\epsilon^2 < |\epsilon|$, i.e. $|\epsilon|<\frac 1 R$.

So points within $(0,\frac 2 R)$ will converge to $\frac 1 R$ as intended. From what I can tell (I checked a few values of $R$ on a computer) this seems to work.
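A computer check along those lines might look like this Python sketch (the particular values of $R$ and the test points are my own, not from the answer):

```python
def newton_recip(R, x0, steps=60):
    """Iterate x <- x(2 - Rx); converges to 1/R for x0 in (0, 2/R)."""
    x = x0
    for _ in range(steps):
        x = x * (2 - R * x)
    return x

for R in (0.5, 3.0, 10.0):
    inside = 1.5 / R            # lies in (0, 2/R)
    outside = 2.5 / R           # lies beyond 2/R
    assert abs(newton_recip(R, inside) - 1.0 / R) < 1e-12
    assert newton_recip(R, outside) < 0   # first step goes negative, then diverges
```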

  • 0
    Excellent proof! Thank you very much. Help has been really outstanding :). (2012-08-04)
3

It also works for $x_0 > 1/R$, but perhaps convergence isn't the fastest. Look at what happens for various $x_0$ in Newton's iteration, as Ed suggested.

  1. For $x_0 < 0$ the iteration diverges. Of course: $x_0(2-Rx_0) < x_0$ whenever $x_0 < 0$.
  2. For $x_n > 2/R$ you get $x_{n+1} < 0$ and the iteration diverges (at $x_n = 2/R$ exactly, every further iterate is $0$).

(The tangent always crosses the $x$-axis, since $f'(x) = -x^{-2}$ never vanishes; the slope only approaches $0$ as $x \rightarrow \infty$.)

Now, insert $x_0 = 1/R + \Delta x$ with $\Delta x > 0$: $$\left(\frac 1 R + \Delta x\right)\left(2 - R\left(\frac 1 R+\Delta x\right)\right) = \frac 1 R (1+R\Delta x)(1-R\Delta x) = \frac 1 R - R\,\Delta x^2$$ Notice that you get the same result for $x_0 = 1/R - \Delta x$. This proves that the speed of convergence is the same for $1/R + \Delta x$ and $1/R - \Delta x$, because the resulting $x_1$ is the same. Also, inserting $|\Delta x| > 1/R$ gives you divergence.
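That symmetry is easy to confirm numerically; in this Python sketch the values of $R$ and $\Delta x$ are arbitrary choices of mine:

```python
# x0 = 1/R + dx and x0 = 1/R - dx produce the same x1 = 1/R - R*dx^2.
R, dx = 7.0, 0.05

def step(x):
    return x * (2 - R * x)

plus, minus = step(1 / R + dx), step(1 / R - dx)
print(plus, minus)  # equal up to rounding
```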

Maybe there are reasons why $x_0 < 1/R$ is preferable (for instance speed of convergence). You can experiment with the following code in Mathematica or Wolfram Alpha (insert numbers in place of R, x_0, max_n):

NestList[N[# (2 - R #)] &, x_0, max_n] 
  • 0
    I'd use `FixedPointList[]` instead of `NestList[]` myself... (2012-08-05)