
Improving $$x_{n+1}=g(x_n)=x_n-\frac{f(x_n)}{\varphi(x_n)} $$ we create Aitken's method

$$ x_n*=\frac{x_{n+2}x_n-x_{n+1}}{x_{n+2}+x_n-2x_{n+1}}$$

But why does it converge faster than the initial method?

  • Let $\alpha$ be the solution of $f(x) = 0$. The transformation applied to $x_n$ is such that if $\lim_{n\rightarrow+\infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|} = \mu \neq 0$, then $\lim_{n\rightarrow+\infty}\frac{|x_n^* - \alpha|}{|x_n - \alpha|} = 0$, which implies that $x_n^*$ converges to $\alpha$ faster than the original sequence. (2017-02-10)
  • The order of convergence of iterative methods for solving equations is expressed in terms of $\lim_{n\rightarrow\infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^p}$, so the fact that for Aitken the limit with $p = 1$ is $0$ necessarily implies that $p>1$, so it converges faster. (2017-02-10)
  • I don't quite get it. Why is Aitken's limit 0? (2017-02-10)
  • I think you may be missing a square in the numerator of the Aitken formula. (2017-02-10)
  • @JohnKatsantas I've added an extended comment as an answer. In any case, the Aitken method produces a faster sequence whenever the original one converges, and under some hypotheses it can even transform a non-converging sequence into a converging one. Whether the Aitken method is useful depends on the specific application. (2017-02-10)

1 Answer


Your specific equation (in which I think you are missing a square) is derived from

$$z_{n+1} = z_n - \frac{\left(\Delta z_n\right)^2}{\Delta^2 z_n} = G(z_n) \;\Rightarrow\; z_n - G(z_n) = \frac{\left(\Delta z_n\right)^2}{\Delta^2 z_n},$$

where $\Delta z_n = g(z_n) - z_n$ and $\Delta^2 z_n = g(g(z_n)) - 2g(z_n) + z_n$.

In terms of the original iteration function $g(z)$ whose fixed point you want, you can write

$$z - G(z) = \frac{\left(g(z) - z\right)^2}{g(g(z)) - 2g(z) + z}.$$

From the study of the difference $z - G(z)$ you can derive what you are looking for. There are two results:

  1. Let $g(x)$ be in $C^1[a,b]$; if $g'(\alpha) \neq 1$, the function $G(z)$, extended by setting $G(\alpha) = \alpha$, is continuous. (The proof follows from a straightforward application of L'Hospital's rule.)
  2. If $g(x) \in C^2[a,b]$, $g'(\alpha) \neq 0$ and $g'(\alpha) \neq 1$, there is a neighborhood $U$ of $\alpha$ such that for each $z_0 \in U$ the sequence $z_i$ converges to $\alpha$ with order at least 2. (The proof follows from a straightforward Taylor expansion at $\alpha$.)
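
Result 2 is exactly what iterating $z_{n+1} = G(z_n)$ (Steffensen's method) exploits. A small sketch, again assuming the example map $g(x) = \cos x$ (my choice, not from the answer), shows the error dropping quadratically per step:

```python
import math

def g(z):
    return math.cos(z)

def steffensen_step(z):
    """One step of the Aitken-accelerated map G(z) = z - (g(z)-z)^2 / (g(g(z)) - 2 g(z) + z)."""
    gz = g(z)
    ggz = g(gz)
    denom = ggz - 2.0 * gz + z
    if denom == 0.0:  # already converged to machine precision
        return z
    return z - (gz - z) ** 2 / denom

alpha = 0.7390851332151607  # fixed point of cos(x) = x
z = 0.5
errors = []
for _ in range(6):
    z = steffensen_step(z)
    errors.append(abs(z - alpha))
print(errors)
```

Each step costs two evaluations of $g$, but the number of correct digits roughly doubles per step, while the plain iteration $z_{n+1} = g(z_n)$ gains digits only linearly.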

Both proofs, at least as I know them, are mostly computational but not difficult.

The order $p$ of convergence is expressed in terms of

$$ \lim_{n\rightarrow + \infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^p}, $$

namely $p$ is the exponent for which this limit is finite and nonzero.
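
As a sanity check, $p$ can be estimated from successive errors via $p \approx \log(e_{n+1}/e_n)\,/\,\log(e_n/e_{n-1})$; this ratio estimator is a standard trick, not part of the answer above. For a linearly convergent iteration such as $x_{n+1} = \cos x_n$ (again my example map) the estimate tends to 1:

```python
import math

def g(x):
    return math.cos(x)

alpha = 0.7390851332151607  # fixed point of cos(x) = x
xs = [0.5]
for _ in range(30):
    xs.append(g(xs[-1]))
e = [abs(x - alpha) for x in xs]  # error sequence e_n = |x_n - alpha|

# p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1}); for a linear method this tends to 1
for n in (10, 15, 20):
    p = math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
    print(n, p)
```

Running the same estimator on the Aitken-accelerated sequence would give values above 1, which is the quantitative meaning of "converges faster" in the comments.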