
How can we prove that regula falsi method has linear rate of convergence?

I know how to do so for the secant method but I am unable to derive it for regula falsi.

Any help is much appreciated. Thank you.

  • Assume that the function looks locally like $cx(x+d)$, $c,d>0$, and show that the function value at the mid-point is always negative, thus leaving the right point unchanged. (2017-01-16)
  • @LutzL: can you explain it in detail? (2017-01-16)

2 Answers


Assume first that the function has a root at $x=0$ and looks locally around $x=0$ like $f(x)=cx(x+d)$ with $c,d>0$, and that the bracketing interval $[a,b]$ satisfies $-d<a<0<b$.

Now show that the function value at the false-position point (the intersection of the chord with the $x$-axis) $$ m=\frac{af(b)-bf(a)}{f(b)-f(a)}=\frac{ab·c(b-a)}{c(b-a)(b+a+d)}=\frac{ab}{a+b+d}<0 $$ is always negative (alternatively, show $m=a\frac{b}{b+(a+d)}>a>-d$): $$ f(m)=c·\frac{ab}{a+b+d}·\frac{ab+da+db+d^2}{a+b+d}=c·\frac{ab(a+d)(b+d)}{((a+d)+b)^2}<0, $$ since $a<0<b$ and $a+d>0$. This means that the point $m$ always replaces the left endpoint $a$, while $b$ stays fixed. For sufficiently small $|a|$, $$a_+=m\approx\beta a$$ with $\beta=\frac{b}{b+d}$, which establishes the linear convergence.

As $d$ is determined by the curvature of the function $f$, the convergence speed depends only on $b$: the farther $b$ is from $0$, the closer $\beta$ is to $1$, and thus the slower the convergence. This also suggests that any measure, no matter how crude, that decreases $b$ will substantially increase the speed of convergence.
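A small numeric sketch may make the contraction factor concrete. This is my own illustration in Python (the helper name `regula_falsi_step` is mine, not from the answer): it iterates the false-position update on the model function $f(x)=cx(x+d)$ and checks that the left endpoint contracts by roughly $\beta=b/(b+d)$ per step while the right endpoint never moves.

```python
def regula_falsi_step(f, a, b):
    """Zero of the chord through (a, f(a)) and (b, f(b))."""
    return (a * f(b) - b * f(a)) / (f(b) - f(a))

c, d = 1.0, 1.0
f = lambda x: c * x * (x + d)     # roots at 0 and -d

a, b = -0.5, 2.0                  # bracketing interval with -d < a < 0 < b
beta = b / (b + d)                # predicted asymptotic contraction factor

for _ in range(8):
    m = regula_falsi_step(f, a, b)
    assert f(m) < 0               # chord's zero lands left of the root,
    a = m                         # so it always replaces the left endpoint

print(f"a = {a:.6f}, beta = {beta:.4f}")
```

Each update agrees with the closed form $a_+=ab/(a+b+d)$, and the successive ratios $a_{k+1}/a_k$ approach $\beta=2/3$ for these parameter values.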


To translate this to the more general case, compare the quadratic approximation $$cx(x+d)=cdx+cx^2$$ with the quadratic Taylor polynomial $$f(x^*+x)=f(x^*)+f'(x^*)x+\frac12f''(x^*)x^2+o(x^2)$$ at the root $f(x^*)=0$.

We find $c=\frac12f''(x^*)$ and $cd=f'(x^*)$, which of course only makes sense if both quantities are nonzero. Apply reflections along the $x$ and $y$ axes as needed to make both derivatives, and thus both $c$ and $d$, positive.
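The same stalling behaviour can be observed numerically for a general smooth function with $f'(x^*)\ne0$ and $f''(x^*)\ne0$. The sketch below is my own (the test function $e^x-2$ and all names are my choice, not from the answer): applied to this convex function, regula falsi keeps the right endpoint fixed, and the errors shrink by an asymptotically constant factor, i.e., linearly.

```python
import math

def regula_falsi_step(f, a, b):
    """Zero of the chord through (a, f(a)) and (b, f(b))."""
    return (a * f(b) - b * f(a)) / (f(b) - f(a))

f = lambda x: math.exp(x) - 2.0   # convex, f' > 0, root at ln 2
root = math.log(2.0)

a, b = 0.0, 2.0                   # f(a) < 0 < f(b)
errs = []
for _ in range(25):
    a = regula_falsi_step(f, a, b)   # convexity gives f(m) < 0: left end moves,
    errs.append(root - a)            # the right end b stays fixed throughout

ratios = [errs[k + 1] / errs[k] for k in range(len(errs) - 1)]
print(ratios[-1])                    # settles to a constant in (0, 1)
```

The near-constant tail of `ratios` is the empirical signature of linear convergence, matching the prediction $\beta=b'/(b'+d)$ with $b'$ the distance of the fixed endpoint from the root.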

  • Can we show the rate of convergence via an error equation? (2017-01-16)
  • @LutzL I think we should use an approximation of $f(x)$ as a straight line (a chord) [a linear function of $x$]; the difference between the secant and regula falsi methods is that in regula falsi we also check that the product of the function values is $<0$. (2017-01-16)
  • @BAYMAX: You do that in the method. However, in analyzing the behaviour of the method your function approximation has to be at least one degree higher. (2017-01-16)
  • @LutzL Is it a theorem? If so, kindly tell which one. Can you explain it through the error method? (2017-01-16)
  • This is not yet a theorem, more a general explanation for a widely observable behaviour of the method. One would have to tighten some screws to make a theorem of it. The general claim would be that if $f(x^*)=0$, $f'(x^*)\ne0$ and $f''(x^*)\ne0$, then stalling, i.e., linear convergence in this pattern, will occur at some point. (2017-01-16)

In Comparative Analysis of Convergence of Various Numerical Methods by Robin Kumar and Vipan, the authors derive the convergence rate of some common numerical methods for solving equations.

In the specific case of the regula falsi method, the authors assume the function is convex on an interval $]x_0,x_1[$ containing the root, which implies that one of the endpoints remains fixed. Linear convergence is then derived much as for the secant method.

There are other references that derive the rate of convergence. Related questions on this site and their associated references offer further approaches.