
The question is the following:

Is there any proof that the Taylor series of an analytic function is the series that converges fastest to that function?

The motivation for this question comes from numerically computing $\exp(x)$ to arbitrary precision. Suppose one can only calculate it using simple multiplications, divisions, additions and subtractions.

One approach would be to calculate the Taylor series centered at a particular known value (for instance, for $\exp$, centered at $0$), and stop when the next term of the series is smaller than the desired precision. I.e., considering

$y_n = \sum_{i=0}^n \frac{x^i}{i!}$

we can write the error of the approximation $y_n$ as

$\epsilon_n = |y_n - e^x|\simeq \frac{|x|^{n+1}}{(n+1)!}$
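
For concreteness, here is a minimal sketch of that scheme in Python; the tolerance parameter `tol` is my own name, not part of the question, and each term is built from the previous one using only multiplication and division:

```python
def exp_taylor(x, tol=1e-12):
    # Sum the Taylor series of exp at 0, stopping once the next
    # term falls below tol (the stopping rule described above).
    term = 1.0    # x^0 / 0!
    total = 0.0
    n = 0
    while abs(term) > tol:
        total += term
        n += 1
        term = term * x / n    # turns x^(n-1)/(n-1)! into x^n/n!
    return total, n            # approximation and number of terms summed

print(exp_taylor(1.0))   # roughly (2.718281828459045, 15)
```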

It is not obvious to me that the Taylor series is the fastest way of approaching $\exp(x)$, in the sense of minimizing the $n$ required to achieve a given precision.

I think the problem can also be stated in the following way: among all series that converge to $e^x$, which converges fastest, in the sense that it requires the minimum number of terms (and only requires $+,-,\cdot,/$)?

Generically, I would like to extend these results to less trivial functions, like $\cos, \arcsin, \log$, etc. So, first I would like to understand which series (or other tools, like the Padé approximants that Cocopuffs pointed out) I should use...
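
As a quick (and by no means conclusive) illustration of the Padé alternative, the sketch below compares the degree-4 Taylor polynomial of $\exp$ with the well-known $[2/2]$ Padé approximant $\frac{1 + x/2 + x^2/12}{1 - x/2 + x^2/12}$, which spends a comparable number of coefficients plus one division:

```python
import math

def taylor4(x):
    # degree-4 Taylor polynomial of exp at 0
    return 1 + x + x*x/2 + x**3/6 + x**4/24

def pade22(x):
    # [2/2] Pade approximant of exp at 0
    return (1 + x/2 + x*x/12) / (1 - x/2 + x*x/12)

for x in (0.5, 1.0, 2.0):
    exact = math.exp(x)
    print(x, abs(taylor4(x) - exact), abs(pade22(x) - exact))
```

For small $x$ the Padé error comes out a few times smaller here at the same coefficient count, though a single comparison like this says nothing about which approximation is optimal in general.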

  • Hmm, I'm looking for the same and the lack of answers here is disconcerting. By the way, "stop when the next term of the series is smaller than the desired precision" is not likely to work. In a monotonic sequence, the infinitely many small terms can add up to any error; you could be off by infinity. For an alternating sequence, the error is likely to be equal to the last term, which is still something of a worst case. (2014-07-22)
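
The monotone-series caveat in that comment can be seen with the harmonic series; this example is my own choice rather than anything from the thread. Its next term falls below any tolerance, yet its tail diverges, so a small next term bounds nothing by itself:

```python
# Truncate sum(1/i) once the next term drops below a tolerance.
tol = 1e-3
total, i = 0.0, 1
while 1.0 / i > tol:
    total += 1.0 / i
    i += 1
print(total)   # about 7.48 after 999 terms, but the full series diverges
```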

0 Answers