
There are many well-known methods for efficiently computing $\pi$ numerically, such as the Chudnovsky algorithm or the Gauss-Legendre algorithm. I was wondering what the best method for computing $e$ is.

I have used the Taylor series expansion of $e^x$ at $x=1$, which converges very quickly and does not cost much to compute:

$$e=\sum_{n=0}^{\infty}\frac{1}{n!} = 1+1+\frac{1}{2}+\frac{1}{6}+\cdots$$

Here is a table of some basic values:

n      Estimation            Error (e - sum)
1      1.0                   1.718281828459045
5      2.708333333333333     0.009948495125712054
10     2.7182815255731922    3.0288585284310443E-7
100    2.7182818284590455    -4.440892098500626E-16
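
For concreteness, a double-precision partial-sum loop along the following lines reproduces this kind of table (a minimal sketch, with the running term $1/n!$ updated by a single division per step):

    #include <stdio.h>
    #include <math.h>

    /* Partial sums of e = sum_{k >= 0} 1/k! in double precision.
       Here n counts how many terms have been added, as in the table above. */
    int main(void) {
        const double e_ref = exp(1.0);   /* double-precision reference value of e */
        double sum = 0.0, term = 1.0;    /* term = 1/k! for the next k to be added */
        for (int n = 1; n <= 100; n++) {
            sum += term;                 /* add 1/(n-1)! */
            term /= n;                   /* next term: 1/n! */
            if (n == 1 || n == 5 || n == 10 || n == 100)
                printf("%-5d %.16g   %g\n", n, sum, e_ref - sum);
        }
        return 0;
    }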

I am curious to know if there are even more efficient methods to compute $e$. Keep in mind that computational cost and convergence speed are the priorities.

  • See Finch's *Mathematical Constants* and http://numbers.computation.free.fr/Constants/E/e.html (2012-04-12)
  • $e$ has a very simple continued fraction, and there are good bounds on the error made by stopping at the $n$th partial quotient. Another (possible) advantage is that all calculations can be carried out in exact rational arithmetic, with no need for floating point (see the sketch after these comments). (2012-04-12)
  • Your table is highly skewed by whatever errors appear in the computation: a very basic use of the error term in the Taylor polynomial shows that for $n=100$ the approximation is better than $3/101!$, and this number is less than $10^{-159}$ (i.e. $159$ correct digits); a short derivation of this bound is included after these comments. Already $n=20$ gives you $20$ good digits, which is more than what your usual computer can handle. (2012-04-12)
  • @MartinArgerami: Any modern computer can easily compute hundreds of digits of $e$. This is easy with bignum libraries, which do not have the constraints the usual data types do (for example, integers are often limited to $2^{32}$ or $2^{64}$). See [here](http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic). (2012-04-12)
  • @Martin has hit the nail on the head. Your last error term is on the order of [machine epsilon](https://en.wikipedia.org/wiki/Machine_epsilon) for double-precision floating-point. If you check the values between 10 and 100, you should find that the error stops decreasing because the successive terms are too small to be represented in the usual `double` data type. (2012-04-12)
  • @m.k.: as you say, that's precisely when one uses appropriate software. The OP's table shows that he didn't use such a thing. (2012-04-12)
  • Also, one can hit a point of diminishing returns when one keeps summing past the supposed point of convergence, ending up with a few eroded digits in the last place... (2012-04-14)

1 Answer


A tiny C program from Xavier Gourdon that computes $9000$ decimal digits of $e$ (you may change $N$ for more digits):

    #include <stdio.h>

    int main(void) {
        /* a[1..N-1] holds e in "factorial base": e = 2 + 1/2! + 1/3! + ...,
           so a[1] = 2 and every other digit is 1.  a[0] and x must start at 0;
           the original one-liner left them uninitialized. */
        int N = 9009, n = N, a[9009] = {0}, x = 0;
        while (--n)
            a[n] = 1 + 1 / n;
        /* Spigot loop: each outer pass multiplies the fractional part by 10,
           propagates the carries from the high-index end down, and prints the
           integer part that falls out (the first pass prints "27", i.e. "2."
           plus the first decimal).  One array slot is consumed per pass, hence
           9009 slots for roughly 9000 digits. */
        for (; N > 9; printf("%d", x))
            for (n = N--; --n; a[n] = x % n, x = 10 * a[n - 1] + x / n)
                ;   /* all the work happens in the loop header */
        return 0;
    }