
Given some crude approximation of $\exp(x)$, how can I improve its precision?

I could compute $\exp(x)$ from scratch with, for example, a Taylor series, but it is not clear to me how to take advantage of the approximation I already have.

  • exp(x) is computed via Taylor series in computers, as far as I know. The longer the truncated sum is, the closer you get, and you control the error with Taylor inequalities. (2017-02-20)
  • @marmouset: This is almost surely wrong, as the Taylor series does not provide a uniform error. Of course, polynomials with uniform error over some interval containing $0$ will have coefficients that are close to the Taylor coefficients. But especially for the exponential function, some symmetric Padé approximation or a close perturbation with uniform error will be more natural. (2017-02-20)
  • I assume the use case involves a full-range, low-precision approximation provided by hardware (e.g. GPU, AVX) that you want to refine to a desired target precision. This is possible, but may not be practical, as it requires a sufficiently accurate logarithm-type operation. From my notes (sorry, no reference, and not tested): $r_{n+1}=\frac{1}{2} r_{n} \left(1+{(a+1-\log(r_{n}))}^{2}\right)$ to approximate $\exp(a)$, where $r_{0}$ is the initial estimate. (2017-02-21)

2 Answers


You can use the exponential identities $$ \exp(x)=\exp(x/n)^n, $$ especially for dyadic $n$, and $$ \exp(x)=2^n·\exp(x-n\ln(2)) $$ to reduce the size of the argument and thus hopefully get better results.

  • Thanks! I'm vaguely aware of the fact that reducing the argument size improves Taylor series convergence. My question is slightly different, though: I would like to skip the first few steps, as I already have an approximation. In other words, assume that the argument is small enough and that I have an approximation which I would like to refine. (2017-02-20)
  • That will not work. The only way to get closer to `exp(x)` from `x` and `y=exp0(x)` is to throw away `y` and compute a better approximation from `x`. There is no equation that can be used for a Newton or secant method. I assumed that the coarse approximation is accessible for any argument? (2017-02-20)
  • Yes, for any argument. Isn't there some other method besides Newton? Continued fractions, CORDIC, Padé approximants, ...? (2017-02-20)
  • Yes, but they all compute a different approximation, not one based on the already existing one. (2017-02-20)
  • True, and my question is whether there is another method, besides the ones listed, which is able to take advantage of a crude approximation. Are you saying that it is impossible for such a method to exist? (2017-02-20)
  • No, I am saying that `exp0(x/2^n)^(2^n)` is a better approximation if `n` is chosen depending on the size of `x`. The relative error is then `2^n*err(x/2^n)`, which, if the error of `exp0` is `err(x)=C*x^m+...`, results in the smaller error `C*x^m/2^(n*(m-1))+...`. (2017-02-20)

This can be done if, for some $r>0$, you have an approximation to $\exp(x)$ in some functional form that is valid for all $x\in [0,r]$. You can then use the fact that $y(x) = \exp(x)$ is a solution of the differential equation:

$$y' = y$$

with $y(0) = 1$. It then follows that:

$$y(x) = 1 + \int_0^{x} y(t) dt$$

Suppose that instead of $y(x) = \exp(x)$ we substitute some arbitrary function $f(x)$ that satisfies the boundary condition $f(0) = 1$ on the right-hand side of the equation. Then the left-hand side won't be equal to $f(x)$; it will be some other function $g(x)$. It can be shown that, in a suitable sense, $g(x)$ is a better approximation to $\exp(x)$ than $f(x)$, which means that iterating this integral equation (Picard iteration) leads to a sequence of functions that converges to $\exp(x)$. This is true for general linear differential equations, and it is used to prove that solutions satisfying the boundary conditions are unique.
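For polynomial starting functions the integral in each Picard step can be computed exactly on the coefficients, so the iteration is easy to carry out. A minimal Python sketch (the polynomial representation and function names are my own choices, not from the answer):

```python
import math

def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1: given a polynomial
    f as a coefficient list (lowest degree first), return
    g(x) = 1 + integral from 0 to x of f(t) dt, as a coefficient list."""
    return [1.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

def poly_eval(coeffs, x):
    """Evaluate a polynomial by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Start from the crude approximation f(x) = 1 and iterate; each step
# reproduces one more term of the Taylor series of exp(x).
f = [1.0]
for _ in range(12):
    f = picard_step(f)
```

Starting from `[1.0]` the iterates are exactly the Taylor partial sums of $\exp$, but the iteration can just as well be started from whatever crude polynomial approximation you already have, which is the point of the answer.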