12

I'm writing a program that needs to evaluate the function $f(x) = \frac{1 - e^{-ux}}{u}$ often with small values of $u$ (i.e. $u \ll x$). In the limit $u \to 0$ we have $f(x) = x$ by L'Hôpital's rule, so the function is well-behaved and non-singular in this limit, but evaluating it in the straightforward way in floating-point arithmetic causes problems when $u$ is small: the exponential is then quite close to 1, and when we subtract it from 1 the leading digits cancel, leaving a difference with far fewer significant digits. Is there a way to rewrite this function so it can be computed accurately both in the small-$u$ limit and for larger values of $u$ (i.e. $u \approx x$)?

Of course, I could add a test to my program that switches to computing just $f(x) = x$, or perhaps $f(x) = x - ux^2/2$ (second-order approximation) when $u$ is smaller than some threshold. But I'm curious to see if there's a way to do this without introducing a test.
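For concreteness, the kind of test I'd like to avoid looks something like this (a rough sketch in C; the cutoff of 1e-8 is an arbitrary illustrative value, and I test the product $ux$ rather than $u$ alone since that is the quantity that is actually small):

#include <math.h>

/* Rough sketch of the threshold approach: below the (arbitrary) cutoff the
   omitted series terms are far smaller than double precision can resolve. */
double f_with_test(double u, double x)
{
    if (fabs(u * x) < 1e-8)
        return x - u * x * x / 2.0;      /* second-order approximation */
    return (1.0 - exp(-u * x)) / u;      /* straightforward formula */
}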

5 Answers

9

We once had this (or at least something very similar) in my first numerical analysis class:

There it was proposed to calculate $f$ in terms of $y = \exp(-ux)$ instead, i.e. first compute $y$ and then evaluate $f(x) = \frac{1-e^{-ux}}{u} = -x\,\frac{1-y}{\log(y)}$.

I have never understood why this should have any effect on the calculation, however... (it can only matter if we make some assumptions about how the values are actually computed inside the machine).

Edit: J.M. gave a very nice link in the comments, where it is explained why the above works!

If you run some numerical tests, let me know what you get (I'd be interested in whether this actually does what they claimed it would)! ;)
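For reference, a minimal sketch of that reformulation in C (assuming IEEE double precision; the special-casing of $y = 1$ is an addition here, since both $1-y$ and $\log(y)$ are exactly zero in that case):

#include <math.h>

/* Compute y = exp(-u*x) once, then use f = -x*(1 - y)/log(y).
   When u*x is so small that y rounds to exactly 1 (including u == 0),
   return the limiting value x directly. */
double f_via_y(double u, double x)
{
    double y = exp(-u * x);
    if (y == 1.0)
        return x;
    return -x * (1.0 - y) / log(y);
}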

  • 0
@user, look at page 22 of the latest edition. (2018-09-24)
6

The reason you get error here is the finite machine precision of floating-point arithmetic.

What is happening is not so much that $1-e^{-ux}$ has issues with machine precision, but rather that you are also dividing by something that is quite small.

Essentially, if $ux$ is less than approximately $2.2 \times 10^{-16}$ (the machine $\varepsilon$ for double-precision arithmetic), then $e^{-ux}$ will evaluate to exactly $1$, and the quantity $1-e^{-ux}$ returns exactly $0$. However, the rounding of the exponential itself is not so much of an issue... $1$ times something is very nearly the same thing as $0.99999999999999\ldots$ times something.

The problem is then the division by a very small number $u$: if $u < 2.2\times 10^{-16}$, then $1/u$ is very large, and the numerator is unable to "cancel" this effect. Furthermore, if $1/u$ exceeds the overflow limit for doubles (roughly $1.8\times 10^{308}$), you're out of luck anyway.

One thing that you can do is compute $\log f(x)$ first: $\log f(x) = \log(1-e^{-ux})-\log(u)$, and then exponentiate. You will still have the machine-epsilon issue (i.e. small values of $u$ still spoil the computed $1-e^{-ux}$ inside the first log, but this is going to be nearly true for values in a neighborhood anyway), but your denominator will be more manageable.

Edit: Of course, one other method is to use the trusty ol' Taylor expansion of $e^{-ux}$. Then you have $f(x) = \sum_{i \ge 1} \frac{(-1)^{i+1}(ux)^i}{i!\,u} = x - \frac{ux^2}{2} + \frac{u^2x^3}{6} - \cdots$. The factor of $u$ in each numerator cancels the $u$ in the denominator analytically, so no division by a small number remains. Compute each term in the sum independently, store them, and then add.
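A rough sketch of this in C (here the terms are generated by a recurrence rather than computed and stored independently, and the fixed cutoff of eight terms is only illustrative; it is adequate when $|ux|$ is well below 1):

/* Series sketch: f(x) = x - u*x^2/2! + u^2*x^3/3! - ...
   Each term follows from the previous one via term *= -u*x/i,
   so the division by the small u never appears explicitly. */
double f_series(double u, double x)
{
    double term = x;                     /* i = 1 term */
    double sum  = x;
    for (int i = 2; i <= 8; ++i) {
        term *= -u * x / i;              /* term_i = term_{i-1} * (-u*x) / i */
        sum  += term;
    }
    return sum;
}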

  • 0
One could also consider using Padé approximants instead of Taylor expansions. (2012-07-18)
3

As J.M. pointed out in a comment to another answer, some platforms and programming languages have a primitive that computes $2^x-1$ or $e^x-1$ in a single step without precision loss for small $x$. For example, C99 has the expm1() function, which ought to map to the F2XM1 instruction on the x86 platform (via a scaling of $x$ by $\frac{1}{\log 2}$).
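A minimal sketch of the C99 version (the guard for $u = 0$ is an addition here, since the expression is otherwise $0/0$ in that exact case):

#include <math.h>

/* expm1(t) computes e^t - 1 accurately for small t,
   so 1 - e^{-u*x} is simply -expm1(-u*x). */
double f_expm1(double u, double x)
{
    if (u == 0.0)                        /* limit u -> 0 gives f(x) = x */
        return x;
    return -expm1(-u * x) / u;
}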

  • 0
I know using a Taylor series is one way to avoid the cancellation error in $e^x - 1$. Could you tell me if there is a way of rewriting the function that avoids the error and also works for larger values of $x$? (2017-10-03)
2

Your problem is that there is no library function that preserves the precision of the argument to the exponential function in its result (relative to one) when the argument is in the neighborhood of zero. But you can write such a function yourself, albeit crudely:

public static double ExpMinusOne(double x) {
    // Below the cutoff, use the Taylor series e^x - 1 = x + x^2/2 + x^3/6 + ...;
    // above it, the library exponential no longer suffers serious cancellation.
    return Math.Abs(x) < 1e-5
        ? x + x * x / 2 + x * x * x / 6
        : Math.Exp(x) - 1;
}

which should give you about ten digits of precision in $f(x)$ with IEEE floating point regardless of how small $u$ is. It works by using the normal exponential function when the argument is large enough and using the first four terms of the Taylor series expansion otherwise. Because the series converges rapidly when the argument is small you still get reasonable precision.

  • 0
This is why a number of HP calculators have special functions for $e^x-1$ and $\ln(1+x)$ which give good accuracy for small $x$. (2012-07-18)
1

Still another possibility: if your computing environment of choice supports the hyperbolic sine, then you can also use the formula

$f(x)=\frac{2\exp\left(-\frac{u x}{2}\right)\sinh\left(\frac{u x}{2}\right)}{u}$

Assuming that the hyperbolic sine routine in your computing environment is accurate for very tiny arguments, this form involves no subtraction of nearly equal quantities, and the small factors in the numerator and denominator cancel out nicely.
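A sketch of this in C (assuming, as above, that the library sinh is accurate for tiny arguments; the guard for $u = 0$ is an addition here):

#include <math.h>

/* 1 - e^{-u*x} = 2*exp(-u*x/2)*sinh(u*x/2); a good sinh returns
   t*(1 + O(t^2)) for tiny t, so no cancellation occurs in the numerator. */
double f_sinh(double u, double x)
{
    if (u == 0.0)                        /* limit u -> 0 */
        return x;
    double t = 0.5 * u * x;
    return 2.0 * exp(-t) * sinh(t) / u;
}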