
It's well known that you need to take care when writing a function to compute $\log(1+x)$ when $x$ is small. Because of floating-point roundoff, the computed value of $1+x$ can lose most of the significant digits of $x$, which translates into a large relative error in the computed $\log(1+x)$. For this reason, numerical libraries often provide a function log1p() that computes the log of one plus its argument.
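For example, in IEEE double precision (a quick check using CPython's `math` module; the exact printed digits may vary by platform):

```python
import math

x = 1e-15
print(math.log(1 + x))   # ~1.110e-15: roughly 11% relative error
print(math.log1p(x))     # ~1.000e-15: correct to full precision
```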

However, I was recently puzzled by this implementation of log1p (commented source):

```python
from math import log

def log1p(x):
    y = 1 + x        # y loses the low-order digits of x to rounding
    z = y - 1        # recovers exactly the part of x that made it into y
    if z == 0:       # y rounded to exactly 1, so log(1+x) is x to working precision
        return x
    else:
        return x * (log(y) / z)
```

Why is `x * (log(y) / z)` a better approximation to $\log(1+x)$ than simply `log(y)`?
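For what it's worth, the correction visibly matters (a quick check with the function above, assuming CPython's `math` module):

```python
import math

x = 1e-15
y = 1 + x
print(math.log(y))     # ~1.110e-15: inherits the rounding error in y
print(log1p(x))        # ~1.000e-15: the implementation above
print(math.log1p(x))   # ~1.000e-15: agrees with the library routine
```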

  • [Here](http://math.stackexchange.com/q/172190) is a related question. (2012-08-08)

1 Answer


The answer involves a subtlety of floating point numbers. I'll use decimal floating point in this answer, but it applies equally well to binary floating point.

Consider decimal floating-point arithmetic with a five-digit significand (one digit before the decimal point and four after), and let

$x = 0.0001234$

Then under floating point addition $\oplus$, we have

$y = 1 \oplus x = 1.0001$ and $z = y \ominus 1 = 0.0001000$

If we now denote the part of $x$ lost to rounding by $s = 0.0000234$, and the surviving part by $\bar{x} = 0.0001000$, then we can write

$x = \bar{x} + s, \qquad y = 1 + \bar{x}, \qquad z = \bar{x}$
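These rounded values are easy to reproduce with Python's `decimal` module (a minimal sketch of my own, using a five-digit context to mimic the arithmetic above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5      # five significant digits, as above

x = Decimal("0.0001234")
y = Decimal(1) + x         # rounds to 1.0001: the trailing ...234 is lost
z = y - Decimal(1)         # 0.0001 exactly, i.e. x-bar
print(y, z)                # 1.0001 0.0001
```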

Now, the exact value of $\log(1+x)$ is

$\log(1+x) = \log(1+\bar{x}+s) = \bar{x}+s + O(\bar{x}^2) \approx 0.0001234$

If we compute $\log(y)$ then we get

$\log(y) = \log(1+\bar{x}) = \bar{x} + O(\bar{x}^2) \approx 0.0001000$

On the other hand, if we compute $x \times (\log(y) / z)$ then, since the computed quotient $\log(1+\bar{x}) \div \bar{x}$ equals $1$ up to a relative error of order $\bar{x}$, we get

$(\bar{x}+s) \otimes (\log(1+ \bar{x}) \div \bar{x}) \approx (\bar{x}+s) \otimes (\bar{x} \div \bar{x}) = \bar{x}+s = 0.0001234$

so we keep the digits of precision that would have been lost without this correction.
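Running both computations in the same five-digit `decimal` context (again just a sketch) makes the difference concrete:

```python
from decimal import Decimal, getcontext

getcontext().prec = 5

x = Decimal("0.0001234")
y = Decimal(1) + x              # 1.0001
z = y - Decimal(1)              # 0.0001

naive = y.ln()                  # log(y)           -> 0.000099995 (~19% relative error)
corrected = x * (y.ln() / z)    # x * (log(y) / z) -> 0.00012339
print(naive, corrected)
```

In actual five-digit arithmetic the quotient $\log(y) \div z$ comes out as $0.99995$ rather than exactly $1$, so the product is $0.00012339$, which is the true value $\log(1+x) = 0.000123392\ldots$ correctly rounded.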

  • For a brief moment I was contemplating the wrong notion that simply returning $x$ in the `else` block would suffice, until I reminded myself that the `else` block has to work *not only* for $x$ that is small. (2018-06-03)