It's well known that you need to take care when writing a function to compute $\log(1+x)$ when $x$ is small. Because of floating-point roundoff, $1+x$ may carry far fewer significant digits of $x$ than $x$ itself does, and that lost precision can translate into large relative error when computing $\log(1+x)$. For this reason, numerical libraries often provide a function log1p() that computes the log of one plus its argument accurately.
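To see the roundoff problem concretely, here is a small illustration (the value 1e-16 is my own choice, picked to be just below half the machine epsilon for doubles, so it vanishes entirely when added to 1):

```python
import math

x = 1e-16

# Adding x to 1 loses x completely: the sum rounds back to 1.0.
print(1.0 + x == 1.0)        # True

# So the naive computation returns log(1.0), which is exactly 0,
# even though the true value of log(1+x) is approximately x.
print(math.log(1.0 + x))     # 0.0
print(math.log1p(x))         # close to 1e-16
```

The naive result has 100% relative error here, while log1p returns an answer accurate to machine precision.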
However, I was recently puzzled by this implementation of log1p (commented source):
```python
from math import log

def log1p(x):
    y = 1 + x
    z = y - 1
    if z == 0:
        return x
    else:
        return x * (log(y) / z)
```
Why is x * (log(y) / z) a better approximation to $\log(1+x)$ than simply log(y)?
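Before getting to the explanation, it's worth checking empirically that the correction really helps. The sketch below (my own test harness, not from the original post) compares the naive computation against the corrected one for a small $x$ where $1+x$ does not round all the way to 1, using the library's math.log1p as the reference:

```python
import math

def log1p_corrected(x):
    # The implementation in question: correct log(y) by the factor x/z,
    # where z = (1 + x) - 1 is the value whose log we actually computed.
    y = 1.0 + x
    z = y - 1.0
    if z == 0.0:
        return x          # 1 + x rounded to exactly 1; log(1+x) ~ x here
    return x * (math.log(y) / z)

x = 1e-10
reference = math.log1p(x)          # accurate library routine
naive = math.log(1.0 + x)          # suffers from roundoff in 1 + x
corrected = log1p_corrected(x)

print(abs(naive - reference) / reference)      # roughly 1e-7 relative error
print(abs(corrected - reference) / reference)  # near machine precision
```

On a typical double-precision machine the naive version loses about half its significant digits for $x \approx 10^{-10}$, while the corrected version agrees with math.log1p to nearly full precision.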