What is a nice way to compute $f(x) = x / (\exp(x) - 1)$?
I want it to be stable near $f(0) = 1$. Is there a nice function that does this already, like maybe a hyperbolic trig function or something like expm1, or should I just check whether $x$ is near zero and use a polynomial approximation there?
-
$f(x)={e^{-x/2}\over{\rm sinc}(i x/2)}\ .$ – 2012-08-22
5 Answers
Consider the Bernoulli numbers, defined by the recursive formula
$B_0=1,\qquad \sum_{k=0}^{n}\binom{n+1}{k}B_k=0\quad\text{for } n\geq 1.$
This gives the sequence:
$\{B_n\}_{n\in \Bbb N}=\left\{ 1,-\frac 1 2,\frac 1 6 ,0,-\frac 1 {30},0,\dots\right\}$
Its generating function is
$ \sum_{n=0}^\infty B_n \frac{x^n}{n!}=\frac{x}{e^x-1}$
Its first few terms are
$1-\frac x 2 +\frac {x^2}{12}-\frac{x^4}{720}+\cdots$
The numbers' denominators grow quickly, so convergence is not a problem near the origin (the series converges for $|x|<2\pi$). Away from the origin the function is essentially $-x$ for large negative $x$ and $0$ for large positive $x$, so a few terms should suffice for the near-origin approximation.
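This recursion and series are easy to sketch in Python; the names `bernoulli` and `f_series` below are my own, and exact `Fraction` arithmetic for the coefficients is just one convenient choice:

```python
import math
from fractions import Fraction

def bernoulli(n):
    """B_0..B_n via the recursion sum_{k=0}^{m} C(m+1,k) B_k = 0 (m >= 1),
    solved for B_m at each step, starting from B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(math.comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

def f_series(x, terms=8):
    """Truncated Bernoulli series sum_n B_n x^n / n! for x/(e^x - 1),
    accurate near the origin."""
    B = bernoulli(terms - 1)
    return sum(float(B[n]) * x**n / math.factorial(n) for n in range(terms))
```

For $x=10^{-8}$ this returns $1-5\times10^{-9}$ to full precision, whereas the naive quotient $x/(\exp(x)-1)$ loses about half its digits to cancellation.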
-
Dear Pedro - Please forgive this if it's a dopey question. Would you please show how to derive your recursive formula $\sum_{k=0}^{n}\binom{n+1}{k}B_k=0$ from the generating function relation, which is my typical starting point? Thanks. Regards, – 2015-06-10
If you don't want to use the expm1() function for some reason, one possibility, detailed in Higham's book, is to let $u=\exp x$ and then compute $\log u/(u-1)$. The trick is attributed to Velvel Kahan.
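A sketch of the trick in Python (the name `f_kahan` is my label): the point is that $\log u$ and $u-1$ are computed from the same rounded $u$, so their correlated rounding errors cancel in the quotient.

```python
import math

def f_kahan(x):
    """x/(e^x - 1) via Kahan's trick: substitute log(u) for x, where
    u = exp(x); the rounding in log(u) and u - 1 cancels in the ratio."""
    u = math.exp(x)
    if u == 1.0:          # exp(x) rounded to 1: f(x) = 1 to full precision
        return 1.0
    return math.log(u) / (u - 1.0)
```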
-
@J.M.: Yes, I believe this shows it: $\frac{\log(\exp(x)+\delta)}{\exp(x)+\delta-1}-\frac{\log(\exp(x))}{\exp(x)-1}\sim\frac12\delta$ for small $x$ and $\delta$. Very nice. – 2012-08-22
You mention using hyperbolic functions. You might try $ \frac{x}{\exp(x)-1}=\frac{x/2}{\exp(x/2)\sinh(x/2)} $ This loses no precision if the $\sinh$ is computed to full precision by the underlying system.
Note that $\mathrm{expm1}(x)=2\exp(x/2)\sinh(x/2)$.
Example: 15 digit calculations
$x=.00001415926535897932$: $ \begin{align} \frac{x}{\exp(x)-1} &=\frac{.00001415926535897932}{1.000014159365602-1}\\ &=\color{#C00000}{.9999929203}73447 \end{align} $ $ \begin{align} \frac{x/2}{\exp(x/2)\sinh(x/2)} &=\frac{.00000707963267948966}{1.000005000012500\cdot.00000707963267954879}\\ &=\color{#C00000}{.999992920384028} \end{align} $
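A sketch of this rewriting in Python (`f_sinh` is my name for it), which reproduces the worked example above:

```python
import math

def f_sinh(x):
    """x/(e^x - 1) via e^x - 1 = 2 exp(x/2) sinh(x/2); no cancellation,
    because sinh(h) is computed accurately even for tiny h."""
    if x == 0.0:
        return 1.0
    h = 0.5 * x
    return h / (math.exp(h) * math.sinh(h))
```

For the $x = 1.415926535897932\times10^{-5}$ of the example this gives $0.999992920384028$, with all digits correct.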
If your system provides expm1(x), its authors should have worried about errors and stability at least as well as you can. The question is more interesting if that function is not available. Wolfram Alpha gives $1-\frac{x}{2}+\frac{x^2}{12}$ for the second-order series around $0$, so you could check whether $x$ is close to zero and use that there.
-
@kxx: that's what I would do. You could try some cases, comparing against Alpha (which will give arbitrary precision), for confirmation. – 2012-08-21
You can construct the Lagrange interpolation polynomial of degree $n$ for any positive integer $n$. First evaluate $f(x)=\frac{x}{e^x-1}$ at any $n+1$ distinct points $x_0, x_1, \dots, x_n$. Then $L(x):=\sum_{0\leqslant k\leqslant n} f(x_k)\prod_{\substack{0\leqslant j\leqslant n\\ j\neq k}}\frac{x-x_j}{x_k-x_j}$ is the Lagrange polynomial that interpolates $f(x)$ at the points $(x_0,f(x_0)), (x_1,f(x_1)), \dots, (x_n,f(x_n))$.
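A minimal sketch of this construction in Python, interpolating $f$ at five nodes on $[-1,1]$ (the node choice and helper names are mine):

```python
import math

def f(x):
    """x/(e^x - 1), with the removable singularity at 0 filled in."""
    return x / math.expm1(x) if x != 0.0 else 1.0

def lagrange(xs, ys):
    """Return a callable evaluating the Lagrange polynomial through (xs[k], ys[k])."""
    def L(x):
        total = 0.0
        for k, xk in enumerate(xs):
            w = ys[k]                      # f(x_k) times the basis product
            for j, xj in enumerate(xs):
                if j != k:
                    w *= (x - xj) / (xk - xj)
            total += w
        return total
    return L

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
L = lagrange(xs, [f(t) for t in xs])
```

By construction $L$ matches $f$ exactly at the nodes; between them the degree-4 interpolant tracks $f$ closely on this interval (errors well below $10^{-3}$), since $f$ is very smooth there.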
But even if the Lagrange polynomial is a very good approximation of the function it interpolates, its analytic behaviour near turning points may differ radically from that of the original function. That is why you may want to interpolate $f(x)$ with a polynomial that matches not only the function's values but also its derivatives. The Taylor polynomial does this at a single point $a$: it is the Taylor series truncated at any given place (in our case, after the $(n+1)$st term), $\sum_{0\leqslant k\leqslant n}\frac{f^{(k)}(a)}{k!}(x-a)^k.$
My suggestion may look rather naive to the people here, but interpolation polynomials have long been used classically in computer algebra systems.