What is a nice way to compute $f(x) = x / (\exp(x) - 1)$?
I want it to be stable near $x = 0$, where $f(0) = 1$. Is there a nice function that does this already, like maybe a hyperbolic trig function or something like `expm1`, or should I just check if $x$ is near zero and then use a polynomial approximation?
-
Once you use `expm1` to compute $\exp(x)-1$, there's no further loss of significance in dividing $x$ by it, is there? – 2012-08-21
-
@Rahul: You are suggesting to use x/expm1(x) but with a single extra check for 0/0 when x is exactly 0? – 2012-08-21
-
Suggesting, yes (though to be clear, not with a great deal of authority). – 2012-08-21
-
Why not Bernoulli numbers? – 2012-08-22
-
$$f(x)={e^{-x/2}\over{\rm sinc}(i x/2)}\ .$$ – 2012-08-22
5 Answers
Consider the Bernoulli numbers, defined by the recursive formula:
$$B_0=1$$
$$\sum_{k=0}^{n}\binom{n+1}{k}B_k=0\qquad(n\geqslant 1)$$
This gives the sequence: $$\{B_n\}_{n\in \Bbb N}=\left\{ 1,-\frac 1 2,\frac 1 6 ,0,-\frac 1 {30},0,\dots\right\}$$ Its generating function is $$ \sum_{n=0}^\infty B_n \frac{x^n}{n!}=\frac{x}{e^x-1}$$ Its first few terms are $$1-\frac x 2 +\frac {x^2}{12}-\frac{x^4}{720}+\cdots$$ The numbers' denominators grow pretty fast, so you should have no problem with convergence: in fact, the function is essentially $-x$ for large negative $x$ and $0$ for large positive $x$, so a few terms should suffice for the "near origin" approximation.
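To make this concrete, here is a minimal sketch of the truncated series in Python; the helper name `f_series` and the choice to keep only $B_0$ through $B_8$ are mine, and the range of $x$ over which this is accurate enough would need tuning for your precision target.

```python
import math

# Bernoulli numbers B_0..B_8 with the B_1 = -1/2 convention
# (the odd-index entries beyond B_1 are zero).
BERNOULLI = [1.0, -0.5, 1.0 / 6, 0.0, -1.0 / 30, 0.0, 1.0 / 42, 0.0, -1.0 / 30]

def f_series(x):
    """Truncated generating-function series for x/(exp(x)-1) near x = 0."""
    return sum(b * x**n / math.factorial(n) for n, b in enumerate(BERNOULLI))
```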
-
This is pretty cool and in fact it's what I meant by "check if x is near zero and then use a polynomial approximation." – 2012-08-22
-
@kxx, you might even consider building a Padé approximant from the series expansion Peter has shown you. For instance, the series expansion of $$\frac{(x-4)^2+4}{\frac{(x+3)^2}{3}+17}$$ matches the original function up to the fourth series term. – 2012-08-22
-
Dear Pedro - Please forgive this if it's a dopey question. Would you please show how to derive your recursive formula $\sum_{k=0}^{n}\binom{n+1}{k}B_k=0$? – 2015-06-10
If you don't want to use the `expm1()` function for some reason, one possibility, detailed in Higham's book *Accuracy and Stability of Numerical Algorithms*, is to let $u=\exp x$ and then compute $\log u/(u-1)$. The trick is attributed to Velvel Kahan.
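In Python this might look as follows; `f_kahan` is my name for the sketch, and it targets $x$ near $0$ (it makes no attempt to handle overflow or underflow of $\exp x$).

```python
import math

def f_kahan(x):
    """x/(exp(x)-1) via Kahan's trick: the rounding errors made in
    computing u - 1 and log(u) nearly cancel in the quotient."""
    u = math.exp(x)
    if u == 1.0:   # x so small that exp(x) rounded to exactly 1
        return 1.0  # the limit of f as x -> 0
    return math.log(u) / (u - 1.0)
```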
-
I am totally OK with using expm1(). – 2012-08-21
-
Does this give any better precision? $u-1=\exp(x)-1$ would seem to lose precision near $x=0$. The computation of $\exp(x)$ is the place where precision is lost. Am I missing something? – 2012-08-22
-
@robjohn: Neither $u-1$ nor $\log u$ is very accurate for $u \sim 1$. But if you follow the link, it asserts the errors compensate for each other. – 2012-08-22
-
@rob, yes, there is an example in Higham's book that shows how the effects of the errors in computing the numerator and denominator cancel out. I'll try to find the original reference by Kahan, and will link to it in the answer if I manage to track it down. – 2012-08-22
-
@J.M.: Yes, I believe this shows it: $$\frac{\log(\exp(x)+\delta)}{\exp(x)+\delta-1} -\frac{\log(\exp(x))}{\exp(x)-1}\sim\frac12\delta$$ for small $x$ and $\delta$. Very nice. – 2012-08-22
If your system provides `expm1(x)`, its authors should have worried about errors and stability at least as well as you can. The question is more interesting if that is not available. Wolfram Alpha gives $1-\frac{x}{2}+\frac{x^2}{12}$ for the second-order series around $0$, so you could check whether $x$ is close to zero and use that.
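A minimal sketch of that plan, assuming Python's `math.expm1` stands in for your system's `expm1`:

```python
import math

def f(x):
    """x/(exp(x)-1) as x/expm1(x), with the single 0/0 check for x == 0."""
    if x == 0.0:
        return 1.0  # removable singularity: f(0) = 1
    return x / math.expm1(x)
```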
-
My system provides expm1. So you are seconding Rahul's suggestion to use x/expm1(x) with the check for x exactly equal to zero? – 2012-08-21
-
@kxx: that's what I would do. You could try some cases, comparing against Alpha (which will give arbitrary precision), for confirmation. – 2012-08-21
You mention using hyperbolic functions. You might try $$ \frac{x}{\exp(x)-1}=\frac{x/2}{\exp(x/2)\sinh(x/2)} $$ This loses no precision if the $\sinh$ is computed to full precision by the underlying system.
Note that $\mathrm{expm1}(x)=2\exp(x/2)\sinh(x/2)$.
Example: 15-digit calculations
$x=.00001415926535897932$: $$ \begin{align} \frac{x}{\exp(x)-1} &=\frac{.00001415926535897932}{1.000014159365602-1}\\ &=\color{#C00000}{.9999929203}73447 \end{align} $$ $$ \begin{align} \frac{x/2}{\exp(x/2)\sinh(x/2)} &=\frac{.00000707963267948966}{1.000005000012500\cdot.00000707963267954879}\\ &=\color{#C00000}{.999992920384028} \end{align} $$
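A quick sketch of this identity in Python (the name `f_sinh` is mine); with an accurate underlying `sinh` it should keep the digits the naive quotient loses to cancellation in the example above.

```python
import math

def f_sinh(x):
    """x/(exp(x)-1) via the identity exp(x)-1 = 2*exp(x/2)*sinh(x/2)."""
    if x == 0.0:
        return 1.0
    h = 0.5 * x
    return h / (math.exp(h) * math.sinh(h))

# e.g. f_sinh(1.415926535897932e-5) versus the cancellation-prone
# 1.415926535897932e-5 / (math.exp(1.415926535897932e-5) - 1)
```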
You can construct the Lagrange interpolation polynomial of degree $n$ for any positive integer $n$. First sample the values of $f(x) = \frac{x}{e^x - 1}$ at any $n + 1$ points you like, say $x_0, x_1, \dots, x_n$. Then $$L(x) := \sum_{0 \leqslant k \leqslant n} f(x_k) \prod_{\substack{0 \leqslant j \leqslant n \\ j \neq k}} \frac{x - x_j}{x_k - x_j}$$ is the Lagrange polynomial that interpolates $f(x)$ at the points $$(x_0, f(x_0)), (x_1, f(x_1)), \dots, (x_n, f(x_n)).$$
But even if the Lagrange polynomial is a very good approximation of the function it interpolates, its analytic behaviour between the interpolation points may radically differ from that of the original function. That is why you may want an interpolating polynomial that matches not only the values of $f(x)$ but also its derivatives. Such is the Taylor polynomial, i.e., the Taylor series truncated at a given place (in our case, after the $(n + 1)$st term): $$\sum_{0 \leqslant k \leqslant n} \frac{f^{(k)}(a)}{k!} \, (x-a)^{k}.$$
My suggestion may look rather naive to the people here, but interpolation polynomials have long been used in computer algebra systems.
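For illustration, a generic sketch of Lagrange interpolation in Python; the nodes below are an arbitrary choice, and the reference values are sampled with the stable `x/expm1(x)` formula discussed above.

```python
import math

def f(x):
    """Stable reference values for x/(exp(x)-1)."""
    return 1.0 if x == 0.0 else x / math.expm1(x)

def lagrange(nodes, values, x):
    """Evaluate the Lagrange polynomial through (nodes[k], values[k]) at x."""
    total = 0.0
    for k, xk in enumerate(nodes):
        basis = 1.0
        for j, xj in enumerate(nodes):
            if j != k:
                basis *= (x - xj) / (xk - xj)
        total += values[k] * basis
    return total

nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]   # any n+1 distinct points
values = [f(t) for t in nodes]
print(lagrange(nodes, values, 0.1))   # close to f(0.1)
```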