
I had a look at Knuth's The Art of Computer Programming, Volume 1. In Chapter 1, Section 1.2.2, Exercise 25, he presents the following algorithm for computing logarithms:

given $x\in[1,2)$, do the following:

L1: [Initialize] Set $y\leftarrow0$, $z\leftarrow x/2$, $k\leftarrow1$.

L2: [Test for end] If $x=1$, stop.

L3: [Compare] If $x-z<1$, go to L5.

L4: [Reduce values] Set $x\leftarrow x-z$, $z\leftarrow x/2^k$, $y\leftarrow y+\log_b(2^k/(2^k-1))$, and go to L2.

L5: [Shift] Set $z\leftarrow z/2$, $k\leftarrow k+1$, and go to L2.

(The algorithm needs an auxiliary table which stores $\log_b 2$, $\log_b(4/3)$, and so on, with as many values as the precision of the computer requires.)
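For reference, here is a direct Python transcription of the algorithm (a minimal sketch: the name `knuth_log`, the tolerance `eps`, and computing the table entries with `math.log` on the fly instead of using a precomputed table are my own; the exact test $x=1$ in step L2 is relaxed to $|x-1|<\epsilon$ so that the loop terminates in floating point, for the reason discussed in the answer below):

```python
import math

def knuth_log(x, b=2.0, eps=1e-12):
    """Approximate log_b(x) for x in [1, 2)."""
    y = 0.0            # L1: [Initialize]
    z = x / 2.0
    k = 1
    while abs(x - 1.0) >= eps:    # L2: [Test for end], relaxed from x == 1
        if x - z >= 1.0:          # L3: [Compare] -- go to L5 if x - z < 1
            x = x - z             # L4: [Reduce values]
            z = x / 2.0**k
            y += math.log(2.0**k / (2.0**k - 1.0), b)  # table lookup in Knuth
        else:
            z = z / 2.0           # L5: [Shift]
            k += 1
    return y
```

For example, `knuth_log(1.5, b=2.0)` gives approximately $0.5849625$, matching $\log_2 1.5$.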

Knuth then poses the exercise: explain why the above algorithm terminates, and why it computes an approximation of $y=\log_b x$.

Well, I see that it works, but I am not able to explain why...

  • Yes, $y$ is the result. I'll make it clear. (2011-09-03)

1 Answer


The idea is that if we can express $x$ as a product $x=\prod a_i$, where the $a_i>1$, then $\log x = \sum \log a_i$, and if the $a_i$ decrease fast enough, then we can get a good approximation by taking partial sums, assuming that we have precomputed the $\log a_i$.

Obviously, with such an algorithm, we very often will NOT terminate exactly, and so we will have to stop when the approximation is good enough. This is because if the $a_i$ are chosen from some countable set $B=\{b_i\}$, then only countably many numbers have expressions as products of finitely many of the $b_i$ (allowing repetitions). So step L2 needs to be modified to a condition like $|x-1|<\epsilon$, for some $\epsilon$ determining the accuracy.

So what does the algorithm do? At each stage, it starts with $x$ and finds the smallest $k$ such that $x-z=x(1-2^{-k})\geq 1$, which is equivalent to $x\geq(1-2^{-k})^{-1}=\frac{2^k}{2^k-1}=a_i$. We then replace $x$ with $x/a_i$ and continue recursively, knowing that $\log x = \log a_i + \log(x/a_i)$.
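To make this concrete (a worked example of my own): take $x=1.5$. The smallest $k$ with $1.5\geq 2^k/(2^k-1)$ is $k=2$, giving the factor $4/3$ and leaving $1.5/(4/3)=9/8$; for $9/8$ the smallest such $k$ is $4$, giving $16/15$ and leaving $1.0546875$; and so on. Thus $\log 1.5=\log\frac{4}{3}+\log\frac{16}{15}+\log(1.0546875)$, and the partial sums $\log\frac{4}{3}$, $\log\frac{4}{3}+\log\frac{16}{15},\dots$ are exactly the successive values of $y$ in the algorithm.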

In the end, we know that we will be off by a (multiplicative) factor of no more than $1+\epsilon$, which contributes $\log(1+\epsilon)\approx \epsilon$ to the logarithm (if $\epsilon$ is small, and we can use Taylor series to put explicit bounds on $\epsilon$ to make the error in this approximation as small as desired), and because of our assumption that $1\leq x <2$, this translates to an absolute error of at most $\log 2 \log (1+\epsilon) < 2\epsilon$.

  • @Aaron Also, I think you should review your upper bound for the error made by the algorithm: you say that it is $\log 2\log(1+\epsilon)$, but first I don't see how you came up with this starting only from the fact that $1 \leq x < 2$, and, most importantly, I don't think I can disagree with my computer, which tells me, after tons of tests, that the relation $\log x-\log'x < \log(1+\epsilon)$ (where $\log'x$ is the computed value) is valid, while yours is not. By the way, keep in mind that I'm still not completely sure that $\log(1+\epsilon)$ is the actual upper bound of the error; I only see that the error is less than this value. (2014-06-14)
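A quick way to reproduce the kind of test this comment describes is a sketch like the following (it assumes the `knuth_log` transcription from the question; the harness and the choice of $\epsilon$ are illustrative, not part of the thread):

```python
import math
import random

# Empirical check of the error bound discussed above, using natural
# logarithms (b = e) and the knuth_log sketch from the question.
eps = 1e-6
worst = 0.0
for _ in range(100_000):
    x = random.uniform(1.0, 2.0)
    err = abs(math.log(x) - knuth_log(x, b=math.e, eps=eps))
    worst = max(worst, err)

# The comment's bound predicts worst < log(1 + eps).
print(worst, math.log1p(eps))
```

On the comment's analysis, the first printed number should stay below the second (up to floating-point rounding).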