
Related to this question and this Project Euler problem (Problem 99), I came up with a recursive algorithm for comparing two numbers of the form $x^y$ (with $x>1$ and $y\ge 0$) without explicit use of logarithms.

To compare $x_1^{y_1}$ to $x_2^{y_2}$ (resulting in "$\lt$", "$=$", or "$\gt$"):

  1. If $x_1=x_2$, compare $y_1$ to $y_2$ and return the result.
  2. If $x_1\gt x_2$, compare $x_2^{y_2}$ and $x_1^{y_1}$ and return the opposite result.
  3. (Now $x_1 \lt x_2$.) If $y_1\le y_2$, then return "$\lt$".
  4. (Now $y_1 \gt y_2$.) Compare $x_1^{y_1-y_2}$ to $\left(x_2/x_1\right)^{y_2}$ and return the result.

The last step is justified by noting that $$ x_1^{y_1} - x_2^{y_2} = x_1^{y_2}\left(x_1^{y_1-y_2}-(x_2/x_1)^{y_2}\right), $$ so the new comparison has the same result as the old one; and the algorithm converges, in the sense that the exponents and the bases decrease at each iteration (to $0$ and to $1$ respectively). My question is, what exactly is this algorithm doing, and how many iterations should I expect it to take to do it? It looks like Euclid's method for finding the greatest common divisor, a little. Is there a more precise correspondence, and are the iterates meaningful?
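The four steps above can be transcribed directly into a short recursive sketch. This is my own rendering (the function name and the use of `fractions.Fraction` for the intermediate base ratios are choices I've made, not part of the question); exact rationals keep the comparisons in step 4 free of rounding error.

```python
from fractions import Fraction

def compare_powers(x1, y1, x2, y2):
    """Compare x1**y1 with x2**y2 (bases > 1, exponents >= 0)
    without computing the powers or taking logarithms.
    Returns -1 for <, 0 for =, 1 for >."""
    x1, x2 = Fraction(x1), Fraction(x2)
    # Step 1: equal bases -- just compare the exponents.
    if x1 == x2:
        return (y1 > y2) - (y1 < y2)
    # Step 2: ensure x1 < x2 by swapping and negating the result.
    if x1 > x2:
        return -compare_powers(x2, y2, x1, y1)
    # Step 3: smaller base and no larger exponent means "<"
    # (with a guard for the degenerate case y1 = y2 = 0, where both powers are 1).
    if y1 <= y2:
        return 0 if y1 == y2 == 0 else -1
    # Step 4: compare x1**(y1 - y2) with (x2/x1)**y2; note x2/x1 > 1 here.
    return compare_powers(x1, y1 - y2, x2 / x1, y2)
```

Since step 4 performs one subtraction of exponents per call, the recursion can get deep for adversarial exponent pairs; an iterative version would avoid that.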

  • Have you tried it for any largish hard cases (where the two numbers are close)? (2012-01-06)
  • If the only condition is not to use logs explicitly, why can't you just take the difference and see which is greater? Is the exponentiation an issue in such an approach? (2012-01-06)
  • @Nikhil: Yes, the issue is when the numbers are very large. (2012-01-06)
  • Just wondering, did you consider the case when $x_1>x_2$ and $y_1<y_2$? (2012-01-06)
  • @Emmad: Yes, this is handled in step $2$ by recursion, which leads to step $4$ being executed in the recursive call. (2012-01-06)
  • @mjqxxxx: +1, a very nice idea. There is quite a precise correspondence, in that you can formulate Euclid's algorithm by splitting up each step $r_{k-2}=q_kr_{k-1}+r_k$ as $q_k$ subtractions of $r_{k-1}$ from $r_{k-2}$, and then that's just what you're doing with the exponents. (2012-01-06)
  • You can directly transfer this to comparing products using only addition and no multiplication. In a sense, the Euclidean algorithm determines how to efficiently simultaneously build up two numbers from scratch using only addition, and you're using this to efficiently simultaneously build up two powers using only multiplication, with the added twist that you can check whether you're done at each intermediate step. (2012-01-06)
  • @joriki, thanks, I was not clear on what "return the opposite result" meant. (2012-01-06)
  • @joriki: I thought that might be the case, since you're effectively comparing the products $y_1 z_1$ and $y_2 z_2$ where $z_i = \log x_i$. And then the subtraction enters via $z_2 - z_1 = \log(x_2/x_1)$. The part I don't entirely get is how I'm getting away with not knowing the *value* of the logarithms, but only which one is bigger... or is that how Euclid's algorithm would work for comparing products, too? (2012-01-07)
  • @mjqxxxx: Yes, in comparing products it only subtracts and compares and doesn't need to divide; in your case you only divide and compare and don't need to take logarithms. (2012-01-07)

1 Answer


This is a bit long for a comment, so I'll start turning it into an answer, though there may well be more to say e.g. about the meaning of the iterates.

I find it slightly easier to think about the corresponding algorithm for comparing products without multiplication; as discussed in the comments, this is directly analogous. Let's see how this plays out in an example (assuming that none of the intermediate steps yield a successful comparison):

$$ \begin{eqnarray} 54a&\lessgtr&21b\\ 33a&\lessgtr&21(b-a)\\ 12a&\lessgtr&21(b-2a)\\ 12(3a-b)&\lessgtr&9(b-2a)\\ 3(3a-b)&\lessgtr&9(2b-5a)\\ 3(8a-3b)&\lessgtr&6(2b-5a)\\ 3(13a-5b)&\lessgtr&3(2b-5a)\\ \end{eqnarray}$$

The algorithm ends with $\gcd(54,21)=3$ and a direct comparison of the numbers $13a-5b$ and $2b-5a$, which is the same as comparing $18a$ and $7b$, the original comparison reduced by the gcd.
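The subtractive product comparison used in the example can be written out as follows (a sketch with invented names; `fractions.Fraction` stands in for the positive reals $a$ and $b$, which play the role of $\log x_1$ and $\log x_2$):

```python
from fractions import Fraction

def compare_products(m, a, n, b):
    """Compare m*a with n*b (a, b > 0; m, n integers >= 0) using only
    subtraction and comparison -- the additive analogue of the power
    comparison, with exponentiation replaced by multiplication.
    Returns -1 for <, 0 for =, 1 for >."""
    a, b = Fraction(a), Fraction(b)
    if a == b:                                     # step 1
        return (m > n) - (m < n)
    if a > b:                                      # step 2: swap, flip result
        return -compare_products(n, b, m, a)
    if m <= n:                                     # step 3 (now a < b)
        return 0 if m == n == 0 else -1
    return compare_products(m - n, a, n, b - a)    # step 4: m*a - n*b = n*... 
```

With the example's numbers, taking $a=7$ and $b=18$ makes $54a=21b=378$, matching the reduced comparison of $18a$ with $7b$ after dividing out $\gcd(54,21)=3$.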

The efficiency of the Euclidean algorithm is usually analyzed by counting each quotient-and-remainder computation as one step; in that case the worst case is two consecutive Fibonacci numbers, and the algorithm takes about $\log n/\log\phi$ steps. However, if you perform individual subtractions instead, this may take time linear in the exponents, the worst case in this scenario being when one of the exponents is $1$ and you're merely multiplying out the power on the other side, very inefficiently, one factor at a time. At first I thought your algorithm might be an efficient way to organize exponentiation by squaring, but that example shows that it's somewhat complementary to it. Indeed, you could combine the two and make the complexity logarithmic by doing the intermediate steps using repeated squaring.

Two more comments: The bases never influence the control flow, except by ending it; the sequence of computations is entirely determined by the Euclidean algorithm being applied to the exponents. And if you're looking for precision and trying to compare numbers close to each other, I don't think you're gaining much other than reducing by the gcd, since you're explicitly calculating ratios of high powers of the bases, whose quotient is (the gcd-th root of) the quotient of the numbers being compared.
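To see the control-flow claim concretely, here is a small instrumented sketch (my own naming, not from the question) that records the exponent pairs the algorithm visits; up to the swaps performed in step 2, these are exactly the iterates of the subtractive Euclidean algorithm on $(y_1, y_2)$:

```python
from fractions import Fraction

def exponent_trace(x1, y1, x2, y2):
    """Run the comparison iteratively and record the (y1, y2) pairs
    visited. Pairs are appended as encountered, so step-2 swaps may
    reorder a pair relative to the usual Euclidean presentation."""
    x1, x2 = Fraction(x1), Fraction(x2)
    trace = []
    while True:
        trace.append((y1, y2))
        if x1 == x2:                     # step 1 ends the run
            break
        if x1 > x2:                      # step 2: swap so that x1 < x2
            x1, y1, x2, y2 = x2, y2, x1, y1
        if y1 <= y2:                     # step 3 ends the run
            break
        x2, y1 = x2 / x1, y1 - y2        # step 4: subtract exponents
    return trace
```

For instance, comparing $2^3$ with $3^2$ visits the exponent pairs $(3,2)$, $(1,2)$, $(1,1)$, just as subtracting the smaller from the larger would.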