
A loop goes through all numbers from 1 to N to find perfect numbers. For each number in the range, it checks every smaller number to see whether it is a divisor, by taking the number modulo that candidate and checking whether the result is zero. It keeps a running total of the divisors found, and stops if the running total exceeds the number being checked. What's the complexity of this? I think it's something exponential... Here's the C++ code I wrote to do it:

for (int i = 0; i < nums_to_check.size(); i++) {
    number = nums_to_check[i];
    running_total = 0;

    // Scan all candidate divisors below number, largest first.
    for (unsigned long long j = number - 1; j > 0; j--) {
        if (number % j == 0) {
            running_total += j;
            if (running_total > number)   // divisor sum already too large: give up
                break;
        }
    }

    if (running_total == number)          // proper divisors sum exactly to number
        perfect_numbers.push_back(number);
}
  • @marcvanleeuwen I'm aware that a brute-force approach is ridiculous, but it's for an assignment that isn't about the perfect numbers; it's just filler. And yeah, I was changing a lot of things in the code before I posted it. It's a rough, rough draft. 2012-03-15

1 Answer


Looking at the structure of this algorithm, you will find that it has two nested for loops. The outer loop runs exactly nums_to_check.size() times, so it is sufficient to multiply the average time complexity of the inner loop by that number.

Now, the inner loop obviously runs at most number-1 times, so its running time is bounded by a constant times the maximum value occurring in nums_to_check. Therefore, a trivial upper bound on the running time of this algorithm is $O(|N| \cdot \max N)$, where $N$ denotes the multiset of numbers in nums_to_check (the order of the numbers in $N$ is actually irrelevant).
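To make the bound concrete, suppose (purely for illustration; the question does not say what nums_to_check contains) that it holds the integers from $1$ to $M$. Then $|N| = M$ and $\max N = M$, so the bound specializes to

$$O(|N| \cdot \max N) = O(M \cdot M) = O(M^2).$$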

If you allow huge numbers in $N$ (e.g., using a BigInteger class or the like), you may have to factor in the bit complexity of the arithmetic operations (in particular the modulo in the inner loop), but this depends on the chosen complexity measure (unit complexity vs. bit complexity). On the other hand, if you are content with bounded integers (e.g., unsigned long long), unit complexity is within a constant factor of bit complexity, so the distinction makes no difference for the analysis.

Finally, you may get a better estimate by analyzing the probability that the inner loop is terminated early.
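To get a feel for how much the early break actually saves, here is a minimal, self-contained sketch (not the asker's program; the limit of 10000 and the iteration counter are my own illustrative choices) that re-runs the same descending divisor scan while counting inner-loop iterations, so the measured work can be compared against the trivial quadratic bound:

#include <iostream>

int main() {
    const unsigned long long limit = 10000;        // illustrative upper bound, not from the question
    unsigned long long total_iterations = 0;       // total inner-loop work across all candidates

    for (unsigned long long number = 1; number <= limit; ++number) {
        unsigned long long running_total = 0;

        // Same descending divisor scan as in the question, with a counter added.
        for (unsigned long long j = number - 1; j > 0; --j) {
            ++total_iterations;
            if (number % j == 0) {
                running_total += j;
                if (running_total > number)        // early termination
                    break;
            }
        }
    }

    // Compare the measured work against the trivial bound limit * limit.
    std::cout << "inner-loop iterations:      " << total_iterations << '\n'
              << "trivial bound (limit*limit): " << limit * limit << '\n';
    return 0;
}

Note that in this descending scan no divisor can be found before j reaches number/2, so the break can only trigger in the lower part of the range anyway.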

  • True, but my argument will still hold even if the algorithm is changed to work with a useful version of the loop (i.e., counting from 1 to number), where early termination for abundant numbers will occur (see the sketch below). 2012-03-15
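A minimal sketch of what such an ascending loop might look like (my own reading of "counting from 1 to number", not code from the question; the limit of 10000 is again an arbitrary illustrative choice). Divisors are scanned from 1 upward, so for an abundant number the running total exceeds the number partway through and the loop stops early, while the $O(|N| \cdot \max N)$ bound from the answer still applies:

#include <iostream>

int main() {
    const unsigned long long limit = 10000;          // illustrative range, not from the question
    for (unsigned long long number = 2; number <= limit; ++number) {
        unsigned long long running_total = 0;

        // Ascending divisor scan: small divisors are found first.
        for (unsigned long long j = 1; j < number; ++j) {
            if (number % j == 0) {
                running_total += j;
                if (running_total > number)          // abundant number: stop early
                    break;
            }
        }

        if (running_total == number)                 // proper divisors sum exactly to number
            std::cout << number << " is perfect\n";
    }
    return 0;
}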