
I have the following pseudocode and need to figure out its approximate running time. Can anyone help? Please explain each step and the reasoning. Thanks in advance.

    int m = n;
    while (m > 0)
        for k = 1 to m
            // do work  ---- CONSTANT
        m = floor(m/2)
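For anyone who wants to experiment, here is a direct Python translation of the pseudocode (my own sketch; the counter `work` stands in for the constant-time body, so the return value is the number of times the body runs):

```python
def do_loops(n):
    """Run the pseudocode and count executions of the constant-time body."""
    work = 0
    m = n
    while m > 0:
        for k in range(1, m + 1):  # inner loop: k = 1 to m
            work += 1              # stands in for the constant "do work"
        m = m // 2                 # m = floor(m/2)
    return work
```

For example, `do_loops(8)` runs the body $8 + 4 + 2 + 1 = 15$ times, which already hints at the geometric sum discussed in the answers below.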

There is another algorithm I would appreciate a breakdown of as well. How would I compute its running time?

NB: This is taken from the Wikipedia article on merge sort, but since that did not help, I was wondering if someone here could break it down so I understand where the $O(n \log n)$ comes from, for both the best case and the worst case.

http://en.wikipedia.org/wiki/Merge_sort

A

    function merge_sort(m)
        if length(m) ≤ 1
            return m
        var list left, right, result
        var integer middle = length(m) / 2
        for each x in m up to middle
            add x to left
        for each x in m after middle
            add x to right
        left = merge_sort(left)
        right = merge_sort(right)
        result = merge(left, right)
        return result

B

    function merge(left, right)
        var list result
        while length(left) > 0 or length(right) > 0
            if length(left) > 0 and length(right) > 0
                if first(left) ≤ first(right)
                    append first(left) to result
                    left = rest(left)
                else
                    append first(right) to result
                    right = rest(right)
            else if length(left) > 0
                append first(left) to result
                left = rest(left)
            else if length(right) > 0
                append first(right) to result
                right = rest(right)
        end while
        return result
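The two pseudocode listings translate almost line for line into runnable Python. Here is one such translation (my own sketch, using list slicing for the split and index pointers instead of `first`/`rest` to avoid repeated list copying):

```python
def merge_sort(m):
    """Sort list m by splitting in half, recursing, and merging."""
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = merge_sort(m[:middle])    # elements up to middle
    right = merge_sort(m[middle:])   # elements after middle
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # one of the two lists is exhausted; append the remainder of the other
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```

The merge step does $O(n)$ work per level of recursion, and the halving gives $O(\log n)$ levels, which is where the $O(n \log n)$ bound comes from.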
  • @user10695: I believe you mean `while (m > 0)`, not `while (m < 0)`. Otherwise, it won't run for any positive value of $n$, and will diverge for any negative value of $n$. (2011-07-14)

2 Answers


I don't know if another answer can be more illuminating to you, but let me try saying it a bit differently anyway.

First, consider the inner loop:

    for k = 1 to m
        // do work  ---- CONSTANT

Because it's doing a constant amount of work $m$ times (for k=1 to m), this takes approximately time $m$, whatever the value of $m$ is. (To be precise, it takes $\Theta(m)$ time, which means that there are constants $c_1$ and $c_2$ such that the time it takes is between $c_1m$ and $c_2m$, but when finding the "approximate running time", i.e., the asymptotic complexity, we usually ignore constant factors.)

Now the outer loop looks like

    m = n
    while (m > 0)
        // Do 'm' amount of work
        m = floor(m/2)

where I've replaced the inner loop with what we know about its time. So the first time the loop is run, $m=n$ and it takes time $n$. By the second time the loop is run, $m$ is halved, so $m = n/2$ and it takes time $n/2$ (I'm ignoring writing $\lfloor n/2 \rfloor$, because that's within a constant factor of $n/2$.) The third time it takes $n/4$, etc. So the total time taken by the code is: $\begin{align}&n + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \dots \\ &= n\left(1 + \frac12 + \frac14 + \frac18 + \dots\right)\end{align}$ until the term becomes less than $1$ (so $m$ would have become $0$ and the code would have halted).

Now, the sum $\left(1 + \frac12 + \frac14 + \frac18 + \dots\right)$ is at most $2$, so the above sum for the running time is at most $2n$. Ignoring constant factors, we can say that the code takes approximately time $n$, which is shorthand for saying that the time it takes is linear in $n$. Or, if we did the whole thing formally, we would say it takes time $\Theta(n)$.
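You can check the $2n$ bound numerically. This small sketch (mine, not part of the original answer) accumulates the terms $n, \lfloor n/2 \rfloor, \lfloor n/4 \rfloor, \dots$ exactly as the loop does:

```python
def total_time(n):
    """Sum n + floor(n/2) + floor(n/4) + ... until the term hits 0."""
    total, m = 0, n
    while m > 0:
        total += m
        m //= 2
    return total
```

For every positive $n$, the result sits between $n$ (the first term alone) and $2n$ (the geometric-series bound), which is exactly the $\Theta(n)$ claim.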

(As it happens, we can analyse the number of terms in the sum: if the last term is $\frac{n}{2^k}$ (each term is of this type), then $k$ is such that $2^k \le n < 2^{k+1}$, which means $k = \lfloor \lg n \rfloor$, but all this is irrelevant to the actual problem here.)

  • @user10695: Those are entirely different from the original question here, and certainly should be asked as a separate question, if at all. But it's more efficient for you to get a good book and read it: the second chapter of *Algorithm Design* by Jon Kleinberg and Éva Tardos has a good tutorial on this kind of analysis. (2011-07-16)

Assuming that floor(m/2) means 'replace m with floor(m/2)' then:

Assume $n$ is a power of 2, so that $n=2^p$. You set $m = n$ in the first line. In the first line of the while loop you iterate through $k = 1...m$, which takes $am = 2^pa$ operations, where $a$ is the number of steps in your //do work block.

Then you halve $m$ - the amount of work here is nontrivial, let's call it $d$. It will almost certainly turn out to be insignificant, but...

Now you iterate through $k = 1 \ldots m/2$, doing $am/2 = 2^{p-1}a$ operations, and again halve $m$, taking $d$ operations.

In the next pass you do $am/4 + d$ operations, in the pass after that $am/8 + d$, and so on, until in the final pass you do $a$ operations.

You go through the while loop $p+1$ times (once for each value $m = 2^p, 2^{p-1}, \dots, 1$), and the total amount of work you do is

$\begin{align} (p+1)d + a(1 + 2 + \cdots + 2^p) & = (p+1)d + (2^{p+1}-1)a \\ & = (\log_2 n + 1)d + (2n - 1)a \end{align}$

If $a>d$ (likely in applications) and $a$ is independent of the loop value $k$ (may or may not be true) then the running time is linear in $n$.
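As a sanity check on the bookkeeping, here is a small Python counter (my own illustration, not from the answer) that charges $a$ operations per inner-loop iteration and $d$ per halving. Note the halving cost is incurred once per pass, and there are $p+1$ passes when $n = 2^p$, since $m$ runs through $2^p, 2^{p-1}, \dots, 1$:

```python
def simulate(n, a, d):
    """Total operation count: a per inner iteration, d per halving step."""
    ops, m = 0, n
    while m > 0:
        ops += a * m   # inner loop: a operations for each of m iterations
        ops += d       # cost of the halving step m = floor(m/2)
        m //= 2
    return ops
```

For $n = 2^p$ this agrees with the closed form $(p+1)d + (2^{p+1}-1)a$; for example, with $p = 3$, $a = 2$, $d = 5$ both give $4 \cdot 5 + 15 \cdot 2 = 50$.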

  • Definitely trying to learn from it all; I am still at about 20% on doing this, but would like to master it. (2011-07-15)