I'm going through a presentation on the topic of analysis of algorithms and I encountered the following inequalities as part of the steps being explained:
- the loop stops: $n/2^k \le 0$
- that is: $n < 2^k$
- thus, when the loop stops: $k > \lceil \lg n \rceil$
(Note: For the sake of being explicit, $\lg n$ is shorthand for $\log_2 n$. Also, $n/2^k$ can be assumed to be integer division.)
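As a concrete check (my own, not from the slides): with integer division and $n = 5$, the quantity $n/2^k$ first reaches 0 at $k = 3$, and $\lceil \lg 5 \rceil = 3$ as well, so $k$ equals the ceiling rather than strictly exceeding it. A quick sketch:

```python
import math

n = 5
k = 0
while n // 2**k > 0:   # integer division, as in the slides
    k += 1
print(k)                       # 3: smallest k with n // 2**k == 0
print(math.ceil(math.log2(n))) # 3: ceil(lg 5)
```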
Now, it's been a while for me, but from what I've checked, the change from ≤ to < in step 2 looks like it might be a typo, except that it carries over to the subsequent steps.
If it's not a typo, the only explanation I can think of is that it's related to the 0 on the right-hand side when multiplying both sides of step 1 by $2^k$, but I couldn't find an identity or property that justifies changing ≤ to < or ending up with a non-zero value on the right-hand side.
Can someone explain what I'm missing between these steps?
The pseudo-code for the algorithm in question, quoted from the slide, is below:
p = 1;
e = a;
N = n;
while (N > 0) {
    if (n % 2 != 0)
        p *= e;
    e *= e;
    N /= 2;
}
It's analyzed to be $O(\lg n)$. A few notes for clarity, though not necessarily relevant to the original question:
- `n` is the input size of the problem instance, and it's a non-negative integer (e.g. 50)
- statements like `p *= e` are shorthand for $p \leftarrow p * e$
- `a` is not defined anywhere, but can be assumed to be an arbitrary, though unknown, integer constant
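Not required for the question, but here is a direct Python translation I used to sanity-check the iteration count (my own code, not from the slides; I've read the slide's `n % 2` as `N % 2`, since `n` never changes inside the loop):

```python
def power(a, n):
    """Iterative fast exponentiation; also returns the loop's iteration count."""
    p, e, N = 1, a, n
    k = 0
    while N > 0:
        if N % 2 != 0:   # reading the slide's `n % 2` as `N % 2`
            p *= e
        e *= e
        N //= 2          # integer division, as assumed above
        k += 1
    return p, k

# For n >= 1 the loop runs n.bit_length() times, i.e. floor(lg n) + 1 --
# which matches ceil(lg n) only when n is a power of two.
for n in range(1, 1000):
    p, k = power(3, n)
    assert p == 3 ** n
    assert k == n.bit_length()
```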