12

This week we had a bright interviewee who claimed that an array has constant search time and that a map has a search time even faster than that. Now, to me, if some algorithm has O(1) time complexity, the only way for another equivalent algorithm to be faster is to have a smaller constant coefficient in the O(1) estimate (for example, one algorithm takes at most 230 primitive operations while another takes at most 50 primitive operations; the latter is faster, although both have O(1) complexity).

Is it possible for an algorithm to be faster than O(1) other than by having a smaller coefficient in the estimate?
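
To make the comparison concrete, here is a rough Python sketch I put together (just an illustration, not part of the interview; the numbers depend entirely on the machine and interpreter): both a list index and a dict lookup take time that does not grow with n, so any difference between them is only a constant factor.

```python
# Rough illustration: list indexing and dict lookup are both O(1);
# any speed difference is a constant factor, not an asymptotic one.
# Timings are machine- and interpreter-dependent.
import timeit

n = 1_000_000
arr = list(range(n))
table = {i: i for i in range(n)}

# Access a fixed position/key repeatedly; the cost does not depend on n.
array_time = timeit.timeit(lambda: arr[123_456], number=1_000_000)
map_time = timeit.timeit(lambda: table[123_456], number=1_000_000)

print(f"list index : {array_time:.3f} s for 10^6 lookups")
print(f"dict lookup: {map_time:.3f} s for 10^6 lookups")
```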

  • 0
    @MartianInvader: No, you cannot add constants: $O(\frac{1}{n}) \subset O(1)$. That is usually possible because the function itself has superconstant growth. (2011-07-23)

4 Answers

13

It is both reasonable and common to assume that any algorithm needs at least a positive constant amount of time for every input. For example, any useful algorithm should answer something (e.g. YES/NO or some number, string, etc.), and it is reasonable to assume that doing so takes at least some constant amount of time. Under this assumption, no algorithm can have a subconstant time complexity.

(In actual computers, this constant minimum amount of time may become smaller by advance in science and technology, but no matter how fast computers become, it is still there as a constant which does not depend on the input size.)

Vhailor comments that a hypothetical algorithm which waits 1/n seconds, where n is the input length, would satisfy the condition. The argument above assumes that no such algorithm exists. To justify this, I would argue that it is unreasonable to assume that a machine can wait 1/n seconds for arbitrarily large n, because that would require faster and faster information processing as n grows.

Sometimes you may hear “subconstant-time operations,” but before it freaks you out, check what it really means. Usually, it means that the required time divided by some other parameter is subconstant, not that the time itself is subconstant.

  • 0
    @ShreevatsaR: Oops, thanks for the correction! (2011-07-22)
5

Others have already argued that in the context of sequential algorithms and classical models of computation (e.g., Turing machines or the RAM model), a running time of $0$ makes little sense. However, other models exist.


In the context of distributed algorithms, we usually define that the running time of an algorithm equals the number of communication rounds.

With this definition, we have got not only non-trivial graph algorithms with running time $O(1)$, but also some graph algorithms with running time $0$.

As a simple example, the randomised 2-approximation algorithm for maximum cut has running time $0$ in this model: there is no need to communicate with the neighbours; each node (simultaneously in parallel) just flips a coin and produces its own local output based on the random coin flip.
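
As a concrete (sequential) simulation of this, here is a minimal Python sketch; the adjacency-list representation is my own assumption and not part of the answer:

```python
# Minimal sketch of the zero-round randomised max-cut algorithm: each
# node flips a fair coin independently and outputs its own side of the
# cut without any communication. In expectation, half of all edges are
# cut, which gives a 2-approximation of the maximum cut.
import random

def random_cut(nodes, edges):
    # Every node decides its side locally; no node inspects its
    # neighbours, so this corresponds to 0 communication rounds.
    side = {v: random.randint(0, 1) for v in nodes}
    cut = [(u, v) for (u, v) in edges if side[u] != side[v]]
    return side, cut

if __name__ == "__main__":
    nodes = range(6)
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
    side, cut = random_cut(nodes, edges)
    print("sides:", side)
    print(f"cut {len(cut)} of {len(edges)} edges (expected: about half)")
```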


So it is not merely that one could, hypothetically speaking, define a model of computation in which a running time of $0$ would make sense by abusing some contrived definition of "time".

We do have such models, they are actively studied, and the definition of "time" is natural for these purposes. In particular, with precisely this definition, time (number of communication rounds) and distance (shortest-path distance in the graph) become more or less equivalent concepts.

4

The Big-O of an algorithm is the Big-O of the number of steps the corresponding Turing machine takes before halting. So consider the trivial Turing machine whose initial state is in the set of final states:

$ q_0 \in F $

The number of steps it takes before halting is 0.

Now consider the formal definition of Big-O:

$ f(x) = O(g(x)) $ if and only if $ \exists\, M>0,\ x_0 $ such that $ |f(x)|\leq M|g(x)| $ for all $ x > x_0 $

The Big-O of the trivial Turing machine is then $ O(0) $, which is "faster" than $ O(1) $ in some sense.
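
As a sanity check, here is a tiny Python simulation of that trivial machine (my own framing, not part of the original argument): since the start state is already accepting, the step counter never advances, regardless of the input.

```python
# Trivial Turing machine whose start state q0 is in the set of final
# states F: it halts before executing a single transition, so its step
# count is 0 on every input.
def run_trivial_tm(tape):
    state = "q0"
    final_states = {"q0"}   # q0 is in F
    steps = 0
    while state not in final_states:
        # Never reached: the machine is accepting from the very start.
        steps += 1
    return steps

for tape in ["", "0", "010101", "1" * 1000]:
    assert run_trivial_tm(tape) == 0
print("the trivial TM halts after 0 steps on every input")
```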

  • 3
    Yes, but it depends on how exactly we define the number of steps; I would say that the trivial TM takes 1 step before halting. (2011-07-23)
2

I think Qiaochu Yuan was probably joking, but is actually on the right track, in the sense that the answer to the question really depends on what theoretical, physical, or practical model of computation you have in mind.

It's not unreasonable to imagine models in which a computation can take 0 time. For example, you could have a machine made of gears and levers where you set up the gears in a certain position as an input. To get the output, you turn a crank until, after k revolutions, it freezes up and won't turn anymore. The number k is both the output and the running time. It's possible that k is zero, so the running time is zero. Of course this doesn't count the time it takes to walk into the room, set up the inputs, grasp the crank, etc. But it's not completely unreasonable to imagine a useful mathematical model where you don't count those things as running time.

It's also not unreasonable to imagine that you could have faster and faster information processing as n grows. Sometimes more information makes your task easier. If I'm a book editor and my job is to take a pile of submissions, select the best one, and then write the author a request for the necessary revisions, my job could get easier when the pile is taller, because I can afford to be more picky, I'll probably get a better manuscript, and it won't require as many revisions.

However, I think there is a more fundamental issue that prevents anything from being faster than O(1), which is that we're talking about digital computing, not analog. Whether it's a Turing machine or a desktop computer or the system of gears and levers described above, a digital computer typically operates in discrete cycles. You can get a result in 0 cycles, or in 1 cycle, but you can't have half a cycle.

If you really want to have O(1/n) performance, then there has to be some $n_0$ such that for $n \ge n_0$ the running time is no more than 1/2 a cycle. This means that for $n \ge n_0$ the computation always terminates in 0 cycles. The example of the levers and gears shows that it is possible for a computation to terminate in 0 cycles, but there can't be any such thing as a nontrivial computation that always terminates in 0 cycles for large n. For a computation like that, you simply wouldn't need to use the machine for $n \ge n_0$. Another way of putting it is that if your computation always terminates in 0 cycles for $n \ge n_0$, then your computation isn't just faster than O(1), it's O(0), which means it's trivial.
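
Spelling out that step with the formal definition of big-O (notation mine): suppose the running time $T(n)$, measured in whole cycles, satisfies $T(n) = O(1/n)$. Then

$$\exists\, M > 0,\ n_0 \ \text{ such that } \ T(n) \le \frac{M}{n} \ \text{ for all } n > n_0,$$

so for every $n > \max(n_0, 2M)$ we get $T(n) \le M/n < \tfrac{1}{2}$; since $T(n)$ is a nonnegative integer, this forces $T(n) = 0$. In a discrete model, $O(1/n)$ running time therefore collapses to $O(0)$, i.e. the trivial computation.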

So I think the answer is that performance faster than O(1) is not reasonable for a digital computer, and since big-O notation is really designed for digital computers, it's not obvious how to pose the question in the case of analog computers.

  • 0
    @chazisop: When you say that one has to observe the output, that doesn't necessarily imply that the observation takes constant time. In the particle example, "observe" might mean to measure the angle with a protractor, and then that would certainly take $O(1)$ time. But that's imposing an analog-to-digital step that may not be necessary. For example, the $1/x^2$ analog computer may simply be one module in a much larger and more complicated analog computer. (2011-07-24)