3

Suppose one has an algorithm that solves a problem using at most $f(n)$ computations on inputs of size $n$. How does one prove, if it is the case, that this algorithm is the fastest possible for the problem?

I am aware of the open P vs. NP problem, but I assume there are nontrivial problems for which it has been proved that no faster algorithm exists. What are some such examples, and how is the optimality proved?

  • 0
    Curiously, there is a large class of problems, including many known NP-complete ones where _Levin's optimal search theorem_ provides a concrete algorithm, with guaranteed optimal _asymptotic_ running time. The catch is that it is essentially impossible to _analyze_ the running time of this algorithm without already knowing what the fastest possible running time is -- and also that Levin's algorithm has gigantic constant factors in its running time (where "gigantic" usually means "more than a googol").2012-01-22
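For a flavour of how such a dovetailing search works, here is a toy sketch in Python. This is a hypothetical illustration only, not Levin's actual construction (which enumerates all programs of a fixed universal machine); the names `levin_style_search`, `procedures`, and `verify` are invented for this sketch, and `procedures` is an explicit list of candidate procedures rather than an enumeration of all programs.

```python
from itertools import count

def levin_style_search(procedures, verify):
    """Dovetail over candidate procedures: in phase k, procedure i is run
    for 2**(k - i) further steps, so procedure i receives roughly a 2**-i
    fraction of all work.  If some procedure i finds a verifiable answer
    after T steps, the total work until we return is O(2**i * T): optimal
    up to a (huge) constant factor.  Loops forever if nothing succeeds."""
    running = []
    for k in count():
        if k < len(procedures):
            # start simulating one more procedure each phase
            running.append(procedures[k]())
        for i, gen in enumerate(running):
            for _ in range(2 ** (k - i)):
                try:
                    candidate = next(gen)
                except StopIteration:
                    break
                if verify(candidate):
                    return candidate

# Hypothetical example: find a nonnegative x with x * x == 144.
# Procedure i enumerates the arithmetic progression i, i + 3, i + 6, ...
procedures = [lambda i=i: (x for x in count(i, 3)) for i in range(3)]
answer = levin_style_search(procedures, verify=lambda x: x * x == 144)
print(answer)  # 12
```

The point of the phase schedule is exactly the "gigantic constant factors" mentioned above: correctness of the returned answer is easy to verify, but the slowdown relative to the best single procedure is exponential in that procedure's index.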

2 Answers

5

There's no general way, but only a wealth of techniques. Here are some examples:

  • You prove that $f(n)$ is the minimum time required to read the input. For example, the shortest-path problem requires $\Omega(V + E)$ time, since any correct algorithm must examine the whole graph.
  • You prove that an algorithm performing fewer than $f(n)$ operations cannot gather enough information about the input to always output the correct solution. Thus, one can change the input in a way that changes the result without the algorithm "noticing" the difference. An example of this is the proof that comparison sorting requires $\Omega(n \log n)$ comparisons.
  • You prove that an algorithm running in less time would imply the existence of an algorithm solving some other problem faster than that problem's known lower bound (e.g. sorting in $o(n \log n)$). This is achieved via reduction.
  • You prove that in the worst case the output itself has size $f(n)$, so no algorithm can run faster than the time it takes to write the output down. For example, outputting the $n^{\text{th}}$ Fibonacci number cannot be done in less than $O(2^{n})$ steps, since the $n^{\text{th}}$ Fibonacci number is $\sim \phi^{n}$, where $\phi \approx 1.6$.

There are more techniques, but those are some examples off the top of my head.
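The reduction technique in the third bullet can be illustrated with the classic reduction from sorting to the planar convex hull: lift each number $x$ to the point $(x, x^2)$; every lifted point is a vertex of the lower hull, and the hull lists them in increasing $x$ order, so a convex-hull algorithm running in $o(n \log n)$ would sort in $o(n \log n)$, contradicting the comparison-sort bound. A sketch (the hull routine used here happens to sort internally, but the argument applies to any hull algorithm; the inputs are assumed distinct):

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Lower convex hull via Andrew's monotone chain, left to right."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

def sort_via_hull(xs):
    # Lift to the parabola y = x^2: the points are in convex position,
    # so the lower hull contains all of them, in increasing x order.
    return [x for x, _ in lower_hull([(x, x * x) for x in xs])]

print(sort_via_hull([3, 1, 4, 5, 9, 2, 6]))  # [1, 2, 3, 4, 5, 6, 9]
```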

EDIT: As pointed out in the comments, the $n^{\text{th}}$ Fibonacci number has $\Theta(n)$ bits (exponentially many in the bit length $\log n$ of the input $n$), so the correct lower bound from the output size is $\Omega(n)$, not $\Omega(2^{n})$.
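To see the corrected bound concretely, here is a sketch (relying on Python's arbitrary-precision integers) that computes $F(n)$ with the standard fast-doubling recurrences and checks that the bit length of the answer, and hence any algorithm's running time, grows linearly in $n$:

```python
import math

def fib(n):
    """Fast-doubling Fibonacci: O(log n) arithmetic operations, but the
    operands grow to Theta(n) bits, and just writing the answer down
    already takes Omega(n) time."""
    def fd(m):  # returns (F(m), F(m+1))
        if m == 0:
            return (0, 1)
        a, b = fd(m >> 1)
        c = a * (2 * b - a)   # F(2k)   = F(k) * (2*F(k+1) - F(k))
        d = a * a + b * b     # F(2k+1) = F(k)^2 + F(k+1)^2
        return (d, c + d) if m & 1 else (c, d)
    return fd(n)[0]

n = 10000
bits = fib(n).bit_length()
# F(n) ~ phi^n / sqrt(5), so the output has about n * log2(phi) ~ 0.694 n bits
print(bits, round(n * math.log2((1 + 5 ** 0.5) / 2)))
```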

  • 0
    @Qiaochu Complexity of matrix multiplication does not play a role here; we are multiplying matrices of fixed size, so this takes a constant number of arithmetic operations. The problem is with integer multiplication, which can be done using FFT; the [fastest currently known method](https://en.wikipedia.org/wiki/F%C3%BCrer%27s_algorithm) takes $n \log n \, 2^{O(\log^{\ast} n)}$ time.2012-01-21
4

Sorting by comparisons can't be done in $o(n\log n)$ because $k$ comparisons can't distinguish among more than $2^k$ orderings, there are $n!$ possible orderings, and Stirling's formula shows that $\log(n!)$ is of order $n\log n$. (edited in response to spot-on comments by Raphael)
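This counting argument can be checked empirically (a hypothetical demo; `count_comparisons` instruments Timsort, the comparison sort behind Python's `sorted`, with a counting comparator):

```python
import math
import random
from functools import cmp_to_key

def count_comparisons(xs):
    """Sort a copy of xs, returning how many element comparisons
    Python's sorted() (Timsort, a comparison sort) performs."""
    n_cmp = 0
    def cmp(a, b):
        nonlocal n_cmp
        n_cmp += 1
        return (a > b) - (a < b)
    sorted(xs, key=cmp_to_key(cmp))
    return n_cmp

random.seed(0)                       # deterministic demo input
n = 1000
xs = random.sample(range(n), n)      # a random permutation of 0..n-1
# k comparisons distinguish at most 2^k orderings, so any comparison sort
# needs at least ceil(log2(n!)) comparisons in the worst case.
bound = math.ceil(math.log2(math.factorial(n)))
print(count_comparisons(xs), ">=", bound)
```

For $n = 1000$ the information-theoretic bound $\lceil\log_2(n!)\rceil$ is 8530, and the observed comparison count sits above it, as the theorem requires of any correct comparison sort on typical inputs.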

  • 3
    "faster than $O(\cdot)$" is not very meaningful; you mean "in $o(\cdot)$". Note also that the famous bound only applies to comparison sorts.2012-01-21