
So it is clear that the nondeterministic versions of computational models such as the Turing machine are equivalent in "power" to the deterministic versions.

Other than showing this fact, what would be the "purpose" in learning nondeterminism?

My instinctive answer to this is that the nondeterministic version of a TM is easier to describe than the deterministic version.

Would there be other reasons that you can think of?

2 Answers


Non-determinism underlies the all-important P vs. NP question.

Non-deterministic computation has a different interpretation. Consider, for example, primality testing. It is not obvious how to test for primality (although provably efficient tests are now known), but it is easy to become convinced that a number is composite: all you need to do is find some non-trivial factorization. This is a property which is easy to verify given a witness, in this case the non-trivial factorization. We can therefore build a very simple non-deterministic verifier for compositeness:

    int verify_composite(int n, int a, int b) {
        return (a > 1) && (b > 1) && (n == a * b);
    }

This algorithm is much simpler and faster than the known algorithms deciding primality. Now in principle, a TM could go over all possible factorizations; however, that is pretty slow, and people conjecture that you can't do it efficiently (even though you can do better than exhaustive search). That illustrates the power of non-determinism.

There are lots of combinatorial problems which are easy to check given a witness, in a one-sided fashion. One example is the Traveling Salesman Problem: what is the most efficient route a traveling salesman should take if she wishes to visit all major cities in New York? It is easy to give a witness showing that there's a route taking X days: just present the route. However, it is not so clear whether the reverse is possible; in fact, people strongly believe it is impossible to give an efficiently verifiable witness that there is no route taking X days.

Problems which are efficiently solvable are captured by the complexity class P. Decision problems which are efficiently verifiable are captured by NP. The problem of deciding whether there's a route for the salesman taking at most X days is thus in NP, but the opposite problem is not believed to lie in NP (and in particular, not in P).

In the early 1970s, Cook showed that one particular problem captures all of NP: if that problem is in P, then all of NP is in P. Such problems are called NP-complete. Shortly afterwards, Karp found that a lot of combinatorial problems in NP reduce to each other, in a sense which implies that if one of them is in P, then all of them are in P. One of the problems studied by Karp was the one used by Cook, so all of the problems Karp studied are NP-complete.

People strongly believe that NP-complete problems are not in P; this is the well-known P vs. NP problem (people believe the two classes are different). Thus if your problem is NP-complete, you know that it is hard to solve in the worst case, though many such problems tend to be solvable (or at least approximately solvable) in practice, i.e. in the average case.

  • A non-deterministic *poly-time* TM. That means that there is a polynomial $P$ such that on input of size $n$, *each* execution of the TM ends in time $P(n)$. (2011-02-10)

Reason A: For some other useful computation models, nondeterminism makes a difference, for example pushdown automata.

Reason B: Nondeterminism changes runtime results. In theory, consider $P$ vs $NP$; in practice, consider (non)deterministic Quicksort.