
At college years ago, I enjoyed the course on theoretical computer science. But something always bugged me about the results: they were always stated in terms of some O(n) bound on a single dominating operation, say, the number of multiplications. Although I don't mind moving between levels of abstraction, there was something wholly unsatisfying about this sort of answer, because at best it would involve some mysterious "m" factor for dealing with levels of number precision, and so on. I'm curious about what happens when the abstractions are pulled away.

My question is this: Is there a "standard" computational model that can be applied to pick up where this level of abstraction leaves off? I realize that the representation of numbers has a profound impact on how long it takes to do calculations with them, but I'm looking for something along these lines: suppose I determine that I need to do 20 multiplications, 30 additions, and 22 comparisons between integers (which I can bound between 0 and some value x, and whose distribution I can perhaps even tell you). I really feel I should be able to get a better sense of the true "quantity" of work that needs to be accomplished, even if the number representations in this "standard" model are non-optimal.
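One concrete candidate for such a model is bit complexity: charge each operation by the number of elementary bit operations it requires, given the bit length of the operands. The sketch below is purely illustrative (the function name and the schoolbook cost assumptions, multiplication ~ m² bit operations and addition/comparison ~ m for m-bit operands, are my choices, not a standard from the question):

```python
def bit_ops_estimate(num_mults, num_adds, num_cmps, max_value):
    """Rough bit-operation count for arithmetic on integers in [0, max_value],
    assuming schoolbook algorithms: multiplication costs ~m^2 bit operations,
    addition and comparison ~m, where m is the operands' bit length."""
    m = max(max_value.bit_length(), 1)  # bits needed to represent the operands
    return num_mults * m * m + num_adds * m + num_cmps * m

# The question's hypothetical workload: 20 multiplications, 30 additions,
# 22 comparisons on integers bounded by x = 10**6 (so m = 20 bits).
print(bit_ops_estimate(20, 30, 22, 10**6))
```

Faster multiplication algorithms (Karatsuba, FFT-based) would change the m² term, which is exactly why results are usually quoted with that "m" factor left abstract.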

Further, if such a model does exist, are there any relations between it and information theory, such that this "quantity" of work can be tied back in a roundabout way to the notion of entropy? I just have this feeling like we should be able to model algorithmic complexity problems in a similar way to movements of energy in a physical system and I'm trying to find the clues to get there.

1 Answer


If you want to analyze, say, computing the determinant of an $n\times n$ matrix by row operations and expansion along columns, you can write down exactly how many additions and multiplications it will take. But nobody ever needs that level of precision. It's good enough to know that the number of multiplications is $n^3+O(n^2)$ (say; I don't claim that's the right formula, I'm just using it for purposes of illustration), because what is of interest is the behavior when $n$ is large, and when $n$ is large the contribution of the $n^2$ term is negligible and the work done on the additions is negligible. I don't find that unsatisfying at all; I find $n^3+O(n^2)$ more satisfying than, say, $n^3+(83/16)n^2-(42/113)n+(93/257)$.
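To illustrate the "you can count exactly" point: here is a sketch that computes a determinant by ordinary Gaussian elimination with partial pivoting while tallying the multiplications/divisions and additions/subtractions it performs. (This is my own illustrative variant, not one fixed by the answer; for this particular elimination scheme the multiplication count works out to exactly $n + n(n-1)/2 + (n-1)n(2n-1)/6$, i.e. $n^3/3 + O(n^2)$.)

```python
def det_count_ops(a):
    """Determinant via Gaussian elimination with partial pivoting,
    counting multiplications/divisions and additions/subtractions.
    Returns (determinant, mult_count, add_count)."""
    n = len(a)
    a = [row[:] for row in a]  # work on a copy
    mults = adds = 0
    det = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest remaining entry in column k up.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0:
            return 0.0, mults, adds
        if p != k:
            a[k], a[p] = a[p], a[k]
            det = -det  # each row swap flips the sign
        det *= a[k][k]
        mults += 1
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            mults += 1  # one division per eliminated row
            for j in range(k + 1, n):
                a[i][j] -= factor * a[k][j]
                mults += 1
                adds += 1
    return det, mults, adds
```

Running it on the $3\times 3$ identity gives determinant 1 with 11 multiplications and 5 additions, matching the closed-form count above for $n=3$.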

  • I agree that the usefulness in itself is somewhat limited. My ultimate goal is to look at complexity from a different viewpoint, where the operations are part of a unified framework rather than disparate, as a first step toward a larger theory. Given the lack of activity on this post, I will accept your answer based on the comment you gave above. Thanks for your answer @GerryMyerson, and let me know if you ever encounter anything more. 2011-11-15