
My question is very general, and the kind of answer I look for would be as low level as it could be. I think I may illustrate my query more succinctly with an example.

In propositional logic, you have the modus ponens rule, which takes two hypotheses and asserts a conclusion. The hypotheses are stated as "static" pieces of code: they are just expressions, like the axioms of any theory, and the same applies to the (detached) assertion. Finally, the rule itself is also a piece of code, "static" in the sense loosely stated above. In essence, a computation using the modus ponens rule develops this way:

$\text{1) First hypothesis: } \vdash P \\ \text{2) Second hypothesis: } \vdash P \rightarrow Q \\ \text{3) Modus ponens rule: } \vdash \{\vdash P \quad \vdash P \rightarrow Q \quad \vdash Q\} \\ \text{4) Assertion: } \vdash Q$

In the example above I used the turnstile symbol to denote the notion of "provable" or "given".

Even though every step makes up a piece of code, something which could be equated to a string of bits on a memory tape ("static"), clearly between steps 3) and 4) the "detaching" of the assertion takes place. This action of detaching (the computation) appears as an irreducibly "dynamical" process: something which, even if described by the four static written steps, seems not to be actually "captured" by them.
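To make the example concrete, here is a naive sketch of the four steps, with hypotheses as plain strings and the rule as a function (all names and the string encoding are illustrative, not part of any standard formalism):

```python
def modus_ponens(p, implication):
    """Given P and a formula of the form 'P -> Q', "detach" and return Q."""
    antecedent, arrow, consequent = implication.partition(" -> ")
    if arrow and antecedent == p:
        return consequent
    raise ValueError("modus ponens does not apply")

# Steps 1) and 2): the hypotheses are "static" data.
h1 = "P"
h2 = "P -> Q"

# Step 3) is also code; step 4), the detaching, only happens when the
# function is actually *called* -- the "dynamical" part of the process.
assertion = modus_ponens(h1, h2)
print(assertion)  # Q
```

Note that the rule's definition and its application are different things: the `def` block is just more static text until something invokes it.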

Extrapolating from this overly simplistic example, I find it troubling to reconcile the idea of computation-as-an-object (for example, a list of instructions for performing a given procedure, i.e. a computer program) with the notion of computation-as-an-action (the dynamical act of producing and spitting out the output from the input data and the procedural rules).

Is there something I'm missing, misinterpreting, or plainly wrongly stating here?

  • @MarcvanLeeuwen: I'd appreciate it if you could elaborate on the fundamental difference between hypotheses and rules of inference, which makes it wrong to "place them on the same level". I tend to view everything needed for a computation to be performed as pieces of code (the encoding as a bit-string applies equally to premises and rules, with the obvious differences in syntax, and nothing else), because that's how a computer program is written down to machine code; but then I get troubled by the concept of "change" associated with the computational process itself, from input to output. (2012-08-29)

3 Answers


Perhaps the real issue here lies deeper, and indeed doesn't particularly concern computers. Get clear about the underlying general issue, and the more specific puzzle about computers should evaporate.

In a paper in Mind in 1895 ("What the Tortoise Said to Achilles"), long before the era of computers, Lewis Carroll posed the same basic puzzle. Suppose the Tortoise is given the premisses (i) $P$ and (ii) if $P$ then $Q$, but refuses to accept the conclusion $Q$.

You try to help him out, patiently explaining that (iii) if $P$ and if $P$ then $Q$, then $Q$. The Tortoise responds, "OK, if you say so. I now accept (i), (ii), and (iii). But I still don't accept $Q$".

Well, you could offer him (iv) if $P$, and if $P$ then $Q$, and if $P$ and if $P$ then $Q$ then $Q$, then $Q$. The Tortoise, a cooperative fellow, says "You might be right. OK, I trust you. I'll accept premiss (iv) too. Now I've got four premisses. But what compels me to accept $Q$?"

You can see that giving the Tortoise yet another premiss isn't going to help!

What's the moral? We might put it this way: accepting a rule of inference like modus ponens should not be confused with accepting a proposition like (iii). Or we can put it this way (as Gilbert Ryle did in his famous 1945 paper "Knowing How and Knowing That"): knowing a rule of inference is a matter of knowing how to infer correctly, and that's knowing how to do something, which is different from knowing that something is the case, e.g. knowing that (i) and (ii) or indeed (iii) are true.

Not all knowledge is straightforwardly propositional knowledge; some is, as we might now say, procedural knowledge, knowledge about what to do. As the case of the poor Tortoise shows, getting more propositional knowledge doesn't in itself give you knowledge about what to do. And what goes for Tortoises goes for computers. Giving the computer more data isn't the same as teaching it to do stuff with the data. Feeding it the information that $P$ and that if $P$ then $Q$ is one thing; teaching it to draw inferences is something else. (That's why using the turnstile notation in setting up the problem both for assertions of data and for the specification of a rule of inference is so horribly misleading.)
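The Tortoise's predicament can be sketched in code (illustrative names, naive string encoding): the premisses are inert data, and adding more of them never yields $Q$; only a procedure that actually applies a rule does:

```python
# Premisses are just data sitting in a set.
premisses = {"P", "P -> Q"}

# Adding Carroll's (iii) as yet another premiss changes nothing by itself:
premisses.add("(P and (P -> Q)) -> Q")
print("Q" in premisses)  # False: still no conclusion, just more data

def forward_chain(premisses):
    """Procedural knowledge: repeatedly *apply* modus ponens to the data."""
    facts = set(premisses)
    changed = True
    while changed:
        changed = False
        for f in list(facts):
            head, sep, tail = f.partition(" -> ")
            if sep and head in facts and tail not in facts:
                facts.add(tail)   # the act of detaching the consequent
                changed = True
    return facts

print("Q" in forward_chain(premisses))  # True
```

The gap between the two `print` lines is exactly the gap between propositional and procedural knowledge: the second becomes `True` only because something *does* something with the data.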

  • I loved your paper on induction and predicativity. Thanks for writing it, and thanks for posting here. (2012-09-17)

This is a bit off the cuff, so hopefully it won't get too wobbly.

The first step in defining computation is to fix a model. This is essentially what Church and Turing did with the $\lambda$-calculus and Turing machines, respectively. Being a certain flavour of computer scientist, I tend to think in terms of Turing machines, so I'll reference them, but you can really plug in any model of computation (and there's no need to invoke the Church-Turing thesis in particular; i.e., we don't need at this point to know what truly encompasses all computation, just that there is something that does).

A Turing machine (TM) is a collection of states with a transition function that describes what happens when a given symbol is encountered in a given state. It may or may not halt; it may be deterministic or not; etc. Now I think the key point is that the TM itself is the computation. The description of what the TM does isn't; it's just a description.
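A minimal sketch of such a machine, under the usual conventions (the toy transition table below, which flips bits until it reaches a blank, is an assumed example for illustration):

```python
def run_tm(delta, tape, state="q0", halt="halt", blank="_", steps=1000):
    """Run a deterministic TM: delta maps (state, symbol) to
    (new_state, written_symbol, move), where move is 'L' or 'R'."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == halt:
            break
        sym = tape.get(head, blank)
        state, tape[head], move = delta[(state, sym)]
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# A toy machine: complement every bit, halt on blank.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

print(run_tm(flip, "0110"))  # 1001
```

The dictionary `flip` is the static description; the loop inside `run_tm` stepping the head and rewriting cells is the computation it describes.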

Relating this to a computer and a program: the computer is (roughly) a universal TM; the program, were it able to run without the computer, would be a TM, but when run on a computer it is just input. The program isn't the computation; the computation is the computer producing the output from the input.

In your example, the list you give is really (depending on your perspective) a description of the "current" state of the computation at 4 given points. The computation is the process that, given the input, passes through those states.
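The snapshot view can be sketched like this (hypothetical names, naive string encoding): the recorded snapshots correspond to the written steps, while the running of the function is the computation itself:

```python
def prove_q():
    """Record the state of a tiny 'database of assertions' at each step."""
    snapshots = []
    db = set()
    db.add("P");       snapshots.append(sorted(db))  # step 1: first hypothesis
    db.add("P -> Q");  snapshots.append(sorted(db))  # step 2: second hypothesis
    # steps 3)-4): applying the rule is what moves between states
    if "P" in db and "P -> Q" in db:
        db.add("Q");   snapshots.append(sorted(db))  # the detached assertion
    return snapshots

states = prove_q()
print(states[-1])  # ['P', 'P -> Q', 'Q']
```

The list `states` is a static transcript, like the four written lines in the question; the call `prove_q()` is the process that passed through them.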

  • I assert that when you have access to a TM which achieves universality, dealing only with formal descriptions of TMs is justified, because you can always feed the description to your UTM as a string of bits. Then you are free to blend together the notions of "object" and "description", because the computational behavior of the TM described and the computational behavior of the UTM fed with the description of that TM are indistinguishable. (2012-08-30)

It seems to me that your confusion comes from not distinguishing between the execution (of an algorithm) and the code (of an algorithm).

An object like "a car" is different from an act performed by it like "driving a car".

  • It's not that I can't distinguish the concepts; my issue is accepting the fact that they cannot be "blended" together in any satisfactory way. For example, check the "Proof Explorer" presentation of the axioms of FOL on Norman Megill's [Metamath site](http://us.metamath.org/mpegif/mmset.html); the only two rules he introduces (MP and Gen) use symbols not included in the formalism presented. The whole idea of "having a closed description of a formalism" seems to fail, as you always need an external input to perform the computation, i.e. the execution concept isn't self-contained. (2012-09-17)