
I have encountered the following situation several times while doing mathematics: we have a finite "sequence" $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$ of objects with the (meta-mathematical) property that the value of $p$ isn't known beforehand; we obtain it only if we "run"/"compile" the "sequence" $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$ (hence the quotation marks, since this "sequence" isn't well defined if its domain isn't known at the time of its definition). So this is actually an algorithm, since an algorithm's "runtime" likewise doesn't have to be specified at the beginning.

  1. How can we get around this difficulty, i.e. how can we rigorously define $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$?

    I always thought that we translate the "sequence" $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$ into another mathematical object for which one doesn't need to specify in advance how "long" it can run. For example, I could mimic one step of $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$ by a function $f$, such that $a_{2}=f(a_{1}),\ a_{3}=\left(f\circ f\right)\left(a_{1}\right)$ and so on, and let $f$ attain some specific value $\alpha$ if the iteration wasn't successful. Then I could just define my $p$ as the smallest number of iterations after which $f$ equals $\alpha$, and now I could rigorously define $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$.

  2. Is the above idea correct? Do you know of other such methods?

  3. There are of course algorithms for which, knowing only the input, there is a closed expression (involving only certain functions, not arbitrary ones like my $f$ above) that tells me how long the algorithm will run, but I don't think that is the case for all algorithms. Is that true? Can someone contribute something interesting? (I know I'm going slightly off-topic here.)

  4. Is there an area of mathematics that deals with questions like 1.-2.? Maybe proof theory? What about questions like 3.?

Note: If you ask me for examples of such "sequences" $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$: the Euclidean algorithm, or the greedy algorithm for a minimal spanning tree of a graph. Actually, take any maths book that contains the word "algorithm", since a (well-defined) algorithm is nothing but a finite sequence for which we generally don't yet know how long it will run. (I haven't checked whether for these two algorithms there is a simple closed expression, in terms of "classic" functions, that determines beforehand how long they run; if there is, pretend you don't know it and have to take the route described in 1. to get the number of iterations until the algorithm terminates.)
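To make the Euclidean-algorithm example concrete, here is a minimal Python sketch (the function name `euclid_sequence` is mine, not standard): the "sequence" of pairs is materialized only by running the algorithm, and its length $p$ is available only after termination.

```python
def euclid_sequence(a, b):
    """Run the Euclidean algorithm on (a, b), recording every
    intermediate pair.  The length p of the resulting "sequence"
    is known only once the run has terminated."""
    seq = [(a, b)]
    while seq[-1][1] != 0:          # termination condition: second entry is 0
        a, b = seq[-1]
        seq.append((b, a % b))      # one step of the algorithm
    return seq

steps = euclid_sequence(48, 18)
p = len(steps)                      # p is only available now, after the run
```

Here `steps` is exactly the finite "sequence" $(a_n)_{n\in\{1,\ldots,p\}}$ from the question, with $p$ determined by the data rather than fixed in advance.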

  • It seems like you're unhappy with the idea of a sequence whose length we don't know. Such a sequence is still perfectly fine. For example, consider the sequence of twin primes. It might be finite, or it might be infinite. We don't know. But that doesn't mean that the sequence doesn't exist! (2012-12-07)

2 Answers


How can we get around this difficulty, i.e. how can we rigorously define $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$?

I always thought that we translate the "sequence" $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$ into another mathematical object for which one doesn't need to specify in advance how "long" it can run. For example, I could mimic one step of $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$ by a function $f$, such that $a_{2}=f(a_{1}),\ a_{3}=\left(f\circ f\right)\left(a_{1}\right)$ and so on, and let $f$ attain some specific value $\alpha$ if the iteration wasn't successful. Then I could just define my $p$ as the smallest number of iterations after which $f$ equals $\alpha$, and now I could rigorously define $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$.

What you've described is the basic idea of a Turing machine. A Turing machine transforms one tape state ($a_n$) into another tape state ($a_{n+1}$) according to a fixed, finite state-transition table ($f$ is well defined given any valid tape state and head state, except for the terminal states) and an internal state of the head.

It should be noted that the hypothetical Turing machine has an infinitely long tape; due to physical limitations, however, real-life computers always have a finite amount of RAM to work with (a real-life computer is only a linear-bounded automaton, which is strictly weaker than a Turing machine because it lacks infinite memory).

You could then define the sequence $\left(a_{n}\right)_{n\in\left\{ 1,\ldots,p\right\} }$ such that $a_k$ is the state of the Turing machine at time $k$.
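This step-function view can be sketched generically in Python (the names `run`, `step`, and `halted` are my own choices, not standard terminology): iterate a transition function from an initial state until a terminal-state test succeeds, and the recorded trace is the sequence of machine states $a_1, a_2, \ldots$

```python
def run(step, halted, state):
    """Iterate a transition function `step` from an initial `state`,
    recording the state at every time k, until the terminal-state
    predicate `halted` becomes true."""
    states = [state]
    while not halted(states[-1]):
        states.append(step(states[-1]))
    return states

# A toy machine: repeatedly halve an even number, halting at an odd one.
trace = run(lambda n: n // 2, lambda n: n % 2 == 1, 40)
```

How many states the trace contains is not supplied up front; it emerges from running the machine, exactly as with $p$ in the question.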


Is the above idea correct? Do you know of other such methods?

As for alternative formulations, there are the Lambda Calculus and the μ-recursive functions. It was later proven that even though they look very different at first sight, they are computationally equivalent to the Turing machine (i.e. whatever is computable on a Turing machine is computable in the Lambda Calculus and with μ-recursive functions, and vice versa).
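The μ-recursive formulation is particularly close to the question's idea of "the smallest $p$ such that the iteration reaches $\alpha$": its minimization operator μ returns the least argument at which a function hits a designated value. A minimal sketch in Python (the name `mu` and the encoding "0 means success" are illustrative choices):

```python
def mu(f):
    """Unbounded minimization: return the least n in {0, 1, 2, ...}
    with f(n) == 0.  Loops forever if no such n exists, which is
    why mu-recursive functions are in general only partial."""
    n = 0
    while f(n) != 0:
        n += 1
    return n

# Least n with n^2 >= 10 (encode "n^2 >= 10" as "the test returns 0"):
least = mu(lambda n: 0 if n * n >= 10 else 1)
```

The possibility of divergence in `mu` is exactly the source of partiality in this model, mirroring an algorithm that never reaches its terminal value $\alpha$.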

There are also computation models that are NOT computationally equivalent to the Turing machine, for example the finite state machine and the pushdown automaton; these models are strictly weaker than the Turing machine (i.e. there are algorithms that a Turing machine can compute but that these machines cannot).


There are of course algorithms for which, knowing only the input, there is a closed expression (involving only certain functions, not arbitrary ones like my $f$ above) that tells me how long the algorithm will run, but I don't think that is the case for all algorithms. Is that true? Can someone contribute something interesting? (I know I'm going slightly off-topic here.)

I think you should read about the Halting Problem; an excerpt from Wikipedia:

In computability theory, the halting problem can be stated as follows: "Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever". This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, what became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem.

In other words, Alan Turing proved that it is not always possible to know in advance how long a program will run on a given input.
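A concrete (and famous) illustration of a runtime with no known closed-form expression is the Collatz iteration; note this is an illustration I am adding, not part of Turing's proof, and whether the loop below terminates for every input is precisely the open Collatz conjecture:

```python
def collatz_steps(n):
    """Number of Collatz iterations needed to reach 1 from n.
    No closed-form expression for this count is known, and it is
    not even proved that the loop terminates for every n >= 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

So even for a two-line program, predicting the running time from the input can be beyond current mathematics.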


Is there an area of mathematics that deals with questions like 1.-2.? Maybe proof theory? What about questions like 3.?

Computer Science in general, or more specifically Computability Theory (a.k.a. Recursion Theory). There is also the study of undecidable problems.


Let $C$ be a class (set) of objects and consider the class (set) of finite sequences with members in $C$; we can refer to the latter class as $C^*$. Each $s\in C^*$ is a finite (possibly empty) sequence of elements of type $C$.

Just as the class of natural numbers "comes with" a "built-in" successor operator that takes a number $n$ and returns the number $n+1$, our class $C^*$ comes with two built-in operators: a length operator $\ell$, returning for each $s\in C^*$ its length $s.\ell$, and an index operator, taking an element $s$ of $C^*$ and a natural number $n\in\mathbb{N}_0$ and returning the element of $s$ (of type $C$) at the $n$th place, $s_n$ (if $s.\ell\leq n$, then $s_n$ is some unspecified $C$ value; in other words, in this case $s_n$ is undefined). Now you can define, for each $n\in\mathbb{N}_0$, the set $C^n$ of $C$-sequences of length $n$ thus: $C^n:=\{\, s\in C^*\mid s.\ell=n \,\}$.
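This setup can be sketched in Python, modelling $C^*$ as tuples over a finite $C$ (the helper names `ell` and `index` are mine, chosen to mirror the two operators in the answer):

```python
from itertools import product

def ell(s):
    """The length operator: s.l for a sequence s in C*."""
    return len(s)

def index(s, n):
    """The index operator s_n (0-based).  If ell(s) <= n the value
    is undefined; here we model "undefined" as None."""
    return s[n] if n < ell(s) else None

C = {'a', 'b'}
# C^n = { s in C* : ell(s) = n }, enumerated here for n = 2:
C2 = set(product(C, repeat=2))
```

Because $C$ here is finite, $C^2$ can be enumerated outright; for the question's setting, the point is only that membership of $s$ in $C^n$ is a property checked via $\ell$, not something fixed when $s$ is written down.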

  • In most formal definitions, an algorithm must always have a "terminating condition", i.e. a condition that, when reached, marks the end of the algorithm, e.g. when a certain character is found in the sequence (e.g. EOF or your α). A sequence of steps that does not have a terminating condition is usually called a computational method/computational process. (2013-01-07)