Is there a counterexample or theorem for the question below?
Examples:
Let's look at some examples first to get a feel for it. We'll let $n$ be the length of the input in characters. All the examples are recognizers: they produce no output other than "accept" or "reject".
Linear search. We're given a sequence of characters. Each input character is read; if it matches the character being searched for, we terminate with success. If not, that character can be thrown away, since it doesn't match, and we move on to the next character of the input.
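As a sketch of this example (the function name and signature are my own, not part of the question), the recognizer can be written so that each character is consumed from a stream and immediately discarded, making the computation irreversible in the sense discussed below:

```python
def linear_search_recognizer(stream, target):
    """Streaming recognizer: accept iff `target` appears in the input.

    Each character is examined once and then discarded -- the machine
    configuration never retains past input, so earlier characters
    cannot be reconstructed from the current configuration.
    """
    for ch in stream:
        if ch == target:
            return True   # accept as soon as a match is read
    return False          # reject after exhausting the input
```

For instance, `linear_search_recognizer(iter("abcx"), "x")` accepts, while `linear_search_recognizer(iter("abc"), "x")` rejects.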
Deterministic context-free parsing. An input word is read and either shifted onto the stack or reduced with the existing stack, to form a new stack. Since the parser is just a recognizer, we don't have to remember the original input word once it has been reduced into a nonterminal.
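A full LR parser is too long to inline, but a toy stand-in (balanced parentheses, my choice of grammar) shows the same shift/reduce behavior: once a pair is reduced, those input symbols are gone and only the current stack survives.

```python
def balanced_paren_recognizer(stream):
    """Deterministic shift-reduce sketch for balanced parentheses.

    '(' is shifted onto the stack; ')' triggers a reduction that pops
    the matching '('. After a reduction, the two matched input symbols
    are no longer recoverable from the machine configuration.
    """
    stack = []
    for ch in stream:
        if ch == "(":
            stack.append(ch)   # shift
        elif ch == ")":
            if not stack:
                return False   # nothing to reduce with: reject
            stack.pop()        # reduce: discard the matched pair
        else:
            return False       # symbol outside the alphabet
    return not stack           # accept iff everything reduced away
```

Here `"(()())"` is accepted while `"(()"` and `")("` are rejected.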
Final character matching. This is an artificial example to clarify the question. The format is $n-1$ input characters that don't matter and a final character that does. If the final character matches a fixed constant character, we succeed, otherwise we fail. Again, we can simply throw away all non-final characters as they're read since they don't matter.
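A minimal sketch of this example (the constant character `"z"` is my arbitrary choice) keeps only the most recently read character, overwriting and thereby destroying everything before it:

```python
def final_char_recognizer(stream, constant="z"):
    """Accept iff the last input character equals `constant`.

    All earlier characters are read and thrown away: the machine
    retains only the most recent character, so the first n-1
    characters are unrecoverable by the time the answer is produced.
    """
    last = None
    for ch in stream:
        last = ch   # overwrite: earlier input is lost
    return last == constant
```

So `final_char_recognizer(iter("abz"))` accepts and `final_char_recognizer(iter("zba"))` rejects.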
In each case, gradual progress is made toward a solution each time an input character is read. By "gradual progress" I mean a gradual transformation of the machine configuration toward a final state such that, at some step of the transformation, enough information is lost that the original input can no longer be reconstructed: the computation becomes irreversible. Is the following paragraph true for every polynomial-time recognizer? Is there a theorem to back it up, or is there a counterexample?
Question:
(Notation: $L$ is a language in P, and $M$ is a polynomial-time recognizer for $L$.) For every $L$ there exists an $M$ (subject to the constraints below) that makes progress (as defined above) during the computation. More precisely: there is no $L$ in P such that every constrained (see below) $M$ for $L$ makes no progress during the first $n-1$ characters of the computation and then produces the answer in a "sudden flash" on reading the final input character.
We are given two constraints:
Irreducibility. There is no polynomial-time reduction from a "sudden flash" machine to a gradual one, so we can't just build a wrapper machine that batches up the input and, at the end, hands it to a gradual machine to carry out the actual computation.
Incompressibility. We are also given that there is no polynomial-time transformation that compresses an input of $n$ characters to a size substantially less than linear, i.e., to size $o(n)$; see example 3 above (this rules out highly redundant inputs).
EDIT: In one of the comments I suggested a proviso; it is no longer necessary. For string matching, I noted that there is an algorithm which simply throws away initial characters as long as they match, thus providing irreversibility. For a constant test string, I had suggested a finite-state solution, but that is unnecessary, since there is a simple polynomial-time reduction to the string-matching strategy just mentioned. So the proviso I suggested is withdrawn; the question stands.