I am trying to learn computational complexity from Sipser's textbook "Introduction to the Theory of Computation".
The problem is that I don't fully understand how a Turing machine works. Here is an example from the textbook.
Let's analyze the algorithm we gave for the language $A=\{0^k1^k | k \geq 0\}$. We repeat the algorithm here for convenience.
$M_1 =$ "On input string $w$:
1. Scan across the tape and reject if a 0 is found to the right of a 1.
2. Repeat if both 0s and 1s remain on the tape:
3. Scan across the tape, crossing off a single 0 and a single 1.
4. If 0s still remain after all the 1s have been crossed off, or if 1s still remain after all the 0s have been crossed off, reject. Otherwise, if neither 0s nor 1s remain on the tape, accept."
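To see concretely where the steps add up, here is a small Python sketch of how I picture $M_1$ running (my own model, not from the book; the function name `simulate_M1` and the `'x'` marker for crossed-off cells are just my choices). The tape is a list, and every individual head move is counted.

```python
def simulate_M1(w):
    """Model of M1 on a single tape: the tape is a list of characters,
    'x' marks a crossed-off cell, and every individual head move is
    counted so we can see how the total grows with the input length."""
    tape = list(w)
    moves = 0

    # Stage 1: one left-to-right scan; reject if a 0 appears after a 1.
    seen_one = False
    for c in tape:
        moves += 1
        if c == '1':
            seen_one = True
        elif c == '0' and seen_one:
            return False, moves

    # Stages 2-3: while both 0s and 1s remain, do one full scan that
    # crosses off a single 0 and a single 1.
    while '0' in tape and '1' in tape:
        crossed_zero = crossed_one = False
        for i, c in enumerate(tape):       # scan left to right
            moves += 1
            if c == '0' and not crossed_zero:
                tape[i] = 'x'
                crossed_zero = True
            elif c == '1' and not crossed_one:
                tape[i] = 'x'
                crossed_one = True
        moves += len(tape)                 # walk back to the left end

    # Stage 4: accept iff no 0s and no 1s are left.
    return ('0' not in tape and '1' not in tape), moves


for k in (10, 20, 40):
    accepted, moves = simulate_M1('0' * k + '1' * k)
    print(k, accepted, moves)   # move count roughly quadruples when k doubles
```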
My question is: why do we need to repeat a full scan for each crossing? Why can't we just cross off all the 0s and all the 1s one by one in $O(n)$ and then continue to step 4? According to the textbook, each scan uses $O(n)$ steps and the scan is repeated $n/2$ times, so the algorithm runs in $O(n^2)$. But if each of the $n/2$ scans crosses off two symbols, one 0 and one 1, it seems the whole thing should be doable in linear time, so where does the extra factor of $O(n)$ come from?
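For comparison, this is what I mean by "cross them off in one pass", written as ordinary Python rather than a Turing machine (the name `check_in_one_pass` is mine). The two counters are exactly the part I am not sure a single-tape machine is allowed to keep.

```python
def check_in_one_pass(w):
    """What I mean by 'one pass': a single left-to-right sweep that keeps
    two counters instead of physically crossing symbols off.  This is
    RAM-style code, not a Turing machine: the counters zeros/ones can grow
    without bound, and I am not sure a finite control can store that."""
    zeros = ones = 0
    for c in w:
        if c == '0':
            if ones > 0:        # a 0 to the right of a 1
                return False
            zeros += 1
        elif c == '1':
            ones += 1
        else:
            return False        # symbol outside {0, 1}
    return zeros == ones


print(check_in_one_pass('000111'))    # True
print(check_in_one_pass('0001111'))   # False
```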
An improvement presented later in the textbook crosses off every other 0 and every other 1 on each scan (after checking that the number of remaining symbols is even), which improves the running time to $O(n \log n)$.
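Here is my rough Python sketch of that improved machine as I understand it (again a list-based model with my own names, not the book's exact description): each round first checks that the count of remaining 0s and 1s is even and rejects otherwise, then crosses off every other 0 and every other 1, so the surviving symbols halve each round and only about $\log_2 n$ rounds of $O(n)$ work are needed.

```python
def simulate_M2(w):
    """List-based model of the improved machine as I understand it:
    each round checks that the number of remaining 0s and 1s is even
    (reject if odd), then crosses off every other 0 and every other 1,
    so the surviving symbols halve each round."""
    tape = list(w)
    moves = 0

    # Stage 1: reject if a 0 appears to the right of a 1.
    seen_one = False
    for c in tape:
        moves += 1
        if c == '1':
            seen_one = True
        elif c == '0' and seen_one:
            return False, moves

    while '0' in tape and '1' in tape:
        # One scan: count the remaining 0s and 1s; reject if the total is odd.
        remaining = sum(1 for c in tape if c in '01')
        moves += len(tape)
        if remaining % 2 == 1:
            return False, moves

        # Another scan: cross off every other 0 and every other 1,
        # starting with the first of each.
        keep_zero = keep_one = False
        for i, c in enumerate(tape):
            moves += 1
            if c == '0':
                if not keep_zero:
                    tape[i] = 'x'
                keep_zero = not keep_zero
            elif c == '1':
                if not keep_one:
                    tape[i] = 'x'
                keep_one = not keep_one

    return ('0' not in tape and '1' not in tape), moves


for k in (8, 64, 512):
    accepted, moves = simulate_M2('0' * k + '1' * k)
    print(k, accepted, moves)   # grows roughly like n log n, not n^2
```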