Why does the Cooley-Tukey fast Fourier transform (FFT) algorithm take $O(n \log n)$ time? The book derives this from the fact that the evaluation step satisfies the recurrence:
$T(n) = 2T(n/2) + O(n)$
and then uses the Master Theorem to arrive at the $O(n \log n)$ bound. Could someone explain how the above recurrence was derived?
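For concreteness, here is a minimal sketch (my own, not taken from the book) of the recursive evaluation that the recurrence is describing. The two recursive calls correspond to the $2T(n/2)$ term, and the combining loop is the $O(n)$ term:

```python
import cmath

def fft(coeffs):
    """Evaluate the polynomial with coefficients `coeffs` at the n-th
    roots of unity, where n = len(coeffs) is assumed a power of two."""
    n = len(coeffs)
    if n == 1:
        return coeffs[:]
    # Two recursive calls on half-size polynomials -> the 2T(n/2) term.
    even = fft(coeffs[0::2])  # A_e at the (n/2)-th roots of unity
    odd = fft(coeffs[1::2])   # A_o at the (n/2)-th roots of unity
    # O(n) combining step: A(x) = A_e(x^2) + x * A_o(x^2), where the
    # points come in plus-minus pairs w, -w sharing the same square w^2.
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]           # A(w)
        out[k + n // 2] = even[k] - w * odd[k]  # A(-w)
    return out
```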
How I see it:
After one recursive step, you are dealing with twice the number of polynomials ($A_o(x)$ AND $A_e(x)$) at half the number of points (since the points come in plus-minus pairs), and each polynomial is half the size. So why isn't the time:
$T(n/2)+O(n)$
The size of the problem reduces by half (since the degrees of $A_o(x)$ and $A_e(x)$ are half that of $A(x)$), but you're dealing with the same number of problems ($A_o(x)$ and $A_e(x)$ at half the number of points). What am I misunderstanding?
Example
Let's say you are evaluating the polynomial $A(x)$ at the points $a, -a, b, -b.$ Therefore you have four values to compute: $A(a), A(-a), A(b), A(-b).$
Using the algorithm, you would instead evaluate the even- and odd-coefficient polynomials $A_e$ and $A_o$ at the squared points. Therefore you would evaluate:
1) $A_e(a^2)$
2) $A_o(a^2)$
3) $A_e((-a)^2)$
4) $A_o((-a)^2)$
5) $A_e(b^2)$
6) $A_o(b^2)$
7) $A_e((-b)^2)$
8) $A_o((-b)^2)$
Each polynomial is half the size of the original polynomial. However, since $(-a)^2 = a^2$, we have $A_e((-a)^2) = A_e(a^2)$ and $A_o((-a)^2) = A_o(a^2)$. The same applies for $b$ and $-b$. Therefore you would actually only need to compute four values:
1) $A_e(a^2)$
2) $A_o(a^2)$
3) $A_e(b^2)$
4) $A_o(b^2)$
Therefore you end up with the same number of problems, but each problem is half the size, since $A_o$ and $A_e$ have half the degree of $A(x)$. So four problems of size $n$ have been transformed into four problems of size $n/2$! The number of problems has stayed the same; only the size has changed, leading me to the conclusion that the time is:
$T(n) = T(n/2)+O(n)$
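The collapsing in the example can be checked numerically via the identity $A(x) = A_e(x^2) + x \, A_o(x^2)$; the concrete polynomial and point below are my own arbitrary choices, not from the book:

```python
# Example polynomial A(x) = 3 + 2x + 5x^2 + 7x^3 (arbitrary choice).
def A(x):
    return 3 + 2*x + 5*x**2 + 7*x**3

def A_e(y):  # even-indexed coefficients: 3 + 5y
    return 3 + 5*y

def A_o(y):  # odd-indexed coefficients: 2 + 7y
    return 2 + 7*y

a = 2.0
# A(a) and A(-a) both reuse A_e and A_o at the single shared
# point a^2 = (-a)^2 -- only the sign of the odd part flips.
assert A(a) == A_e(a**2) + a * A_o(a**2)
assert A(-a) == A_e(a**2) - a * A_o(a**2)
```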