I want to normalize the FFT used in Scilab so that the absolute values of the coefficients equal the amplitudes of the corresponding sinusoidal components of the time-domain signal.
Example: I want a sine input [0, 5.5, 0, -5.5] to transform to [0, 5.5, 0, 5.5] (in absolute value).
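As a sanity check of that target behaviour, here is a hedged plain-Python sketch (not Scilab; `dft` is a helper written here for illustration, using the same unscaled convention as Scilab's `fft`). Dividing the magnitudes by N/2 recovers the amplitude 5.5:

```python
import cmath

def dft(x):
    """Naive unscaled DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

x = [0, 5.5, 0, -5.5]                        # one period of a sine, amplitude 5.5
N = len(x)
mags = [abs(X) / (N / 2) for X in dft(x)]    # normalize by N/2
print([round(m, 10) for m in mags])          # → [0.0, 5.5, 0.0, 5.5]
```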
To work out the scaling, I transformed some very simple sines and cosines of amplitude 1 and looked at what fft yields. N is the number of samples.
// cos, N = 4
-->fft([1 0 -1 0])
 ans  =  0.   2.   0.   2.
// looks like I have to divide by N/2

// sin, N = 4
-->fft([0 1 0 -1])
 ans  =  0 - 2.i   0   2.i
// same conclusion

// sin, N = 8 (2 periods)
-->abs(fft([0 1 0 -1 0 1 0 -1]))
 ans  =  0.   0.   4.   0.   0.   0.   4.   0.
// same conclusion

// cos, N = 8
-->abs(fft([1 0.7 0 -0.7 -1 -0.7 0 0.7]))
 ans  =  0.   3.98   0.   0.02   0.   0.02   0.   3.98
// same conclusion
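The same pattern reproduces outside Scilab with a naive DFT (again a plain-Python sketch with a helper `dft` defined for illustration): a unit-amplitude sinusoid at an interior frequency bin shows up as two mirrored peaks of height N/2.

```python
import cmath

def dft(x):
    # Unscaled DFT, matching Scilab's fft convention
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# sin, N = 8, 2 periods, amplitude 1
x = [0, 1, 0, -1, 0, 1, 0, -1]
print([round(abs(X), 10) for X in dft(x)])
# → [0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 4.0, 0.0]  (peaks of N/2 = 4)
```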
But:
// cos, N = 2
-->abs(fft([1 -1]))
 ans  =  0.   2.
// looks like I have to divide by N

// cos, N = 1
-->abs(fft([1]))
 ans  =  1.
// looks like I have to divide by N
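For completeness, the two outliers reproduce the same way with a naive DFT (plain-Python sketch, not Scilab). Note what these inputs are: [1, -1] is a cosine at the Nyquist frequency for N = 2, and [1] is pure DC, so in each case all the magnitude lands in a single bin instead of two mirrored ones.

```python
import cmath

def dft(x):
    # Unscaled DFT, matching Scilab's fft convention
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

print([round(abs(X), 10) for X in dft([1, -1])])  # → [0.0, 2.0]
print([round(abs(X), 10) for X in dft([1])])      # → [1.0]
```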
The N/2 rule also holds for much larger values of N, but not for N = 2 and N = 1. Is this mathematically explainable, or is Scilab's FFT scaling just arbitrary? Or am I missing something basic?