I'm working on a sound-recognition algorithm where an "exponential moving average" is used to "adapt" to sound levels. It turns out that averaging the logs works better than a simple arithmetic average (it "smooths" erratic data better), so I used this algorithm:
newAverageX = exp((((dInterval - 1.0) * log(dOldAverage)) + log(dValue)) / dInterval)
This works well, but now I've extended into an area where some of the values may be zero or negative, which obviously causes problems with log. So I tried the following:
min = minimumOf(dOldAverage, dValue);
fudge = 1.0 - min;
newAverage = exp((((dInterval - 1.0) * log(dOldAverage + fudge)) + log(dValue + fudge)) / dInterval) - fudge;
This avoids taking the log of a non-positive number, but the results for values that were already positive differ from the original formula.
Is there an algorithm that would produce the "logically correct" results (whatever that means) for all cases? Or is there another averaging technique that behaves similarly to the log-based average above but tolerates zeros and negatives?