
I have some code that does double exponential smoothing of a stream of points. In addition to the points to be smoothed, it accepts two inputs called "alpha" and "gamma", and it outputs something called "trend" in addition to the smoothed points.

What are these values? What is the best setting for them to attain optimal smoothing?

Thanks much in advance for any help.

Here's the code in question:

    #include "ext.h"
    #include "ext_common.h"

    void *this_class;

    typedef struct _f0ext {
        t_object x_ob;
        double x_valLeft;
        double x_valMiddle;
        double x_valRight;
        double x_valOutLeft;
        double x_valOutRight;
        void *x_outLeft;
        void *x_outRight;
    } x_f0ext;

    void *f0ext_new(double value1, double value2);
    void f0ext_int(x_f0ext *f0ext, long value);
    void f0ext_float(x_f0ext *f0ext, double value);
    void f0ext_ft1(x_f0ext *f0ext, double value);
    void f0ext_ft2(x_f0ext *f0ext, double value);
    void f0ext_set(x_f0ext *f0ext, double value);
    void f0ext_bang(x_f0ext *f0ext);
    void theFunction(x_f0ext *f0ext);
    void f0ext_assist(x_f0ext *f0ext, void *box, long msg, long arg, char *dst);

    //----------------------------------------------------------------------------------------------
    void main(void)
    {
        setup((Messlist **)&this_class, (method)f0ext_new, 0L, (short)sizeof(x_f0ext), 0L, A_DEFFLOAT, A_DEFFLOAT, 0);
        addbang((method)f0ext_bang);
        addint((method)f0ext_int);
        addfloat((method)f0ext_float);
        addftx((method)f0ext_ft1, 1);
        addftx((method)f0ext_ft2, 2);
        addmess((method)f0ext_set, "set", A_FLOAT, 0);
        addmess((method)f0ext_assist, "assist", A_CANT, 0);
        finder_addclass("All Objects", "f0.smooth2");
        finder_addclass("Math", "f0.smooth2");
        post("f0.smooth2 v1.11-win; distributed under GNU GPL license");    //target specific
    }
    void *f0ext_new(double value1, double value2)
    {
        x_f0ext *f0ext;
        f0ext= (x_f0ext *)newobject(this_class);
        f0ext->x_valLeft= 0;
        f0ext->x_valOutLeft= 0;
        f0ext->x_valOutRight= 0;
        if(value1==0&&value2==0) {
            f0ext->x_valMiddle= 0.15;
            f0ext->x_valRight= 0.3;
        } else if(value1!=0&&value2==0) {
            f0ext->x_valMiddle= CLIP(value1, 0, 1);
            f0ext->x_valRight= 0.3;
        } else {
            f0ext->x_valMiddle= CLIP(value1, 0, 1);
            f0ext->x_valRight= CLIP(value2, 0, 1);
        }
        f0ext->x_outRight= floatout(f0ext);
        f0ext->x_outLeft= floatout(f0ext);
        floatin(f0ext, 2);
        floatin(f0ext, 1);
        return(f0ext);
    }
    void f0ext_assist(x_f0ext *f0ext, void *box, long msg, long arg, char *dst)
    {
        if(msg==ASSIST_INLET) {
            switch(arg) {
                case 0:
                    sprintf(dst, "values to smooth (int/float)");
                    break;
                case 1:
                    sprintf(dst, "smoothing constant alpha (float)");
                    break;
                case 2:
                    sprintf(dst, "smoothing constant gamma (float)");
                    break;
            }
        } else if(msg==ASSIST_OUTLET) {
            switch(arg) {
                case 0:
                    sprintf(dst, "smoothed output (float)");
                    break;
                case 1:
                    sprintf(dst, "trend (float)");
                    break;
            }
        }
    }

    //----------------------------------------------------------------------------------------------
    void f0ext_int(x_f0ext *f0ext, long value)
    {
        f0ext->x_valLeft= value;
        theFunction(f0ext);
    }
    void f0ext_float(x_f0ext *f0ext, double value)
    {
        f0ext->x_valLeft= value;
        theFunction(f0ext);
    }
    void f0ext_ft1(x_f0ext *f0ext, double value)
    {
        f0ext->x_valMiddle= CLIP(value, 0, 1);
    }
    void f0ext_ft2(x_f0ext *f0ext, double value)
    {
        f0ext->x_valRight= CLIP(value, 0, 1);
    }
    void f0ext_set(x_f0ext *f0ext, double value)
    {
        f0ext->x_valOutLeft= value;
    }
    void f0ext_bang(x_f0ext *f0ext)
    {
        theFunction(f0ext);
    }

    //----------------------------------------------------------------------------------------------
    void theFunction(x_f0ext *f0ext)
    {
        double St0, St1, Bt0, Bt1, a, g, Yt0;
        a= f0ext->x_valMiddle;
        g= f0ext->x_valRight;
        Yt0= f0ext->x_valLeft;
        St1= f0ext->x_valOutLeft;
        Bt1= f0ext->x_valOutRight;
        St0= a*Yt0+(1-a)*(St1+Bt1);    //DES - Double Exponential Smoothing
        Bt0= g*(St0-St1)+(1-g)*Bt1;
        f0ext->x_valOutLeft= St0;
        f0ext->x_valOutRight= Bt0;
        outlet_float(f0ext->x_outRight, f0ext->x_valOutRight);
        outlet_float(f0ext->x_outLeft, f0ext->x_valOutLeft);
    }

Joe

  • @JoeStavitsky Variable `y` is the iterator variable of the `Do` loop, which iterates over the data points: `Do[ code , {y, Rest@data} ]`. So in `stnew = alpha y + (1-alpha)(st + bt)`, `y` is a data point. (2011-12-22)

1 Answer


(Partly so this question has an answer...)

Single exponential smoothing can be used when you want to do short-range forecasting of, say, your company's sales over the next few quarters. The formula is $S_{t+1} = \alpha y_t + (1-\alpha)S_t,$ where $S_t$ is the predicted sales at time $t$, $y_t$ is the actual sales at time $t$, and $\alpha$ is the smoothing parameter. Unrolling the recurrence in this formula shows that $S_{t+1}$ is a weighted average of the initial sales prediction and all of the observed sales values through time $t$. The term "exponential smoothing" is used because, for a particular sales value, its weight in the averaging process decays exponentially as you move forward in time. You choose the value of $\alpha$ depending on how much you want to weight the most recent sales values vs. the past ones when predicting next quarter's. If you want to weight the recent sales values more, you choose a larger value of $\alpha$. You have to have $\alpha$ between $0$ and $1$, though.

One of the biggest drawbacks of exponential smoothing is that if, say, the sales are increasing in time, the formula will always underpredict the next quarter's sales. This is because each prediction is a weighted average of everything you've already seen: It can't be bigger than the largest of all the previous values. In the presence of a trend like this, we can combat this problem by using double exponential smoothing, or exponential smoothing with trend. The equations are $$\begin{align} S_t &= \alpha y_t + (1-\alpha)(S_{t-1} + b_{t-1}), \\ b_t &= \gamma (S_t - S_{t-1}) + (1 - \gamma) b_{t-1}, \\ F_{t+1} &= S_t + b_t, \end{align}$$ where $b_t$ is the predicted trend value for time $t$ and $F_{t+1}$ is now the predicted sales forecast for time $t+1$. Here, $\alpha$ has the same interpretation as before; it measures how much you want to weight recent sales values vs. past sales, with higher values of $\alpha$ corresponding to more weight on recent values. And $\gamma$ has a similar interpretation, just with trend instead of sales: It measures how much you want to weight recent trend values vs. past trend values, with higher values of $\gamma$ corresponding to more weight on recent trend values.

Choosing the right smoothing values is an art in itself. Values around $0.1$ to $0.2$ are fairly common, I think, but it really depends on how much you want to weight recent values vs. past values. If you want to get fancier you could try predicting the sales values you currently have with different values of $\alpha$ and $\gamma$ and then choosing those values that minimize the error in the predictions. Even fancier would be to set up an optimization problem that would find the values of $\alpha$ and $\gamma$ that minimize error when predicting the data values you currently have, but that's a whole other question.

  • @Joe: I'm glad it was helpful. I first learned this when I had to teach forecasting to business students, and so I always have business applications running around in my head when I think about it. But, as you know, the ideas can be applied in lots of other areas. (2011-12-23)