I have recently stumbled onto some empirical (forecasting error) data that should be normally distributed. However, the normal distribution fits relatively poorly due to the abundance of data points in the tails. On the other hand, the t-distribution (df=2) fits virtually perfectly.
If I want to use the t-distribution for modelling purposes, I need to be able to justify the choice.
Is there a logic to this, or am I "over-fitting" the data? I would really like to know why this happens and where else it might occur in real-world samples.
Note: I'm sorry I cannot share the exact nature of the data, but I can say it has over 15,000 data points.
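For reference, here is a minimal sketch of the kind of comparison I did, using synthetic heavy-tailed data as a stand-in (since I can't share mine) and maximum-likelihood fits from `scipy.stats`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for the real data: 15,000 draws from a heavy-tailed t(2).
data = stats.t.rvs(df=2, size=15_000, random_state=rng)

# Fit both candidate distributions by maximum likelihood.
norm_params = stats.norm.fit(data)   # (loc, scale)
t_params = stats.t.fit(data)         # (df, loc, scale)

# Compare goodness of fit via total log-likelihood (higher is better).
ll_norm = stats.norm.logpdf(data, *norm_params).sum()
ll_t = stats.t.logpdf(data, *t_params).sum()
print(f"normal log-likelihood: {ll_norm:.1f}")
print(f"t log-likelihood:      {ll_t:.1f}  (fitted df = {t_params[0]:.2f})")
```

On data like this, the t-fit dominates the normal fit by a wide log-likelihood margin, and the fitted df comes out close to 2, which is the pattern I'm seeing with my data.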