
Possible Duplicate:
Motivation behind standard deviation?

In statistics you very often see something of the sort: $$ \textrm{quantity}=\sqrt{\frac {\sum(x-\mu)^2} {N}} $$ used to measure things like the standard deviation (here $\mu$ is the mean and $N$ is the number of observations).

It seems that just taking the absolute value of the difference would give a perfectly good measure of the same thing: $$ \textrm{quantity}=\frac {\sum{|x-\mu|}} {N} $$

How did we end up with those squares?
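For concreteness, here is a minimal Python sketch (the sample values are made up) that computes both quantities from the formulas above on the same data; the two measures are similar in spirit but give different numbers:

```python
import math

# Hypothetical sample values, just for illustration.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

n = len(xs)
mu = sum(xs) / n  # mean

# Root-mean-square deviation (population standard deviation).
std_dev = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)

# Mean absolute deviation, the alternative from the question.
mad = sum(abs(x - mu) for x in xs) / n

print(mu, std_dev, mad)  # 5.0 2.0 1.5
```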

  • 0
    That is in fact also a measure which is used; least squares, however, has the advantage that it is easy to work with. (2012-12-29)
  • 0
    I thought it had something to do with Euclidean norms. At least, one question on variance during a statistics course I took convinced me of that. (2012-12-29)
  • 0
    A statistics teacher once told me that squares are good because they penalize very far values for being far; i.e. squares are a compromise between the "very forgiving" absolute value, which treats many small deviations the same as one large deviation, and the "very unforgiving" maximum, which ignores everything but the largest deviation (see the numeric sketch after these comments). I don't know whether this makes any mathematical sense or is just a retroactive justification. (2012-12-29)
  • 1
    Be sure to check out http://stats.stackexchange.com/questions/118/why-square-the-difference-instead-of-taking-the-absolute-value-in-standard-devia (2012-12-29)
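To make the third comment concrete, here is a small Python sketch (with made-up deviation vectors) comparing the mean absolute deviation, the root-mean-square deviation, and the maximum deviation on "many small deviations" versus "one large deviation" of the same total size:

```python
# Two made-up sets of deviations from the mean with the same absolute total (9).
many_small = [1.0] * 9          # nine deviations of 1
one_large = [9.0] + [0.0] * 8   # one deviation of 9, rest zero

def mean_abs(devs):
    return sum(abs(d) for d in devs) / len(devs)

def root_mean_square(devs):
    return (sum(d * d for d in devs) / len(devs)) ** 0.5

for devs in (many_small, one_large):
    print(mean_abs(devs), root_mean_square(devs), max(abs(d) for d in devs))

# mean |.| : 1.0 vs 1.0 -- treats the two cases the same
# RMS      : 1.0 vs 3.0 -- penalizes the single large deviation more heavily
# max      : 1.0 vs 9.0 -- sees only the largest deviation
```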

3 Answers