Let's say I have a collection of data points (X and Y values) that show some correlation when, e.g., Pearson's correlation coefficient is computed. What is a good measure for determining which data points are the "wild ducks" (outliers) in the collection?
I would guess that the orthogonal (perpendicular) distance from a data point to the fitted line of best fit would be a good measure, but I haven't found a formula for computing this (and I'm kind of hoping it's a common enough concept that, e.g., Excel knows about it, if I can just pin a name on it).
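To make it concrete, here's a rough sketch (Python/NumPy, with made-up numbers) of the kind of measure I have in mind: perpendicular distance to an ordinary least-squares line, with an arbitrary two-standard-deviation cutoff. I have no idea whether this is the standard way to do it, which is exactly what I'm asking.

```python
import numpy as np

# Made-up data; substitute your own X/Y collection.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 9.0, 6.1])  # the 9.0 is the suspect point

# Ordinary least-squares fit: y = m*x + b
m, b = np.polyfit(x, y, 1)

# Perpendicular distance from each point to the line m*x - y + b = 0
dist = np.abs(m * x - y + b) / np.sqrt(m**2 + 1)

# Flag points whose distance is unusually large; the 2-sigma cutoff
# is arbitrary, chosen only for illustration.
cutoff = dist.mean() + 2 * dist.std()
print("suspect indices:", np.where(dist > cutoff)[0])
```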
(I realize this is a bit of a novice question, but I've searched Google and this site and not found anything, probably because I'm not up on the lingo, and because "correlation" gets 85 million hits.)