Suppose a point in the plane lies $x_1$ meters to the north of you and $x_2$ meters to the east.
Its distance from you follows from the Pythagorean theorem: $\sqrt{x_1^2 + x_2^2}$.
This extends to higher dimensions by adding more squares under the root; the result is called the Euclidean length of a vector $x$.
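As a small sketch, the Euclidean length can be computed directly from that formula (the function name here is my own choice):

```python
import math

def euclidean_length(x):
    """Euclidean length (2-norm) of a vector given as a sequence of numbers."""
    return math.sqrt(sum(xi * xi for xi in x))

# Distance to a point 3 m north and 4 m east:
print(euclidean_length([3.0, 4.0]))  # → 5.0
```

In practice one would usually call a library routine (e.g. `math.hypot` or `numpy.linalg.norm`) instead of writing this by hand.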
Usually, the Euclidean length of an error is used to define accuracy.
Meaning: you have a known solution $s$ and an approximation $x$, and you define the error $e = \|s-x\|_2$, since that is an easy and consistent way to map a multidimensional vector to a single nonnegative number.
This error $e$ can then be used as the objective function in a minimization process, to determine the parameters that produced $x$ in the first place.
(I think that is what is done in machine learning, but I'm no expert in that field.)
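To make the minimization idea concrete, here is a minimal sketch under an invented setup: a known solution $s$, an approximation $x(\theta)$ depending on a single parameter $\theta$ (the model $x(\theta) = (\theta, 2\theta)$ is purely illustrative), and a crude grid search minimizing $e(\theta) = \|s - x(\theta)\|_2$:

```python
import math

# Hypothetical example: the known solution s is a fixed 2-D vector, and the
# approximation depends on one parameter theta via x(theta) = (theta, 2*theta).
s = (3.0, 6.0)

def error(theta):
    """Euclidean norm of the residual s - x(theta), used as the objective."""
    x = (theta, 2.0 * theta)
    return math.sqrt((s[0] - x[0]) ** 2 + (s[1] - x[1]) ** 2)

# Crude minimization by grid search over candidate parameter values.
best_theta = min((t / 100.0 for t in range(0, 1001)), key=error)
print(best_theta)  # → 3.0 (the error vanishes there)
```

A real application would replace the grid search with a proper optimizer (e.g. `scipy.optimize.minimize`), but the structure is the same: the scalar error $e$ is what gets minimized.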