
I'm just beginning to learn topology, and there's something that I realize has been nagging at me since roughly the eighth grade.

Why does the Euclidean distance in $n$-dimensional space involve a bunch of squaring and square-rooting?

I understand it, mind you.

  1. In 1-space the "distance" is $\sqrt{\Delta x^2}$, which is just $|\Delta x|$.
  2. In two-space, you're doing the Pythagorean theorem.
  3. In 3-space, I visualize it like this: you want the distance corresponding to the displacements $\Delta x, \Delta y, \Delta z$, so you:

    • let $a = \sqrt{\Delta x^2 + \Delta y^2}$; $a$ is the distance in the $x,y$-plane
    • the total distance is $\sqrt{a^2 + \Delta z^2}$
    • the above expands to $\sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2}$

To generalize, faced with $n$ dimensions, you pick two, find the distance along those two dimensions, plop a point down there, and repeat (until you have only one dimension left, at which point you're done).
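The repeated-projection procedure above can be sketched and checked numerically. This is a minimal illustration in plain Python; the function names `iterative_distance` and `direct_distance` are my own:

```python
import math

def iterative_distance(deltas):
    """Collapse two coordinates at a time with the Pythagorean
    theorem until only one number remains."""
    deltas = [abs(d) for d in deltas]
    while len(deltas) > 1:
        # pick two dimensions, replace them by their hypotenuse
        a = math.hypot(deltas.pop(), deltas.pop())
        deltas.append(a)
    return deltas[0]

def direct_distance(deltas):
    """Square everything, sum, take a single square root."""
    return math.sqrt(sum(d * d for d in deltas))

deltas = [3.0, 4.0, 12.0]
print(iterative_distance(deltas))  # both agree (13.0, up to rounding)
print(direct_distance(deltas))
```

Expanding the nested square roots, as in the 3-space case above, shows why the two always agree: each intermediate squaring undoes the previous square root.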

However, I don't understand why distance in n-space requires repeated squaring. Shouldn't there be a distance formula that requires cubing and the cube-root in three-space, or quading (is that a thing) and quad-roots in four-space, and so on? Just in the interest of symmetry, it seems weird that squares get special treatment.

I know this is a very philosophical question, but is there a way to find Euclidean distances in n-space that involves taking the nth power and nth root instead of repeatedly projecting down a dimension?

2 Answers


There are other metrics than the Euclidean one. In particular, you can define the "$\ell^p$ norm" for any $p \ge 1$, where the distance from $(x_1, x_2, \ldots, x_n)$ to $(y_1, y_2, \ldots, y_n)$ is $(|x_1 - y_1|^p + |x_2 - y_2|^p + \ldots + |x_n - y_n|^p)^{1/p}$. What is special about the Euclidean metric is that it allows rotations through arbitrary angles.
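A minimal sketch of this family of metrics in plain Python (the function name `lp_distance` is mine):

```python
def lp_distance(x, y, p):
    """l^p distance between two points, valid for any p >= 1."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (0.0, 0.0), (3.0, 4.0)
# p = 2 gives the Euclidean distance 5.0;
# p = 1 gives the "taxicab" distance 7.0
print(lp_distance(x, y, 2))
print(lp_distance(x, y, 1))
```

Larger $p$ weights the biggest coordinate difference more heavily; as $p \to \infty$ the distance tends to $\max_i |x_i - y_i|$.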

  • What Robert means is that the Euclidean distance of two points will be the same before and after you rotate them. This is not necessarily the case for other metrics. (2012-05-21)
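This rotation-invariance can be seen numerically. A small sketch (helper names `rotate` and `norm_p` are mine): rotating a point preserves its Euclidean length but changes its $\ell^1$ length.

```python
import math

def rotate(point, theta):
    """Rotate a 2-D point about the origin by angle theta."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def norm_p(v, p):
    """l^p norm of a vector."""
    return sum(abs(c) ** p for c in v) ** (1 / p)

p0 = (1.0, 0.0)
p1 = rotate(p0, math.pi / 4)          # 45-degree rotation
print(norm_p(p0, 2), norm_p(p1, 2))   # both ~1.0: Euclidean length preserved
print(norm_p(p0, 1), norm_p(p1, 1))   # 1.0 vs ~1.414: l^1 length changes
```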

An explanation with some algebraic flavor goes through linear transformations and inner products. These inner products $\langle\cdot,\cdot\rangle$ are fundamental objects for vector spaces, as they quantitatively measure linear independence through bilinearity. Given a basis for our space, we can define the inner product to be the dot product $x\cdot y=\sum_ix_iy_i$; this is the unique inner product for which that basis is orthonormal, i.e. $\langle e_i,e_j\rangle=\delta_{ij}$ for the basis vectors $e_i$. However, being linear in both arguments means that scalar multiplication gives $\langle \lambda v,\lambda v\rangle=\lambda^2\langle v,v\rangle$. To make the resulting length scale linearly with the scalar, we need to set $\|v\|=\sqrt{\langle v,v\rangle}$, so that $\|\lambda v\| = |\lambda|\,\|v\|$.
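A quick numerical check of that scaling behavior (plain Python; `dot` is a hand-rolled dot product):

```python
import math

def dot(u, v):
    """Standard dot product <u, v> = sum_i u_i * v_i."""
    return sum(a * b for a, b in zip(u, v))

v = (1.0, -2.0, 2.0)
lam = -3.0
scaled = tuple(lam * c for c in v)

# bilinearity: <lam v, lam v> = lam^2 <v, v>   (here 9 * 9 = 81)
print(dot(scaled, scaled), lam ** 2 * dot(v, v))

# the square root restores homogeneity: ||lam v|| = |lam| ||v||
print(math.sqrt(dot(scaled, scaled)), abs(lam) * math.sqrt(dot(v, v)))
```

Note that the norm scales by $|\lambda|$, not $\lambda^2$, precisely because of the square root; this is the distinguished role of the power two.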

So ultimately, this explanation boils down to the following: a translation-invariant metric is equivalent to a norm on the space, and the Euclidean norm is the special case of "measuring" the linear independence of a vector with itself, in the sense of an inner product; the distinguished power/root of two appears because one and one (a vector and its copy) make two.