
What is the mean Euclidean distance between two points on the plane whose coordinates are normally distributed?

I'm assuming this would be

$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{e^{-\frac{(x_1-\mu)^2}{2\sigma^2}}\, e^{-\frac{(x_2-\mu)^2}{2\sigma^2}}\, e^{-\frac{(y_1-\mu)^2}{2\sigma^2}}\, e^{-\frac{(y_2-\mu)^2}{2\sigma^2}}}{\left(\sqrt{2\pi}\,\sigma\right)^4}\, \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}\; dy_2\, dy_1\, dx_2\, dx_1$

Does this integral have a closed-form expression?

UPDATE: After doing some numerical experimentation, I'm guessing that the answer is $\sigma \sqrt{\pi}$, but I don't know how to solve this integral.
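
For reference, here is a minimal Monte Carlo sketch along these lines; the values of $\mu$, $\sigma$, and the sample count are arbitrary choices for illustration.

```python
# Minimal Monte Carlo sketch: estimate the mean distance between two points
# whose coordinates are iid N(mu, sigma^2), and compare with sigma*sqrt(pi).
# mu, sigma, and n_samples are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n_samples = 3.0, 2.0, 1_000_000

p1 = rng.normal(mu, sigma, size=(n_samples, 2))  # first point (x1, y1)
p2 = rng.normal(mu, sigma, size=(n_samples, 2))  # second point (x2, y2)

distances = np.linalg.norm(p1 - p2, axis=1)
print(distances.mean())        # close to 3.5449 for sigma = 2
print(sigma * np.sqrt(np.pi))  # 3.5449...
```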

  • Note that if $X \sim N(0,1)$ and $Y \sim N(0,1)$, then $X^2+Y^2 \sim \chi^2(2)$. (2011-01-07)

2 Answers


If $x_1$ and $x_2$ are normally distributed with mean $\mu$ and variance $\sigma^2$, then $x_1 - x_2$ is normally distributed with mean $0$ and variance $2\sigma^2$. Put $X = x_1-x_2$ and $Y = y_1 - y_2$. Then $(X,Y)$ has a circularly symmetric bivariate normal distribution with mean $(0,0)$ and variances $(2\sigma^2,2\sigma^2)$; the pdf is

$\frac{1}{4\pi\sigma^2} e^{-\frac{1}{4\sigma^2}R^2}$

where $R = \sqrt{X^2+Y^2}$. Switching to polar coordinates (which contributes a factor $2\pi R$), the pdf of the radius $R$ is

$\frac{R}{2\sigma^2} e^{-\frac{1}{4\sigma^2}R^2}$

The mean of this distribution is

$\frac{1}{2\sigma^2} \int_0^\infty R^2e^{-\frac{1}{4\sigma^2}R^2} dR$

Since $\int_0^\infty R^2 e^{-aR^2}\,dR = \frac{\sqrt{\pi}}{4a^{3/2}}$, taking $a = \frac{1}{4\sigma^2}$ gives $2\sqrt{\pi}\,\sigma^3$ for the integral, so the mean is indeed $\sigma\sqrt{\pi}$.
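
As a quick numerical sanity check, the integral for the mean can be evaluated with `scipy.integrate.quad`; $\sigma = 1.5$ below is just an arbitrary test value.

```python
# Numerical check of the radial integral above: the mean of R should come
# out as sigma*sqrt(pi).  sigma = 1.5 is an arbitrary test value.
import numpy as np
from scipy.integrate import quad

sigma = 1.5

# E[R] = (1 / (2 sigma^2)) * integral of R^2 * exp(-R^2 / (4 sigma^2)) dR
integrand = lambda r: r**2 * np.exp(-r**2 / (4 * sigma**2)) / (2 * sigma**2)
mean_R, _ = quad(integrand, 0, np.inf)

print(mean_R)                  # ~2.6587
print(sigma * np.sqrt(np.pi))  # 2.6587...
```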

  • I started working out my answer, but you were faster. +1 (2011-01-07)

I needed to extend Tony's result to $n$ dimensions. It turns out that the generalized result directly follows from the mean of a Nakagami distribution with $m =\frac{n}{2}$ and $\Omega = 2n\sigma^2$.

That is, the average distance between two normally distributed points in $n$ dimensions (where every coordinate has the same mean $\mu$ and variance $\sigma^2$) is:

$2\sigma\frac{\Gamma(\frac{n + 1}{2})}{\Gamma(\frac{n}{2})}$

For the 2D case, this simplifies to $\sigma\sqrt{\pi}$ (Tony's answer). The 3D case simplifies to $\frac{4\sigma}{\sqrt{\pi}}$, the 4D case to $\frac{3\sigma\sqrt{\pi}}{2}$, and so forth.

As $n$ grows large, $\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})} \approx \sqrt{\frac{n}{2}}$, so the result can be approximated by:

$\approx 2\sigma\sqrt{\frac{n}{2}}$
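
As a rough check, the exact formula, the large-$n$ approximation, and a Monte Carlo estimate can be compared numerically; this sketch assumes equal variance $\sigma^2$ in every dimension, and $\sigma$, the sample count, and the dimensions tested are arbitrary choices.

```python
# Sketch: exact n-dimensional mean distance, its large-n approximation, and
# a Monte Carlo estimate.  sigma, n_samples, and the dimensions tested are
# arbitrary choices for illustration.
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
sigma, n_samples = 1.0, 100_000

for n in (2, 3, 4, 10, 50):
    exact = 2 * sigma * gamma((n + 1) / 2) / gamma(n / 2)
    approx = 2 * sigma * np.sqrt(n / 2)
    p1 = rng.normal(0.0, sigma, size=(n_samples, n))
    p2 = rng.normal(0.0, sigma, size=(n_samples, n))
    mc = np.linalg.norm(p1 - p2, axis=1).mean()
    print(f"n={n}: exact={exact:.4f}  approx={approx:.4f}  monte carlo={mc:.4f}")
```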

Note that if the variances are not the same for each dimension, there is no "short" answer.

  • +1, but could you include your proof (or at least a sketch thereof)? (2012-11-08)