In 2D or 3D I have a fixed point $y$ and a Gaussian distribution of a random point $x$. I am now interested in the mean Euclidean distance between $x$ and $y$:
$E_x \left[ d(x,y) \right] = \int_{\mathbb{R}^n} d(x,y)\, \mathcal{N}(x; \mu, \Lambda)\, dx$
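To make the quantity concrete, here is a quick Monte Carlo check in Python; the dimension and parameter values below are just made-up examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example parameters (2D case)
mu = np.array([1.0, 2.0])           # mean of x
Lam = np.array([[2.0, 0.5],
                [0.5, 1.0]])        # covariance of x
y = np.array([0.0, 0.0])            # fixed point

# Draw samples of x ~ N(mu, Lam) and average the Euclidean distance to y
x = rng.multivariate_normal(mu, Lam, size=200_000)
mean_dist = np.linalg.norm(x - y, axis=1).mean()
print(mean_dist)
```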
Many thanks in advance
You can define a new random variable $\tilde{X} = X-y$. The quantity you are interested in is then the expected distance of this random variable from the origin. Write $\tilde{X}$ as the column vector $[\tilde{x}_1\ \tilde{x}_2\ \dots\ \tilde{x}_n]^T$; the squared distance from the origin is $\tilde{X}^T\tilde{X}$. When the components of $\tilde{X}$ are independent with unit variance, $\tilde{X}^T\tilde{X}$ follows a non-central chi-square distribution with $n$ degrees of freedom. When $\tilde{X}$ has a general (non-diagonal) covariance matrix, $\tilde{X}^T\tilde{X}$ follows a generalized chi-square distribution. If you are interested in the expected value of the absolute distance, i.e. $E\left(\sqrt{\tilde{X}^T \tilde{X}}\right)$, look at the chi distribution: it is the distribution of the square root of a chi-square random variable, and hence gives you the expected absolute distance when the components of $\tilde{X}$ are independent (with unit variance in the non-central case). If they are not, there should be a corresponding generalized chi distribution, namely that of the square root of the generalized chi-square.
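As a numerical illustration of this connection (for the independent, unit-variance case; the mean vector below is a made-up example), the expected distance can be computed by integrating $\sqrt{q}$ against the non-central chi-square density of $Q = \tilde{X}^T\tilde{X}$ and compared to a Monte Carlo estimate:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(0)

# Made-up example: independent unit-variance components (3D)
mu = np.array([1.0, -0.5, 2.0])      # mean of X_tilde = X - y
n = mu.size
nc = float(mu @ mu)                  # non-centrality parameter lambda = ||mu||^2

# Monte Carlo estimate of E[sqrt(X_tilde^T X_tilde)]
z = rng.standard_normal((200_000, n)) + mu
mc = np.linalg.norm(z, axis=1).mean()

# Same quantity via the non-central chi-square density of Q = X_tilde^T X_tilde:
# E[sqrt(Q)] = integral of sqrt(q) * f_ncx2(q; n, nc) dq
val, _ = quad(lambda q: np.sqrt(q) * stats.ncx2.pdf(q, df=n, nc=nc), 0, np.inf)

print(mc, val)   # the two numbers should agree closely
```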
Since I could not find a formula for the expected value of a generalized chi-square distribution, I decided to ignore the dependency between the components of $\tilde{X}$ and to continue with a diagonal covariance matrix. This then gives me:
$E_{\tilde{X}} \left[ \sqrt{ \tilde{X}^T \tilde{X} } \right] = E_{\tilde{X}} \left[ \sqrt{ \sum_{i=1}^n \tilde{x}^2_i } \right]$
However, each $\tilde{x}_i$ still has its own variance $\sigma_i^2$. Therefore I guess the expected value of the non-central chi distribution won't help here, since I cannot factor out one common variance.
My approach is now the following: since I need this expectation value as part of an upper bound, I apply the triangle inequality to bound the square root of the sum:
$E_{\tilde{X}} \left[ \sqrt{ \sum_{i=1}^n \tilde{x}^2_i } \right] \le E_{\tilde{X}} \left[ \sum_{i=1}^n \sqrt{ \tilde{x}_i^2 } \right] = \sum_{i=1}^n E_{\tilde{x}_i} \left[ \, |\tilde{x}_i| \, \right]$
I think this is equivalent to replacing the Euclidean norm with the Manhattan norm.
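For what it's worth, each term $E_{\tilde{x}_i}[|\tilde{x}_i|]$ is the mean of a folded normal distribution, so the bound can be evaluated in closed form. A small Python sanity check, with made-up 2D parameters of my own choosing:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up 2D example: X ~ N(mu, diag(sigma^2)), fixed point y
mu = np.array([1.0, 2.0])
sigma = np.array([1.5, 0.7])
y = np.array([0.0, 0.0])
m = mu - y                                   # mean of X_tilde = X - y

# Upper bound: sum_i E[|x_tilde_i|], each term is a folded-normal mean:
# E|N(m, s^2)| = s*sqrt(2/pi)*exp(-m^2/(2 s^2)) + m*(1 - 2*Phi(-m/s))
folded_means = (sigma * np.sqrt(2 / np.pi) * np.exp(-m**2 / (2 * sigma**2))
                + m * (1 - 2 * stats.norm.cdf(-m / sigma)))
bound = folded_means.sum()

# Monte Carlo estimate of the true expected Euclidean distance
x = mu + sigma * rng.standard_normal((200_000, 2))
true_mean_dist = np.linalg.norm(x - y, axis=1).mean()

print(true_mean_dist, bound)   # the bound should be >= the Monte Carlo estimate
```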
What do you think about this idea? Is there maybe another approximation that is closer to the true expectation value?
Try a change of variable to get the density of a standard multivariate normal multiplied by something else. It should make the problem clearer.
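One such change of variable is whitening: with a Cholesky factorization $\Lambda = LL^T$ and $x = \mu + Lz$, the expectation becomes $E_z\left[\|(\mu - y) + Lz\|\right]$ with $z$ standard normal. A rough numerical illustration (the parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example parameters
mu = np.array([1.0, 2.0])
Lam = np.array([[2.0, 0.5],
                [0.5, 1.0]])
y = np.array([0.0, 0.0])

# Change of variable x = mu + L z with z ~ N(0, I) and L the Cholesky factor of Lam:
# E_x[||x - y||] = E_z[||(mu - y) + L z||]
L = np.linalg.cholesky(Lam)
z = rng.standard_normal((200_000, 2))
est = np.linalg.norm((mu - y) + z @ L.T, axis=1).mean()

# Direct sampling of x for comparison
x = rng.multivariate_normal(mu, Lam, size=200_000)
ref = np.linalg.norm(x - y, axis=1).mean()

print(est, ref)   # both estimate the same expectation
```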