
Is there an analytical or approximate solution of the following integral?

$$ \int_{-\infty}^{\infty}\int_{y-d}^{y+d}\exp\big(-{(x-\mu_1)^2}/{2\sigma^2}\big) \exp\big(-{(y-\mu_2)^2}/{2\sigma^2}\big) \,\mathrm{d}x\,\mathrm{d}y $$

  • 1
    Is $x$ supposed to be in the argument of *both* of the exponentials? (2011-05-30)
  • 0
    @cardinal: Sorry, I made a mistake: in the first exponential $x$ is the variable, and in the second it is $y$. (2011-05-30)
  • 0
    No problem. Just wanted to make sure. :) (2011-05-30)
  • 0
    In general, no analytical solution exists. This was proved by Liouville in the early 1800s. (2011-05-30)

2 Answers

1

You can solve this using the same approach used in this answer.

EDIT 1:

The iterated integral can be expressed simply as $$ I = 2\pi \sigma ^2 \bigg[\Phi \bigg(\frac{{d - (\mu _1 - \mu _2 )}}{{\sqrt {2\sigma ^2 } }}\bigg) - \Phi \bigg(\frac{{ - d - (\mu _1 - \mu _2 )}}{{\sqrt {2\sigma ^2 } }}\bigg)\bigg], $$ where $\Phi$ is the distribution function of the standard normal distribution (confirmed numerically). I'll provide the derivation later on.
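For readers who want to evaluate this closed form numerically, here is a minimal sketch in Java (the class and method names are mine; $\Phi$ is computed from the Abramowitz & Stegun 7.1.26 polynomial approximation of erf, whose absolute error is below about $1.5\times10^{-7}$):

```java
// Sketch: numerical evaluation of the closed-form expression for I.
public class ClosedForm {

    // Polynomial approximation of the error function (A&S 7.1.26).
    static double erf(double x) {
        double sign = Math.signum(x);
        double ax = Math.abs(x);
        double t = 1.0 / (1.0 + 0.3275911 * ax);
        double poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                - 0.284496736) * t + 0.254829592) * t;
        return sign * (1.0 - poly * Math.exp(-ax * ax));
    }

    // Standard normal distribution function Phi(x) = (1 + erf(x/sqrt(2)))/2.
    static double phi(double x) {
        return 0.5 * (1.0 + erf(x / Math.sqrt(2.0)));
    }

    // I = 2*pi*sigma^2 * [Phi((d - (mu1-mu2))/sqrt(2 sigma^2))
    //                      - Phi((-d - (mu1-mu2))/sqrt(2 sigma^2))]
    public static double integral(double d, double mu1, double mu2, double sigma) {
        double s = Math.sqrt(2.0 * sigma * sigma);
        double m = mu1 - mu2;
        return 2.0 * Math.PI * sigma * sigma * (phi((d - m) / s) - phi((-d - m) / s));
    }

    public static void main(String[] args) {
        System.out.println(integral(1.2, 2.8, 1.6, 1.9)); // ~ 7.125
    }
}
```

With the parameter values from EDIT 2 below, this reproduces the reported values of $r_2$ up to the erf approximation error.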

EDIT 2: Before I give the derivation of this result, here are a few numerical results confirming it. Let $r_1 = r_1(d,\mu_1,\mu_2,\sigma)$ denote the approximation obtained for the iterated integral and $r_2 = r_2(d,\mu_1,\mu_2,\sigma)$ the approximation obtained for the simple expression of $I$ above. The following results were obtained:

  • $d=1.2$, $\mu_1=2.8$, $\mu_2=1.6$, $\sigma=1.9$: $r_1 = 7.125001170659572$, $r_2 = 7.125000629782468$
  • $d=1.9$, $\mu_1=2.4$, $\mu_2=-4.6$, $\sigma=1.5$: $r_1 = 0.11438563902950714$, $r_2 = 0.11438586127326182$
  • $d=3.2$, $\mu_1=-0.4$, $\mu_2=8.1$, $\sigma=2.9$: $r_1 = 5.070682528645151$, $r_2 = 5.070686434762869$
  • $d=5.2$, $\mu_1=-1.4$, $\mu_2=-3.1$, $\sigma=0.8$: $r_1 = 4.017263196283472$, $r_2 = 4.017262629040601$

EDIT 3:

Proof: First write the iterated integral as $$ I = 2\pi \sigma ^2 \int_{ - \infty }^\infty {\bigg[\int_{y - d}^{y + d} {f_X (x)\,dx} \bigg]f_Y (y)\,dy} , $$ where $f_X$ and $f_Y$ are the probability density functions of the ${\rm N}(\mu_1,\sigma^2)$ and ${\rm N}(\mu_2,\sigma^2)$ distributions, respectively. Hence, $$ I = 2\pi \sigma ^2 \int_{ - \infty }^\infty {{\rm P}( - d \le X - y \le d)f_Y (y)\,dy} , $$ where $X \sim {\rm N}(\mu_1,\sigma^2)$. In turn, by the law of total probability, $$ I = 2\pi \sigma ^2 {\rm P}( - d \le X - Y \le d), $$ where $Y \sim {\rm N}(\mu_2,\sigma^2)$ and is independent of $X$. Finally, since $X-Y \sim {\rm N}(\mu_1 - \mu_2,2\sigma^2)$, $$ I = 2\pi \sigma ^2 {\rm P}\bigg(\frac{{ - d - (\mu _1 - \mu _2 )}}{{\sqrt {2\sigma ^2 } }} \le Z \le \frac{{d - (\mu _1 - \mu _2 )}}{{\sqrt {2\sigma ^2 } }}\bigg), $$ where $Z$ is a standard normal random variable. The result is thus established.
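The key probabilistic step, $I = 2\pi\sigma^2\,{\rm P}(-d \le X - Y \le d)$, can also be checked by direct simulation. The following is a rough sketch (class name, sample size, and seed are my choices), drawing independent $X \sim {\rm N}(\mu_1,\sigma^2)$ and $Y \sim {\rm N}(\mu_2,\sigma^2)$ and scaling the empirical probability:

```java
import java.util.Random;

// Sketch: Monte Carlo check of I = 2*pi*sigma^2 * P(-d <= X - Y <= d).
public class MonteCarloCheck {

    public static double estimate(double d, double mu1, double mu2,
                                  double sigma, int n, long seed) {
        Random rng = new Random(seed);
        int hits = 0;
        for (int i = 0; i < n; i++) {
            double x = mu1 + sigma * rng.nextGaussian(); // X ~ N(mu1, sigma^2)
            double y = mu2 + sigma * rng.nextGaussian(); // Y ~ N(mu2, sigma^2)
            if (Math.abs(x - y) <= d) {
                hits++;
            }
        }
        return 2.0 * Math.PI * sigma * sigma * hits / n;
    }

    public static void main(String[] args) {
        // Should land near r_1 = r_2 ~ 7.125 (EDIT 2), up to Monte Carlo noise.
        System.out.println(estimate(1.2, 2.8, 1.6, 1.9, 1_000_000, 42L));
    }
}
```

With $10^6$ samples the standard error of the estimate is on the order of $10^{-2}$, so it agrees with the tabulated values to about two decimal places.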

  • 0
    Thanks a lot, Mr. Shai Covo. Can you please explain how you obtained the above expression from the iterated integral? The first exponential is a function of $x$ whereas the second is a function of $y$; did you notice that? (2011-05-30)
  • 0
    shaikh: Yes, I noticed that; I'll give the details soon. (2011-05-30)
  • 0
    @Covo: it seems quite accurate. I am desperately waiting for the derivation... :) (2011-05-30)
  • 0
    @Covo: Thanks a lot for your help. I could not follow the third step of the proof, which uses the law of total probability. How can $$I = 2\pi \sigma ^2 \int_{ - \infty }^\infty {{\rm P}( - d \le X - y \le d)f_Y (y)\,dy} $$ be transformed into $$ I = 2\pi \sigma ^2 {\rm P}( - d \le X - Y \le d)?$$ (2011-05-30)
  • 0
    shaikh: By the law of total probability, conditioning on $Y$, you get ${\rm P}( - d \le X - Y \le d) = \int_{ - \infty }^\infty {{\rm P}( - d \le X - Y \le d|Y = y)f_Y (y)\,dy} $. Now note that $X$ and $Y$ are independent, to conclude the result. (2011-05-30)
  • 0
    Covo: Sorry for asking again and again. I am unable to reproduce the approximate answer that you obtained above using those parameter values. Do I need to evaluate the standard normal distribution function with an integral, as shown below? $$\frac{1}{\sqrt{2\pi}}\int_0^{\frac{d-(\mu _1 - \mu _2)}{\sqrt {2\sigma ^2 }}} e^{-x^2/2}\,dx$$ (2011-05-30)
  • 0
    shaikh: The answer is given in terms of the distribution function $\Phi$ of the ${\rm N}(0,1)$ distribution, that is, $\Phi(x)=\frac{1}{{\sqrt {2\pi } }}\int_{ - \infty }^x {e^{ - u^2 /2} \,du}$. Alternatively, you can use the error function (erf), but this is essentially the same. This is the simplest form you can get. (2011-05-30)
  • 0
    And as for the integral in your last comment, note that $\Phi \big(\frac{{d - (\mu _1 - \mu _2 )}}{{\sqrt {2\sigma ^2 } }}\big) = \frac{1}{{\sqrt {2\pi } }}\int_{ - \infty }^{\frac{{d - (\mu _1 - \mu _2 )}}{{\sqrt {2\sigma ^2 } }}} {e^{ - x^2 /2}\, dx} $: the lower limit of the integral is $-\infty$, not $0$. Again, I stress that you cannot get a simpler solution (although you can approximate the function $\Phi$). (2011-05-30)
  • 0
    @Covo: Although I have tried hard to understand the law of total probability, I could not. Can you please point me to a source from which I can learn this law, so that I can follow the above solution? Thanks. (2011-05-31)
  • 1
    @shaikh: Later today (but not soon), I will add some details concerning the law of total probability. (2011-05-31)
1

In response to OP's comment, I elaborate here on the equality $$ {\rm P}( - d \le X - Y \le d) = \int_{ - \infty }^\infty {{\rm P}( - d \le X - y \le d)f_Y (y)\,dy}. $$ A particular case of the law of total probability allows us to compute the probability of an event by conditioning on a suitable random variable, as follows: $$ {\rm P}(A) = \int_{ - \infty }^\infty {{\rm P}(A|Z = z)f_Z (z)\,dz} , $$ where $f_Z$ is the probability density function of $Z$. (See also p. 1 here; the law of total probability is a special case of the law of total expectation.) Now, letting $A$ be the event that $-d \leq X-Y \leq d$ and letting $Z=Y$, the above equation yields $$ {\rm P}( - d \le X - Y \le d) = \int_{ - \infty }^\infty {{\rm P}(- d \le X - Y \le d|Y = y)f_Y (y)\,dy}. $$ Since $X$ and $Y$ are independent, $$ {\rm P}( - d \le X - Y \le d|Y = y) = {\rm P}( - d \le X - y \le d). $$ The desired equality is thus established.
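This identity can also be verified numerically: the left-hand side follows from $X-Y \sim {\rm N}(\mu_1-\mu_2, 2\sigma^2)$, while the right-hand side can be approximated by a midpoint Riemann sum over $y$, using ${\rm P}(-d \le X - y \le d) = \Phi\big(\frac{y+d-\mu_1}{\sigma}\big) - \Phi\big(\frac{y-d-\mu_1}{\sigma}\big)$. The sketch below is mine (grid size and the truncation at $\mu_2 \pm 10\sigma$ are my choices; $\Phi$ again uses the A&S 7.1.26 erf approximation):

```java
// Sketch: numerical check of
//   P(-d <= X - Y <= d) = \int P(-d <= X - y <= d) f_Y(y) dy.
public class TotalProbabilityCheck {

    // Error function (Abramowitz & Stegun 7.1.26 approximation).
    static double erf(double x) {
        double sign = Math.signum(x);
        double ax = Math.abs(x);
        double t = 1.0 / (1.0 + 0.3275911 * ax);
        double poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                - 0.284496736) * t + 0.254829592) * t;
        return sign * (1.0 - poly * Math.exp(-ax * ax));
    }

    static double phi(double x) {
        return 0.5 * (1.0 + erf(x / Math.sqrt(2.0)));
    }

    // Left-hand side, using X - Y ~ N(mu1 - mu2, 2 sigma^2).
    public static double lhs(double d, double mu1, double mu2, double sigma) {
        double s = Math.sqrt(2.0 * sigma * sigma);
        double m = mu1 - mu2;
        return phi((d - m) / s) - phi((-d - m) / s);
    }

    // Right-hand side: midpoint rule over y in [mu2 - 10 sigma, mu2 + 10 sigma].
    public static double rhs(double d, double mu1, double mu2, double sigma) {
        int n = 20000;
        double a = mu2 - 10.0 * sigma, b = mu2 + 10.0 * sigma;
        double h = (b - a) / n, sum = 0.0;
        for (int i = 0; i < n; i++) {
            double y = a + (i + 0.5) * h;
            double inner = phi((y + d - mu1) / sigma) - phi((y - d - mu1) / sigma);
            double fY = Math.exp(-(y - mu2) * (y - mu2) / (2.0 * sigma * sigma))
                    / (Math.sqrt(2.0 * Math.PI) * sigma);
            sum += inner * fY * h;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(lhs(1.2, 2.8, 1.6, 1.9)); // both ~ 0.314
        System.out.println(rhs(1.2, 2.8, 1.6, 1.9));
    }
}
```

The two sides agree to within the quadrature and erf approximation errors.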

  • 0
    @Covo: I must say, great. Many thanks for your time and effort. I get it now. Again, thanks for providing all these explanations. (2011-06-01)
  • 0
    @shaikh: Thanks, glad to help! (2011-06-01)
  • 0
    @Covo: How did you find the approximate numerical results for the iterated integral in EDIT 2 of your first answer? Which approximation did you use? (2011-06-01)
  • 0
    @shaikh: I programmed the approximations in Java. If we define $\varphi(y)=\varphi(y;d,\mu_1,\sigma)$ by $\varphi (y) = \int_{y - d}^{y + d} {\exp \big( - \frac{{(x - \mu _1 )^2 }}{{2\sigma ^2 }}\big)dx} $, then the iterated integral becomes $\int_{ - \infty }^\infty {\varphi (y)\exp \big( - \frac{{(y - \mu _2 )^2 }}{{2\sigma ^2 }}\big)dy} $. So I programmed a function that returns an approximate value for $\varphi(y)$, thus reducing the problem to approximating an ordinary improper integral from $-\infty$ to $\infty$; as the limits of integration I took $\mp 100$ (instead of $\mp \infty$). (2011-06-01)
  • 0
    @Covo: Does any predefined library exist in Java for such an approximation, or did you use some approximation of the error or normal distribution function to get the value of $\varphi(y)$? It seems you are a genius. :) (2011-06-01)
  • 0
    I used a standard approximation technique. I'll give the details soon. (2011-06-01)
  • 1
    public static double getPhi(double y, double d, double mu1, double sigma){ double s=0; int N=500; double h=(2*d)/N; double x=0; for(int i=0; i<N; i++){ x=(y-d)+i*h; s += Math.exp(-(x-mu1)*(x-mu1)/(2*sigma*sigma))*h; } return s; } (2011-06-01)
  • 0
    @shaikh: Above is the Java code I used to obtain the approximation for $\varphi(y)$. Roughly speaking, the larger N is, the more accurate the approximation. Further note that h above corresponds to $dx$ (the larger N is, the smaller h is). (2011-06-01)
  • 0
    @Covo: Thank you, sir. I understand your approach now. May God bless you for all this help. (2011-06-01)
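The recipe described in the comments above can be assembled into one self-contained sketch: approximate the inner integral $\varphi(y)$ by a Riemann sum, then integrate the outer integrand over $[-100, 100]$ as the answerer describes. The class name and grid sizes below are my choices, and I use midpoint sums rather than the left-endpoint sum quoted in the comment:

```java
// Sketch: full numerical evaluation of the iterated integral.
public class IteratedIntegral {

    // Inner integral phi(y) = \int_{y-d}^{y+d} exp(-(x - mu1)^2 / (2 sigma^2)) dx,
    // approximated by a midpoint Riemann sum with N subintervals.
    public static double getPhi(double y, double d, double mu1, double sigma) {
        int N = 500;
        double h = (2.0 * d) / N, s = 0.0;
        for (int i = 0; i < N; i++) {
            double x = (y - d) + (i + 0.5) * h;
            s += Math.exp(-(x - mu1) * (x - mu1) / (2.0 * sigma * sigma)) * h;
        }
        return s;
    }

    // Outer integral over y, truncated to [-100, 100] as in the comments.
    public static double iterated(double d, double mu1, double mu2, double sigma) {
        int N = 20000;
        double a = -100.0, b = 100.0;
        double h = (b - a) / N, s = 0.0;
        for (int i = 0; i < N; i++) {
            double y = a + (i + 0.5) * h;
            s += getPhi(y, d, mu1, sigma)
                    * Math.exp(-(y - mu2) * (y - mu2) / (2.0 * sigma * sigma)) * h;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(iterated(1.2, 2.8, 1.6, 1.9)); // ~ r_1 = 7.125
    }
}
```

For the parameter sets in EDIT 2 this reproduces the reported $r_1$ values to several decimal places.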