2

Given values of $d$, $p$, and $\sigma$, is it possible to calculate the value of $\mu$?

$$1-\frac{1}{2\pi\sigma^2}\int_{-\infty}^{\infty}\int_{y-d}^{y+d}\exp\big(-{x^2}/{2\sigma^2}\big) \exp\big(-{(y-\mu)^2}/{2\sigma^2}\big) \,\mathrm{d}x\,\mathrm{d}y < p$$

  • 0
    Presumably there is more than one value of $\mu$, since you have an inequality as opposed to an equation... (2011-07-17)
  • 0
    Yes, I am looking for exactly one value of $\mu$ satisfying the above inequality. Since $\mu$ is constant with respect to the integration variables, if it were possible to separate $\mu$ from the integral, the above inequality could be solved easily. (2011-07-17)

3 Answers

2

It can be easily shown (using the law of total probability)* that $$ \frac{1}{{2\pi \sigma ^2 }}\int_{ - \infty }^\infty {\int_{ y-d }^{y+d} {\exp \bigg( - \frac{{x^2 }}{{2\sigma ^2 }}\bigg)\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg) {\rm d}x} \,{\rm d}y} = \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg), $$ where $\Phi$ is the distribution function of the ${\rm N}(0,1)$ distribution. Noting that the right-hand side is maximized when $\mu = 0$ (indeed, consider the integral of the ${\rm N}(0,1)$ pdf over the fixed length interval $[\frac{{\mu - d}}{{\sqrt {2\sigma ^2 } }},\frac{{\mu + d}}{{\sqrt {2\sigma ^2 } }}]$), it follows that a necessary condition for your inequality to hold is $$ \Phi \bigg(\frac{{d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{-d}}{{\sqrt {2\sigma^ 2} }}\bigg) > 1 - p. $$ On the other hand, if this condition is satisfied, then your inequality holds with $\mu=0$.

To summarize: The inequality holds for some $\mu \in \mathbb{R}$ if and only if it holds for $\mu=0$; the inequality for $\mu = 0$ is equivalent to $$ \Phi \bigg(\frac{{d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{-d}}{{\sqrt {2\sigma^ 2} }}\bigg) > 1 - p. $$
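
A minimal Mathematica sketch of this check (the values d = 1, si = 1, standing in for $\sigma$, and p = 0.6 are arbitrary placeholders):

(* does the necessary-and-sufficient condition hold for these example values of d, sigma, p? *)
With[{d = 1., si = 1., p = 0.6},
 CDF[NormalDistribution[0, 1], d/Sqrt[2 si^2]] -
   CDF[NormalDistribution[0, 1], -d/Sqrt[2 si^2]] > 1 - p]
(* True means the inequality holds for mu = 0, and hence for some mu *)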

EDIT (in view of your comment below Sasha's answer): Assume that the necessary condition above is satisfied. The function $f$ defined by $$ f(\mu ) = \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg) $$ is decreasing in $\mu \in [0,\infty)$, with $f(\mu) \to 0$ as $\mu \to \infty$. By our assumption, $f(0) > 1-p$. So if you are interested in a $\mu > 0$ such that $f(\mu) \approx 1-p$, you need to find $\mu_1,\mu_2 > 0$ such that $f(\mu_1) > 1- p$ and $f(\mu_2) < 1-p$, and $f(\mu_1) - f(\mu_2) \approx 0$. Then, for any $\mu \in (\mu_1,\mu_2)$, $f(\mu) \approx 1-p$.
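
For illustration, here is a sketch of this bracketing idea in Mathematica (the names f and findMu are mine; the example values d = 1, si = 1, p = 0.6, the initial bracket, and the tolerance 10^-8 are arbitrary):

f[mu_?NumericQ, d_?NumericQ, si_?NumericQ] :=
 CDF[NormalDistribution[0, 1], (mu + d)/Sqrt[2 si^2]] -
  CDF[NormalDistribution[0, 1], (mu - d)/Sqrt[2 si^2]]

(* bisection: f is decreasing on [0, Infinity), and we assume f[0] > 1 - p *)
findMu[d_?NumericQ, si_?NumericQ, p_?NumericQ] :=
 Module[{mu1 = 0., mu2 = 1., mid},
  While[f[mu2, d, si] > 1 - p, mu2 *= 2];  (* enlarge mu2 until f[mu2] <= 1 - p *)
  While[mu2 - mu1 > 10^-8,
   mid = (mu1 + mu2)/2;
   If[f[mid, d, si] > 1 - p, mu1 = mid, mu2 = mid]];
  (mu1 + mu2)/2]

findMu[1, 1, 0.6]  (* some mu > 0 with f[mu] very close to 1 - p = 0.4 *)

Any other root finder would serve equally well; the sketch just mirrors the $\mu_1$/$\mu_2$ argument above.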

* EDIT: Derivation of the first equation above. Denote the left-hand side of that equation by $I$. First write $I$ as $$ I = \int_{ - \infty }^\infty {\bigg[\int_{y - d}^{y + d} {\frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{x^2 }}{{2\sigma ^2 }}\bigg){\rm d}x} \bigg]\frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg){\rm d}y} . $$ Then $$ I = \int_{ - \infty }^\infty {{\rm P}( - d \le X - y \le d)\frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg){\rm d}y} , $$ where $X$ is a ${\rm N}(0,\sigma^2)$ random variable. If $Y$ is a ${\rm N}(\mu,\sigma^2)$ random variable independent of $X$, then, by the law of total probability, $$ {\rm P}( - d \le X - Y \le d) = \int_{ - \infty }^\infty {{\rm P}( - d \le X - Y \le d|Y = y)f_Y (y)\,{\rm d}y} = I, $$ where $f_Y$ is the pdf of $Y$, given by $$ f_Y (y) = \frac{1}{{\sqrt {2\pi \sigma ^2 } }}\exp \bigg( - \frac{{(y - \mu )^2 }}{{2\sigma ^2 }}\bigg), $$ and where for the last equality ($\int_{ - \infty }^\infty \cdot =I$) we also used the independence of $X$ and $Y$. Now, $X-Y \sim {\rm N}(-\mu,2\sigma^2)$; hence $$ \frac{{(X - Y) - ( - \mu )}}{{\sqrt {2\sigma ^2 } }} \sim {\rm N}(0,1), $$ and, in turn, $$ I = {\rm P}\bigg(\frac{{ - d - ( - \mu )}}{{\sqrt {2\sigma ^2 } }} \le Z \le \frac{{d - ( - \mu )}}{{\sqrt {2\sigma ^2 } }}\bigg) = {\rm P}\bigg(\frac{{\mu - d}}{{\sqrt {2\sigma ^2 } }} \le Z \le \frac{{\mu + d}}{{\sqrt {2\sigma ^2 } }}\bigg), $$ where $Z \sim {\rm N}(0,1)$. Thus, finally, $$ I = \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg). $$
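
As a quick sanity check of this derivation, one can compare a Monte Carlo estimate of ${\rm P}(-d \le X - Y \le d)$ with the closed form (a sketch; the values mu = 1.5, d = 1, si = 2 and the sample size n are chosen arbitrarily):

With[{mu = 1.5, d = 1., si = 2., n = 10^6},
 Module[{x, y, mc, exact},
  x = RandomVariate[NormalDistribution[0, si], n];   (* X ~ N(0, sigma^2)  *)
  y = RandomVariate[NormalDistribution[mu, si], n];  (* Y ~ N(mu, sigma^2) *)
  mc = N[Count[x - y, z_ /; -d <= z <= d]/n];        (* Monte Carlo estimate of I *)
  exact = CDF[NormalDistribution[0, 1], (mu + d)/Sqrt[2 si^2]] -
    CDF[NormalDistribution[0, 1], (mu - d)/Sqrt[2 si^2]];
  {mc, exact}]]  (* the two numbers should agree to about three decimal places *)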

  • 0
    @Covo: But I am interested in the value of $\mu$ for which the inequality holds. (2011-07-17)
  • 0
    shaikh: See the edit (beginning from "To summarize:"); it suffices to consider the case $\mu=0$. (2011-07-17)
  • 0
    @Covo: How to find $\mu_1$ and $\mu_2$? (2011-07-17)
  • 0
    shaikh: From a practical point of view, you can use a Normal distribution function calculator, for example http://davidmlane.com/hyperstat/z_table.html (2011-07-17)
  • 0
    @Covo: Since I would like to use the value of $\mu$ for further calculations, using a table would not be a practical solution. The only possible way is to test several values of $\mu$ to find the desired value such that $f(\mu) \approx 1-p$. Am I right? (2011-07-17)
  • 1
    shaikh: You can find extremely accurate approximations to the solution $\mu$ of $f(\mu)=1-p$ using certain numerical tools available online. I'll give details later on (but not very soon). (2011-07-17)
  • 0
    @Covo: Thanks for your derivation. I am waiting for the approximation of the solution of $f(\mu)=1-p$. (2011-07-18)
  • 0
    The approximation of the solution $\mu$ of $f(\mu)=1-p$ is treated in the new answer. (2011-07-18)
3

Your double integral can be evaluated in closed form. This is done by evaluating the $x$ integral first, differentiating with respect to $d$ (remembering that the integral vanishes at $d=0$), then carrying out the $y$ integration, and finally integrating with respect to $d$ from $0$ to $d$. Using Mathematica:

In[19]:= 1 - Integrate[
   Integrate[
     D[1/(2 Pi si^2) Integrate[
         Exp[-x^2/(2 si^2)] Exp[-(y - mu)^2/(2 si^2)],
         {x, y - dd, y + dd}], dd] // FullSimplify,
     {y, -Infinity, Infinity}, Assumptions -> si > 0],
   {dd, 0, d}]

Out[19]= 1 + 1/2 (-Erf[(d - mu)/(2 si)] - Erf[(d + mu)/(2 si)])

Hence your problem becomes $$1-\frac{1}{2} \left( \text{erf}\left( \frac{\mu+d}{2\sigma} \right) + \text{erf}\left( \frac{d-\mu}{2\sigma} \right) \right) < p.$$

From this inequality one may deduce the implied condition on $\mu$. Here is an example:

[Figure: plot of the left-hand side $1-\frac{1}{2}\left( \text{erf}\left( \frac{\mu+d}{2\sigma} \right) + \text{erf}\left( \frac{d-\mu}{2\sigma} \right) \right)$ as a function of $\mu$, together with the level $p$.]
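
A plot of this kind can be produced, for instance, as follows (a sketch; the values d = 2, si = 1 and p = 0.5 are arbitrary and need not match the original figure):

With[{d = 2, si = 1, p = 0.5},
 Plot[{1 - 1/2 (Erf[(mu + d)/(2 si)] + Erf[(d - mu)/(2 si)]), p},
  {mu, 0, 5}, AxesLabel -> {"mu", None},
  PlotLegends -> {"left-hand side", "p"}]]
(* the mu at which the two curves cross is the threshold value for the inequality *)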

  • 0
    Thanks a lot for your reply. I am interested in a positive value of $\mu$ for which the above inequality becomes true. In your graph, this is the point where $p$ and the error-function curve intersect, i.e. approximately $\mu = 1.8$. Is it possible to find this value by solving the inequality algebraically? (2011-07-17)
  • 3
    Since you are interested in the intersection point, you really mean to solve an equality rather than an inequality. It cannot be solved algebraically, I am afraid. (2011-07-17)
3

Let $I$, as in my first answer, denote the iterated integral (including the factor $\frac{1}{{2\pi \sigma ^2 }}$). Using probabilistic arguments, I obtained $$ I = \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg), $$ where $\Phi$ is the distribution function of the ${\rm N}(0,1)$ distribution. On the other hand, using Mathematica, Sasha obtained $$ I = \frac{1}{2}\bigg({\rm erf}\bigg(\frac{{\mu + d}}{{2\sigma }}\bigg) + {\rm erf}\bigg(\frac{{d - \mu }}{{2\sigma }}\bigg)\bigg). $$ Here ${\rm erf}$ is the error function, defined by $$ {\rm erf}(x) = \frac{2}{{\sqrt \pi }}\int_0^x {e^{ - t^2 } \,dt} , \;\; x \in \mathbb{R}. $$ (Note that ${\rm erf}(-x)=-{\rm erf}(x)$.) So, let's show that indeed $$ \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg) = \frac{1}{2}\bigg({\rm erf}\bigg(\frac{{\mu + d}}{{2\sigma }}\bigg) + {\rm erf}\bigg(\frac{{d - \mu }}{{2\sigma }}\bigg)\bigg). $$ From the standard relation $$ \Phi (x) = \frac{1}{2}\bigg[1 + {\rm erf}\bigg(\frac{x}{{\sqrt 2 }}\bigg)\bigg], $$ we get $$ \Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg) = \frac{1}{2}\bigg({\rm erf}\bigg(\frac{{\mu + d}}{{2\sigma }}\bigg) - {\rm erf}\bigg(\frac{{\mu - d}}{{2\sigma }}\bigg)\bigg), $$ and hence the desired equality follows from $$ {\rm erf}\bigg(\frac{{d - \mu }}{{2\sigma }}\bigg) = - {\rm erf}\bigg(\frac{{\mu - d}}{{2\sigma }}\bigg). $$ Now, as in my first answer, define a function $f$ by $$ f(\mu):=\Phi \bigg(\frac{{\mu + d}}{{\sqrt {2\sigma^ 2} }}\bigg) - \Phi \bigg(\frac{{\mu - d}}{{\sqrt {2\sigma^ 2} }}\bigg) = \frac{1}{2}\bigg({\rm erf}\bigg(\frac{{\mu + d}}{{2\sigma }}\bigg) - {\rm erf}\bigg(\frac{{\mu - d}}{{2\sigma }}\bigg)\bigg). $$ Recall that $f$ is decreasing in $\mu \in [0,\infty)$, with $f(\mu) \to 0$ as $\mu \to \infty$. So if $f(0) > 1-p$, there exists a solution $\mu > 0$ to $f(\mu)=1-p$. You can find an extremely accurate approximation to $\mu$ using, for example, Wolfram Alpha (based on the representation using the error function).
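
Concretely, this can be done in Mathematica with FindRoot applied to the error-function representation (a sketch; the values d = 1, si = 1, p = 0.6 and the starting point mu = 1 are arbitrary, and a positive solution exists only when $f(0) > 1-p$):

With[{d = 1., si = 1., p = 0.6},
 FindRoot[1/2 (Erf[(mu + d)/(2 si)] + Erf[(d - mu)/(2 si)]) == 1 - p, {mu, 1}]]
(* returns a rule {mu -> ...} giving the positive solution of f(mu) = 1 - p *)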

  • 0
    @Covo: That's great, but Wolfram Alpha is again a graphical solution, and I think there is no direct algebraic solution for an approximate value of $\mu$. In order to use it in my program, I need to test several values of $\mu$ before arriving at a final approximate value. Thanks a lot for your help :) (2011-07-18)
  • 1
    shaikh: Indeed, this is a problem for numerical analysis. The bisection method might be relevant: http://en.wikipedia.org/wiki/Bisection_method (2011-07-18)