7

For two Gaussian-distributed variables, $\Pr(X=x) = \frac{1}{\sqrt{2\pi}\sigma_0}e^{-\frac{(x-x_0)^2}{2\sigma_0^2}}$ and $\Pr(Y=y) = \frac{1}{\sqrt{2\pi}\sigma_1}e^{-\frac{(y-x_1)^2}{2\sigma_1^2}}$. What is the probability that $X > Y$?

  • 1
    What do you know about $X-Y$? (2012-08-03)
  • 0
    If $X > Y$, what can you say about $X-Y$? (2012-08-03)
  • 3
    Are $X$ and $Y$ independent? And I don't agree when you write $Pr(X=x)=\dots$: the probability that a Gaussian random variable takes a particular value is $0$ (but we can write $P(X\in A)=\int_A$ of the function you wrote). (2012-08-03)
  • 0
    Yes, do we have a name for this? (2012-08-03)
  • 0
    A name for *what*? (2012-08-03)
  • 0
    @Strin: [Probability density function](http://en.wikipedia.org/wiki/Probability_density_function)? (2012-08-03)
  • 0
    @DavideGiraudo: You are quite right, but this is a common abuse of notation. (2012-08-03)
  • 0
    @Nate: A common abuse of notation, to write $P(X=x)=f(x)$ to mean that $f$ is the density of $P_X$ with respect to Lebesgue measure? If you wish to indicate that we should not pay attention, I disagree. But this is an all-too-common **error**, yes. (2012-08-03)
  • 0
    @did: Many otherwise reputable textbooks use this notation deliberately. I personally don't like it either since it is, as you say, also a common error. But I just wanted to point out that someone who writes $P(X=x)$ for a density is not *necessarily* confused. (2012-08-03)
  • 0
    @Nate: *Many otherwise reputable textbooks use this notation deliberately*... OK. (But not in the part of the world where I live.) Any examples? (2012-08-03)

2 Answers

7

Suppose $X$ and $Y$ are jointly normal; independence is not needed. Define $Z = X - Y$. It is well known that $Z$ is then Gaussian, hence determined by its mean $\mu$ and variance $\sigma^2$: $$ \mu = \mathbb{E}(Z) = \mathbb{E}(X) - \mathbb{E}(Y) = \mu_1 - \mu_2 $$ $$ \sigma^2 = \operatorname{Var}(Z) = \operatorname{Var}(X) + \operatorname{Var}(Y) - 2 \operatorname{Cov}(X,Y) = \sigma_1^2 + \sigma_2^2 - 2 \rho \sigma_1 \sigma_2, $$ where $\rho$ is the correlation coefficient of $X$ and $Y$. Now: $$ \mathbb{P}(X>Y) = \mathbb{P}(Z>0) = 1- \Phi\left(-\frac{\mu}{\sigma}\right) = \Phi\left(\frac{\mu}{\sigma}\right) = \frac{1}{2} \operatorname{erfc}\left(-\frac{\mu}{\sqrt{2}\,\sigma}\right), $$ where $\Phi$ is the standard normal CDF.
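
As a quick numerical sanity check of the closed form above, here is a minimal Python sketch (the means, standard deviations, and the correlation $\rho=0.3$ are arbitrary illustrative values, and `scipy` is assumed to be available):

```python
import numpy as np
from scipy.stats import norm

# Arbitrary illustrative parameters for jointly normal (X, Y)
mu1, mu2 = 1.0, 0.5          # means of X and Y
s1, s2 = 2.0, 1.5            # standard deviations of X and Y
rho = 0.3                    # correlation coefficient

# Closed form: P(X > Y) = Phi(mu / sigma) with Z = X - Y
mu = mu1 - mu2
sigma = np.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)
closed_form = norm.cdf(mu / sigma)

# Monte Carlo check: sample (X, Y) from the bivariate normal
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
rng = np.random.default_rng(0)
x, y = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000).T
monte_carlo = np.mean(x > y)

print(closed_form, monte_carlo)  # the two values should agree to ~3 decimals
```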

  • 2
    I believe the $\sqrt{2}$ should not appear in the standard CDFs, at the last line. (2018-06-03)
2

I assume that $X$ and $Y$ are independent, with respective means $x_0$ and $y_0$ (so $y_0$ here plays the role of $x_1$ in the question). Let $Z=X-Y$; then $Z\sim\mathcal{N}(x_0-y_0,\,\sigma_0^2+\sigma_1^2)$. Accordingly,

$$P(Z>0)=\int_0^\infty\frac{1}{\sqrt{2\pi(\sigma_0^2+\sigma_1^2)}}\exp\left(\frac{-(z-x_0+y_0)^2}{2(\sigma_0^2+\sigma_1^2)}\right)\mathrm{d}z$$

Using the complementary error function $$\operatorname{erfc}(x)=\frac{2}{\sqrt\pi}\int_x^\infty e^{-t^2}\,dt$$ and the substitution $t=\frac{z-x_0+y_0}{\sqrt{2(\sigma_0^2+\sigma_1^2)}}$, so that $\mathrm{d}z=\sqrt{2(\sigma_0^2+\sigma_1^2)}\,\mathrm{d}t$, we get $$P(Z>0)=\frac{1}{\sqrt{2\pi(\sigma_0^2+\sigma_1^2)}}\int_{t=\frac{y_0-x_0}{\sqrt{2(\sigma_0^2+\sigma_1^2)}}}^\infty e^{-t^2}\sqrt{2(\sigma_0^2+\sigma_1^2)}\,\mathrm{d}t,$$ and finally $$P(Z>0)=\frac{1}{2}\operatorname{erfc}\left(\frac{y_0-x_0}{\sqrt{2(\sigma_0^2+\sigma_1^2)}}\right).$$
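
To make the result concrete, here is a small Python sketch (the parameter values are arbitrary illustrative choices, and `scipy` is assumed) comparing the erfc expression with a direct evaluation of $P(Z>0)$ via the normal CDF:

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

# Arbitrary illustrative parameters: X ~ N(x0, s0^2), Y ~ N(y0, s1^2), independent
x0, y0 = 1.0, 0.2
s0, s1 = 0.8, 1.3

s = np.sqrt(s0**2 + s1**2)  # standard deviation of Z = X - Y
via_erfc = 0.5 * erfc((y0 - x0) / (np.sqrt(2) * s))
via_cdf = 1.0 - norm.cdf(0.0, loc=x0 - y0, scale=s)  # P(Z > 0) directly

print(via_erfc, via_cdf)   # both print the same probability
print(0.5 * erfc(0.0))     # sanity check: equals 1/2 when x0 == y0
```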

  • 3
    Which has the odd feature of not being $1/2$ when $x_0=y_0$. I suggest reviewing this answer, especially the change of variable. (2012-08-03)
  • 0
    Ok, I found the mistake. Updating. (2012-08-03)
  • 0
    Well, there still seems to be something wrong. The odd feature noted by @did still persists: the probability is not $1/2$ when $x_0 = y_0$ and, worse yet, the right side is negative when $x_0 > y_0$ and so definitely cannot be a probability. Did you mean to write erfc instead of erf in your final answer? (2012-08-03)
  • 0
    Yes, it has to be erfc, of course. It is obvious in the text. The proof is correct. (2012-08-03)