
Let $Y_1 < Y_2$ be the order statistics of a random sample of size $2$ from a normal distribution with mean $\mu$ and variance $\sigma^2$. Show that $P(Y_1 \lt \mu \lt Y_2) = \frac12$ and find $E(Y_1 - Y_2)$.

I am not exactly sure how to solve the question above. Any help would be appreciated. Thanks.

  • $P(Y_1\lt\mu\lt Y_2)=0$, not $\frac12$. (2011-11-13)
  • @Didier In other words, that's a typo. The "1" and the "2" got switched. (2011-11-13)
  • I've gone ahead and edited the question to fix the typo. (2011-11-13)
  • Here's an interesting point: this coincides exactly with the standard 50% confidence interval for a sample from a normal distribution (using Student's t-distribution in the usual way). Student's distribution with just one degree of freedom is actually the standard Cauchy distribution. I've seen this proposed as an example of how the precise definition of a confidence interval can be satisfied in cases where it would be absurd to be "50% confident" in any reasonable sense. I.e., the data themselves may contain evidence that... (2011-11-13)
  • icobes: keep in mind that there is a whole site, http://stats.stackexchange.com/, dedicated to statistics! (2011-11-13)
  • @Michael: No kidding? I never would have guessed. (2011-11-13)
  • ...the present case is almost certainly one of the 50% that do cover the population median, or almost certainly one of the 50% that do not. That happens when this interval is used as a 50% confidence interval for the median of the uniform distribution on the interval $(\theta-1/2,\theta+1/2)$. Fisher's technique of conditioning on an ancillary statistic handles the situation well; in that case the ancillary statistic is the distance between $X_1$ and $X_2$. Clearly that situation arises because we had knowledge of the scale. So here's a theoretical question: (2011-11-13)
  • For which families of distributions is this confidence interval for the median good without any need for ancillary information? My guess: nondegenerate location-scale families. Possibly no others. (2011-11-13)
  • @Michael: Your typo correction generated another one, albeit, by linearity, perhaps less serious. :) (2011-11-13)
  • @cardinal Yes, it occurred to me that the expected value being asked for is negative, and the one I find in my answer is positive. But of course if you know one it's easy to find the other. (2011-11-13)

2 Answers


Let $X_1,X_2$ be the i.i.d. sample; then $Y_2 =\max\{X_1,X_2\}$ and $Y_1=\min\{X_1,X_2\}$ (I'm regarding "${<}$" as a typo in the question, in view of the result to be proved).

Then either $Y_1 < Y_2 < \mu$, or $Y_1 < \mu < Y_2$, or $\mu < Y_1 < Y_2$ (ties occur with probability $0$, since the distribution is continuous).

The first happens if and only if both $X_1$ and $X_2$ are less than $\mu$; the second if and only if one (either one) is less than $\mu$ and the other greater; the third if and only if both are greater than $\mu$.

The probability that $X_1>\mu$ is $1/2$; similarly for $X_2$.

So the event $Y_1<\mu<Y_2$ is the event that exactly one of two independent trials is a "success" (i.e. exactly one of $X_1,X_2$ is less than $\mu$), and its probability is $\binom{2}{1}\left(\frac12\right)\left(\frac12\right)=\frac12$.
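As a quick sanity check (a simulation sketch of my own, with arbitrarily chosen parameters, not part of the original answer), one can estimate this probability by Monte Carlo:

```python
import random

random.seed(0)
mu, sigma = 5.0, 2.0           # arbitrary example parameters
n_trials = 200_000

hits = 0
for _ in range(n_trials):
    x1 = random.gauss(mu, sigma)
    x2 = random.gauss(mu, sigma)
    y1, y2 = min(x1, x2), max(x1, x2)
    if y1 < mu < y2:           # exactly one sample fell below mu
        hits += 1

p_hat = hits / n_trials        # should be close to 0.5
```

With 200,000 trials the standard error is about $0.001$, so the estimate lands very near $\frac12$.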

Now notice that $E(Y_2-Y_1) = E(|X_2-X_1|)$, and $X_1-X_2 \sim \mathcal{N}(\mu-\mu,\sigma^2+\sigma^2)=\mathcal{N}(0,2\sigma^2)$. So $E(|X_2-X_1|)= \sqrt{2}\sigma E\left(\dfrac{|X_2-X_1|}{\sqrt{2}\sigma}\right)$ and $Z=\dfrac{X_2-X_1}{\sqrt{2}\sigma}\sim\mathcal{N}(0,1)$. So we want $\sqrt{2}\sigma E(|Z|)$.

So $$ \begin{align} E(|Z|) & = \int_{-\infty}^\infty |z| \varphi(z)\;dz = 2\int_0^\infty z \varphi(z)\;dz = 2\int_0^\infty z \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \; dz \\ \\ & = \sqrt{\frac{2}{\pi}} \int_0^\infty ze^{-z^2/2} \; dz = \sqrt{\frac{2}{\pi}} \int_0^\infty e^{-u} \; du = \sqrt{\frac{2}{\pi}}. \end{align} $$

Multiplying that by $\sqrt{2}\;\sigma$, we get $\dfrac{2\sigma}{\sqrt{\pi}}$.
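The closed form $E(Y_2-Y_1)=\dfrac{2\sigma}{\sqrt{\pi}}$ can also be verified numerically; the sketch below (simulation parameters are my own choices) uses the identity $Y_2-Y_1=|X_2-X_1|$ from the answer:

```python
import math
import random

random.seed(1)
mu, sigma = 0.0, 3.0           # arbitrary example parameters
n_trials = 200_000

total = 0.0
for _ in range(n_trials):
    x1 = random.gauss(mu, sigma)
    x2 = random.gauss(mu, sigma)
    total += abs(x2 - x1)      # Y_2 - Y_1 = |X_2 - X_1|

estimate = total / n_trials
exact = 2 * sigma / math.sqrt(math.pi)   # = 6/sqrt(pi) for sigma = 3
```

The Monte Carlo average agrees with the exact value to a few hundredths, as expected for this sample size.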


The hypothesis should be $Y_1 < Y_2$.

Now the joint density is given by $f_{Y_1,Y_2}(y_1,y_2) = 2 f_{X}(y_1) f_{X}(y_2)$ for any $y_1,y_2$ such that $y_1 < y_2$, and $f_{Y_1,Y_2}(y_1,y_2) = 0$ otherwise, where $f_X$ is the density of $\mathcal{N}(\mu,\sigma^2)$.

So $P(Y_1 < \mu < Y_2) = \int_{D} f_{Y_1,Y_2}(y_1,y_2)\,dy_1\,dy_2$ on the domain $D=\left\{y_1 < \mu < y_2 \right\}$. Drawing this domain with pen and paper, I find: $$P(Y_1 < \mu < Y_2) = \int_{-\infty}^{\mu}\int_\mu^{+\infty}2 f_{X}(y_1) f_{X}(y_2)\,dy_2\,dy_1 = 2F_{X}(\mu)\bigl(1-F_{X}(\mu)\bigr)=\frac12$$ since $F_{X}(\mu) = \frac12$.
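As an illustrative numerical check (the grid and parameters below are my own choices, not part of the answer), a midpoint Riemann sum of the joint density over $D$ recovers the value $\frac12$:

```python
import math

mu, sigma = 0.0, 1.0           # standard normal, chosen for the example

def f(x):
    """Density of N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Midpoint Riemann sum of 2*f(y1)*f(y2) over y1 < mu < y2,
# truncating each half-line to 8 standard deviations.
n = 400
h = 8 * sigma / n
prob = 0.0
for i in range(n):
    y1 = mu - 8 * sigma + (i + 0.5) * h   # grid below mu
    for j in range(n):
        y2 = mu + (j + 0.5) * h           # grid above mu
        prob += 2 * f(y1) * f(y2) * h * h

# prob should be very close to 0.5
```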

$E(Y_1-Y_2) = E(Y_1) - E(Y_2) = 0$

Edit: Oops, I wrote this too fast! $Y_1$ and $Y_2$ don't have the same expectation.

  • Mmhhh, the solution to the second part of the question is very wrong. (2011-11-13)
  • I'm surprised at how complicated you make this. The event in question is simply the event that the number of successes in two independent trials is $1$, i.e. exactly one of $X_1,X_2$ is less than $\mu$. And the probability of success on each trial is $1/2$. (2011-11-13)