
Let's say I have two independent random samples $X_1, X_2, \dots, X_n$ and $Y_1, Y_2, \dots, Y_n$ from normal distributions with real, unknown means $\mu_x$ and $\mu_y$ and known standard deviations $\sigma_x$ and $\sigma_y$.

How would I go about deriving a $100(1 - \alpha)\%$ confidence interval for $\mu_x - \mu_y$? This is straightforward (in my mind) assuming the standard deviations are equal, but what if they are unequal?

  • I see: He did say they're known. (2012-08-16)

2 Answers


Alright, you say known variances. So it's an exercise on a point of theory, not a realistic problem.

And you implicitly assume the two sample sizes are equal (both are $n$).

Start by recalling something from the one-sample problem:
$$\bar{X} = \frac{X_1+\cdots+X_n}{n} \sim N\left(\mu_x,\frac{\sigma^2_x}{n}\right), \qquad \bar{Y} = \frac{Y_1+\cdots+Y_n}{n} \sim N\left(\mu_y,\frac{\sigma^2_y}{n}\right).$$
You don't explicitly state that the two samples are independent of each other. If they are, then we have
$$\bar X - \bar Y \sim N\left(\mu_x-\mu_y,\ \frac{\sigma^2_x+\sigma^2_y}{n}\right).$$
(If we had unequal sample sizes $n$ and $m$, then the variance would be $\dfrac{\sigma^2_x}{n}+\dfrac{\sigma^2_y}{m}$.)
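As a sanity check on that distributional claim, here is a minimal simulation sketch in Python (all parameter values here, $\mu_x = 2$, $\mu_y = 1$, $\sigma_x = 3$, $\sigma_y = 4$, $n = 50$, are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y = 2.0, 1.0        # illustrative means (unknown in the real problem)
sigma_x, sigma_y = 3.0, 4.0  # the known standard deviations (illustrative values)
n, reps = 50, 100_000

# Draw many independent pairs of samples and record the difference of means.
xbar = rng.normal(mu_x, sigma_x, size=(reps, n)).mean(axis=1)
ybar = rng.normal(mu_y, sigma_y, size=(reps, n)).mean(axis=1)
diff = xbar - ybar

# Theory: diff ~ N(mu_x - mu_y, (sigma_x^2 + sigma_y^2) / n)
print(diff.mean())  # should be close to 1.0  (= mu_x - mu_y)
print(diff.var())   # should be close to (9 + 16) / 50 = 0.5
```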

Standardizing,
$$\frac{(\bar X-\mu_x) - (\bar Y-\mu_y)}{\sqrt{\dfrac{\sigma^2_x+\sigma^2_y}{n}}} \sim N(0,1).$$
So the probability that
$$-A < \frac{(\bar X-\mu_x) - (\bar Y-\mu_y)}{\sqrt{\dfrac{\sigma^2_x+\sigma^2_y}{n}}} < A \tag{1}$$
equals the desired confidence level $1-\alpha$ when the number $A$ is suitably chosen; for the standard normal this means $A = z_{\alpha/2}$, the point satisfying $\Pr(Z > z_{\alpha/2}) = \alpha/2$. Now do a bit of algebra to rearrange the inequalities $(1)$:
$$\bar X - \bar Y - A\sqrt{\frac{\sigma^2_x+\sigma^2_y}{n}} < \mu_x-\mu_y < \bar X - \bar Y + A\sqrt{\frac{\sigma^2_x+\sigma^2_y}{n}}.$$
That's the confidence interval.
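To make this concrete, here is a minimal sketch in Python of the resulting interval. The function name and the sample values are mine for illustration; the only library call used is `scipy.stats.norm.ppf`, the standard normal quantile function, so $A$ is `norm.ppf(1 - alpha/2)`:

```python
import numpy as np
from scipy.stats import norm

def diff_of_means_ci(x, y, sigma_x, sigma_y, alpha=0.05):
    """CI for mu_x - mu_y with known sigmas and equal sample sizes."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)                    # assumes len(x) == len(y)
    A = norm.ppf(1 - alpha / 2)   # z_{alpha/2}, e.g. about 1.96 for alpha = 0.05
    half_width = A * np.sqrt((sigma_x**2 + sigma_y**2) / n)
    center = x.mean() - y.mean()
    return center - half_width, center + half_width

# Illustrative use with simulated data (the sigmas passed in are the known values):
rng = np.random.default_rng(1)
x = rng.normal(2.0, 3.0, size=50)
y = rng.normal(1.0, 4.0, size=50)
print(diff_of_means_ci(x, y, sigma_x=3.0, sigma_y=4.0))
```

For unequal sample sizes, the same sketch works with `half_width = A * np.sqrt(sigma_x**2 / len(x) + sigma_y**2 / len(y))`, matching the variance noted above.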

  • This was here for a few minutes without the factor of $A$ in two places in the last line. Now I hope it's correct. (2012-08-16)