Let's say I have two independent random samples $X_1, X_2, \dots, X_n$ and $Y_1, Y_2, \dots, Y_n$ from normal distributions with real, unknown means $\mu_x$ and $\mu_y$ and known standard deviations $\sigma_x$ and $\sigma_y$.
How would I go about deriving a $100(1 - \alpha)$% confidence interval for $\mu_x - \mu_y$? This is straightforward (in my mind) when the standard deviations are equal, but what if they are unequal?
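For concreteness, here is a sketch of the pivot I have in mind for the equal-variance case, with $z_{\alpha/2}$ denoting the upper $\alpha/2$ standard normal quantile. Since $\bar{X} - \bar{Y} \sim N\!\left(\mu_x - \mu_y,\, \tfrac{2\sigma^2}{n}\right)$ when $\sigma_x = \sigma_y = \sigma$ (known), the quantity
$$Z = \frac{(\bar{X} - \bar{Y}) - (\mu_x - \mu_y)}{\sigma\sqrt{2/n}} \sim N(0, 1)$$
is a pivot, and inverting $P(-z_{\alpha/2} \le Z \le z_{\alpha/2}) = 1 - \alpha$ gives the interval $(\bar{X} - \bar{Y}) \pm z_{\alpha/2}\,\sigma\sqrt{2/n}$. Is the unequal-variance case handled the same way, just with a different standard error?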