We measure the mercury content of the same food sample four times in a row. Each measurement is subject to a small error; the four measurements are:
$X_1 = 13, X_2 = 7, X_3 = 10, X_4=10$
Test at the 95% level (i.e. significance level 5%) whether there are more than 14 units of mercury in the sample. Assume the standard deviation of the measurement error is given and equal to 2.
Answer:
Let $\mu$ denote the true level of mercury. The model is $X_i = \mu + \alpha_i$, where $\alpha_i$ is the $i^{\text{th}}$ measurement error. We assume the errors are normally distributed with zero mean, i.e. $E[\alpha_i] = 0$, which implies
$E[X_i] = E[\mu + \alpha_i] = \mu + E[\alpha_i] = \mu$
The sample mean is our estimate for $\mu$:
$\hat \mu = \frac{X_1 + \dots + X_4}{4} = 10.$ This is less than the hypothesized value of 14, so let us compute the probability of observing a value this small under the hypothesis:
$P_{\mu = 14}\left(\frac{X_1 + \dots + X_4}{4} \le 10\right) = P\left(\frac{X_1 + \dots + X_4}{4} - 14 \le -4\right).$ Now let $Z = \frac{X_1 + \dots + X_4}{4}$; then $E[Z] = E[X_i] = \mu$. Also, $\frac{\sigma_{X_i}}{2} = 1$. Thus the probability is $P(N(0,1) \le -4)$, which is approximately zero, so we can reject the hypothesis.
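To check the arithmetic, I reproduced the quoted computation in Python (NumPy/SciPy are my own choice, not part of the exercise), plugging in the standard error of 1 exactly as the solution states it:

```python
import numpy as np
from scipy.stats import norm

x = np.array([13, 7, 10, 10])   # the four measurements
mu_0 = 14                        # hypothesized mercury level
se = 1                           # standard error as stated in the solution (the value I'm asking about)

x_bar = x.mean()                 # sample mean: 10.0
z = (x_bar - mu_0) / se          # standardized statistic: -4.0
p = norm.cdf(z)                  # P(N(0,1) <= -4) ~ 3.2e-05

print(x_bar, z, p)
```

This matches the quoted answer ($P(N(0,1) \le -4) \approx 3 \times 10^{-5}$), but it only confirms the numbers, not where the 1 comes from.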
My question is: how did they find $\frac{\sigma_{X_i}}{2} = 1$? Also, is there a simpler way to do this?