
The following is a problem in my book that I don't really understand:

We take a random sample: $x_1,x_2,\ldots,x_n$ from a population that is $N(\mu,\sigma)$, where $\mu$ and $\sigma$ are unknown.

We build two estimates:

$\mu^*_{\text{obs}} = \overline{x} = (x_1 + x_2 + \cdots + x_n)/n$

and

$\hat{\mu}^*_{\text{obs}} = (x_1+x_2)/2$

Show that both estimates are unbiased.

I know that an estimate of a sample mean is unbiased when we divide by $n-1$ instead of $n$. How come those two estimates are unbiased? In my eyes they are biased.

  • Regarding $n-1$, you are confusing it with the sample variance. (2012-12-13)

2 Answers


It follows by the linearity of expectation: $ E[\mu^*_{\text{obs}}]=\frac{1}{n}\left(E[x_1]+\cdots+E[x_n]\right)=\frac{1}{n}\left(\mu+\cdots+\mu\right)=\frac{1}{n}n\mu=\mu $ and hence $\mu^*_{\text{obs}}$ is unbiased for $\mu$. The same holds for $\hat{\mu}^*_{\text{obs}}$, either by a direct computation just as above, or by noting that it is in fact $\mu^*_{\text{obs}}$ based on a random sample of size $n=2$.

What you mention about dividing by $n-1$ instead of $n$ applies to the sample variance, i.e. $ s^2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2 $ is unbiased for $\sigma^2$ (I take it $N(\mu,\sigma)$ means that $\sigma$ is the standard deviation).
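For completeness, the claim about $s^2$ can be sketched in one line (a standard computation, using $E[x_j^2]=\sigma^2+\mu^2$ and $E[\bar{x}^2]=\sigma^2/n+\mu^2$):

$$E\left[\sum_{j=1}^n (x_j-\bar{x})^2\right]=\sum_{j=1}^n E[x_j^2]-nE[\bar{x}^2]=n(\sigma^2+\mu^2)-n\left(\frac{\sigma^2}{n}+\mu^2\right)=(n-1)\sigma^2,$$

so dividing by $n-1$ (rather than $n$) is exactly what makes $E[s^2]=\sigma^2$.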


Compute their expected values: for each $x_i$ we have $E(x_i)=\mu$, and $E(x_i+x_j)=E(x_i)+E(x_j)$ (expectation is linear).

Thus $E(\bar{x})=(\mu + \cdots + \mu)/n = n\mu/n = \mu$, so the first one is unbiased.

I don't see how you concluded that "an estimate of a sample mean is unbiased when we divide by $n−1$ instead of $n$." Perhaps you were thinking of the sample variance, $S^2=\frac{1}{n-1} \sum_{j=1}^n (x_j - \bar{x})^2$, which is an unbiased estimator for $\sigma^2$.

With this, the second one should also be clear: $E\big((x_1+x_2)/2\big)=\big(E(x_1)+E(x_2)\big)/2=(\mu+\mu)/2=\mu$.
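If it helps to see this numerically, here is a small Monte Carlo sketch (not from either answer; the parameter values are arbitrary) averaging both estimators over many simulated samples. Both averages land near $\mu$, illustrating that the two-point estimator is also unbiased, just with larger variance:

```python
import numpy as np

# Arbitrary illustration parameters (any mu, sigma > 0, n >= 2 would do).
mu, sigma, n, trials = 5.0, 2.0, 10, 200_000

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(trials, n))

full_mean = samples.mean(axis=1)         # (x_1 + ... + x_n)/n
two_point = samples[:, :2].mean(axis=1)  # (x_1 + x_2)/2

print(full_mean.mean())   # close to mu
print(two_point.mean())   # also close to mu: unbiased, though noisier
print(full_mean.var(), two_point.var())  # n-sample estimator has smaller variance
```

Unbiasedness says nothing about efficiency: both estimators are centered on $\mu$, but the variance of $(x_1+x_2)/2$ is $\sigma^2/2$ versus $\sigma^2/n$ for $\bar{x}$.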