3

I am having trouble understanding this explanation from Wikipedia:

"An estimator can be unbiased but not consistent. For example, for an iid sample $\{x _1,..., x_n\}$ one can use $T(X) = x_1$ as the estimator of the mean $E[x]$. This estimator is obviously unbiased, and obviously inconsistent."

Why is it unbiased, and why is it inconsistent?

Can someone explain this in more detail?

  • IMO the statement would be clearer if written "This estimator is obviously unbiased, but obviously inconsistent." (2013-01-17)

2 Answers

4

Suppose your sample was drawn from a distribution with mean $\mu$ and variance $\sigma^2$. Your estimator $\tilde{x}=x_1$ is unbiased, since $\mathbb{E}(\tilde{x})=\mathbb{E}(x_1)=\mu$: the expected value of the estimator equals the population mean. Your estimator is, on the other hand, inconsistent: $\tilde{x}$ is fixed at $x_1$ regardless of the sample size, so its distribution never concentrates around $\mu$ and it does not converge in probability to $\mu$.
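
A minimal simulation sketch of this (mine, not part of the original answer): it assumes a normal population with mean 5 and standard deviation 2, and compares $T(X)=x_1$ with the sample mean over repeated samples. The distribution, seed, and sample sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0   # population mean and sd (illustrative assumptions)
reps = 1000            # repeated samples, to approximate each estimator's distribution

for n in (5, 50, 5000):
    samples = rng.normal(mu, sigma, size=(reps, n))
    first_obs = samples[:, 0]          # T(X) = x_1
    sample_mean = samples.mean(axis=1) # consistent comparison estimator
    # Both estimators are centred at mu (unbiased), but only the sample
    # mean's spread shrinks with n; the sd of x_1 stays near sigma.
    print(f"n={n:>4}:  mean(x1)={first_obs.mean():.2f}, sd(x1)={first_obs.std():.2f} | "
          f"mean(xbar)={sample_mean.mean():.2f}, sd(xbar)={sample_mean.std():.3f}")
```

As $n$ grows, the sd of the sample mean shrinks like $\sigma/\sqrt{n}$, while the sd of $x_1$ stays put: its distribution never tightens around $\mu$.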

Perhaps an easier example would be the following. Let $\beta_n$ be an estimator of the parameter $\beta$, and suppose $\beta_n$ is both unbiased and consistent. Now let $u$ be distributed uniformly on $[-10,10]$, drawn once, independently of the sample. Consider the estimator $\alpha_n=\beta_n+u$. This estimator is unbiased, since $\mathbb{E}(u)=0$, but inconsistent, since $\alpha_n\xrightarrow{\mathbb{P}} \beta + u$ and $u$ is a random variable: $\alpha_n$ converges to the wrong target whenever the realized $u$ is nonzero.
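
A quick sketch of this second example (my illustration, not the answerer's): it assumes $\beta_n$ is the sample mean of iid $N(\beta, 1)$ data, which is a concrete stand-in for "an unbiased, consistent estimator"; the parameter value and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 3.0                # true parameter (illustrative)
u = rng.uniform(-10, 10)  # drawn ONCE, independently of the data; E[u] = 0

for n in (10, 1_000, 100_000):
    # stand-in for beta_n: the sample mean, which is unbiased and consistent
    beta_n = rng.normal(beta, 1.0, size=n).mean()
    alpha_n = beta_n + u  # unbiased over the joint randomness, but off-target
    print(f"n={n:>6}: alpha_n={alpha_n:.3f}  (beta + u = {beta + u:.3f}, beta = {beta})")
```

The printout shows $\alpha_n$ settling on $\beta + u$ rather than $\beta$: averaging over the draw of $u$ gives unbiasedness, but no amount of data removes the one-off random shift.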

  • I may be asking a trivial question, but it is what led me to this Q&A: why does the expected value of a known sample still equal the expected value of the whole population? Intuitively I would expect the expected value of a known sample to equal itself, e.g. $\mathbb{E}(0) = 0$. (2018-10-09)
3

My answer is a bit more informal, but it may help to think explicitly about the distribution of $x_1$ over repeated samples, with mean $\mu$ and variance, say, $\sigma^2$. $x_1$ is an unbiased estimator of the mean: $\mathrm{E}\left(x_1\right) = \mu$. Roughly speaking, for consistency you would, in addition, need the variance of your estimator to go to zero as the sample size increases. But that does not happen here: even asymptotically, $x_1$ has a distribution with non-zero variance, i.e. the distribution does not collapse into a single point. Intuitively, no matter how much your sample grows, no additional information is being used to estimate the population mean ($x_1$ still has variance $\sigma^2$ even as $n$ goes to infinity).
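
To make the variance argument concrete, here is the standard calculation (routine, and not part of the original answer):

$$\operatorname{Var}(x_1)=\sigma^2 \text{ for every } n, \qquad \operatorname{Var}(\bar{x}_n)=\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^n x_i\right)=\frac{\sigma^2}{n}\xrightarrow[n\to\infty]{}0.$$

By Chebyshev's inequality, $P(|\bar{x}_n-\mu|>\varepsilon)\le \sigma^2/(n\varepsilon^2)\to 0$ for every $\varepsilon>0$, so the sample mean is consistent. For $x_1$, by contrast, $P(|x_1-\mu|>\varepsilon)$ is the same fixed number for every $n$, and it is strictly positive whenever the distribution is not degenerate at $\mu$, so no convergence in probability to $\mu$ can occur.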