
Let $(\mathfrak{X}_n, (P_\vartheta)_{\vartheta \in \Theta})$ be a statistical model with $n$ samples. Then $\hat{\vartheta}_n: \mathfrak{X}_n \rightarrow \tilde{\Theta}$ is called an estimator for $\vartheta$.

  • $\hat{\vartheta}_n$ is called unbiased if $\mathbb{E}(\hat{\vartheta}_n) = \vartheta$.
  • $\hat{\vartheta}_n$ is called consistent if $\lim_{n \rightarrow \infty} P(|\hat{\vartheta}_n - \vartheta| > \varepsilon) = 0$ for all $\varepsilon > 0$.

There are estimators which are both unbiased and consistent:

Let $X_1, \dots, X_n \stackrel{iid}{\sim} Bin(1, \vartheta)$ with $\vartheta \in (0, 1)$. Then $\hat{\vartheta}_n = \frac{1}{n} \sum_{i=1}^n X_i$ is unbiased and consistent.
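A quick numerical sketch of both properties for the sample mean (using NumPy; the choice $\vartheta = 0.3$ is arbitrary and only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3  # true parameter (illustrative choice)

# Unbiasedness: averaging the estimator over many repeated samples
# should recover theta, even for small n.
n_small = 10
estimates = rng.binomial(1, theta, size=(100_000, n_small)).mean(axis=1)
print(estimates.mean())  # close to 0.3

# Consistency: for a single growing sample, the estimate approaches theta.
for n in (10, 1_000, 100_000):
    x = rng.binomial(1, theta, size=n)
    print(n, x.mean())
```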

There are estimators which are neither unbiased nor consistent: in the setting from before, the constant estimator $\hat{\vartheta}_n = 0.5$ is biased and inconsistent whenever $\vartheta \neq 0.5$.

But are there unbiased estimators which are not consistent? Are there consistent estimators which are not unbiased?

2 Answers


Yes to both. For the sequence of random variables you give:

$\hat{\vartheta}^\prime_n = X_n$ is unbiased ($\mathbb{E}(X_n) = \vartheta$) but clearly not consistent, since its distribution does not concentrate as $n$ grows. $\hat{\vartheta}^{\prime\prime}_n = \frac1n\sum_{i=1}^n X_i + \frac1n$ is biased (its expectation is $\vartheta + \frac1n$) but consistent.
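Both counterexamples can be checked by simulation (NumPy sketch; $\vartheta = 0.3$ is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3

# theta'_n = X_n: unbiased (E[X_n] = theta), but its variance stays at
# theta*(1-theta) no matter how large n is, so it never concentrates.
reps = rng.binomial(1, theta, size=200_000)
print(reps.mean())  # close to theta: unbiased
print(reps.var())   # close to theta*(1-theta): does not shrink with n

# theta''_n = sample mean + 1/n: biased by exactly 1/n, but the bias
# vanishes and the estimator still converges to theta.
for n in (10, 1_000, 100_000):
    x = rng.binomial(1, theta, size=n)
    print(n, x.mean() + 1 / n)
```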


Just to generalize the answer above a little:

  1. Any estimator that does not depend on the whole sample size $n$ is inconsistent. By "depend" I mean that the number of observations used to construct the estimator does not grow with $n$. In your case of $Bin(1, \theta)$ samples, any estimator of the kind $$ \hat{\theta}_k = \frac 1k \sum_{i=1}^k X_i, \quad k \text{ fixed}, $$ is unbiased but inconsistent (its variance $\theta(1-\theta)/k > 0$ does not vanish, even when $n\to \infty$).

  2. Unbiasedness for finite $n$ is not that important. A necessary (though not sufficient) condition for a consistent estimator is asymptotic unbiasedness, i.e., $$ \lim_{n\to \infty}\mathbb{E}\hat{\theta}_n=\theta. $$
    And when the estimator has a finite second moment, if in addition $$ \lim_{n\to \infty}\mathbb{E}(\hat{\theta}_n - \theta)^2 = 0, $$ then $\hat{\theta}_n \xrightarrow{p} \theta$ as $n\to \infty$. For instance, in the class of maximum likelihood estimators it is pretty common to encounter biased estimators. This is mostly negligible, as the bias converges to $0$ for $n \to \infty$.
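The fixed-$k$ estimator from point 1 can be simulated directly (NumPy sketch; $\theta = 0.3$ and $k = 5$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, k = 0.3, 5  # k stays fixed no matter how large n grows

# Sampling distribution of theta_hat_k over many repetitions:
# still centered at theta, but its variance stays at theta*(1-theta)/k.
est = rng.binomial(1, theta, size=(100_000, k)).mean(axis=1)
print(est.mean())  # close to theta: still unbiased
print(est.var())   # close to theta*(1-theta)/k = 0.042: does not vanish
```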

  • "Any estimator that does not depend on the whole sample size $n$ is inconsistent" seems strange to me. Let's say I ignore only the first sample, but take all other samples into account ($\hat{\vartheta} = \frac{1}{n-1} \sum_{i=2}^n X_i$ for large $n$). Why should that make any difference? 2017-02-18
  • What I meant is that $k$ is constant and does not change with $n$. Even ignoring $90\%$ of your sample is OK: if $k = 0.1n$, the estimator will still be consistent. 2017-02-18
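This last point is easy to check numerically: when $k$ grows with $n$ (here $k = 0.1n$, an illustrative choice), the truncated estimator still converges to $\theta$ (NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 0.3

# Using only a fixed fraction of the sample (k = 0.1 * n) still gives
# consistency, because k itself grows without bound as n does.
for n in (100, 10_000, 1_000_000):
    k = n // 10
    x = rng.binomial(1, theta, size=n)
    print(n, x[:k].mean())
```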