
In the middle of studying Maximum Likelihood Estimation, I came across another result called the Cramér–Rao Lower Bound, but I can't really see the relation between the Cramér–Rao Lower Bound and the MLE. I'd appreciate it if somebody could give me a non-mathematical explanation.

1 Answer


The Cramér–Rao lower bound gives a lower bound on the variance of estimators, i.e. it limits how precise estimators can be. The MLE is an estimator, so the natural question is this: Is the MLE the most precise estimator? Under some conditions, the answer is yes, you cannot do better than the MLE, as the MLE attains the Cramér–Rao lower bound, and thus no other estimator can be more precise.
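As a concrete sanity check (a small simulation sketch of my own, not part of the original answer): for a $N(\theta, \sigma^2)$ sample with $\sigma$ known, the MLE of $\theta$ is the sample mean, and its variance equals the Cramér–Rao bound $\sigma^2/n$ exactly, even for finite $n$.

```python
import random
import statistics

# Illustrative simulation (parameter values are arbitrary choices):
# for N(theta, sigma^2) with known sigma, the MLE of theta is the
# sample mean, whose variance equals the CRLB sigma^2 / n exactly.
random.seed(0)
theta, sigma, n, reps = 2.0, 1.5, 50, 20000

mles = []
for _ in range(reps):
    sample = [random.gauss(theta, sigma) for _ in range(n)]
    mles.append(statistics.fmean(sample))  # MLE = sample mean

emp_var = statistics.variance(mles)        # empirical variance of the MLE
crlb = sigma**2 / n                        # Fisher information is n / sigma^2
print(emp_var, crlb)                       # the two should be close
```

Here the bound is attained exactly at every sample size; in general the MLE only attains it asymptotically, as the theorem below makes precise.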

To be more precise, here is the theorem as stated in the lecture notes of A. van der Vaart, Theorem 4.21:

For each $\theta$ in an open subset of Euclidean space, let $x \mapsto p_\theta(x)$ be continuously differentiable for every $x$ and such that, for every $\theta_1$ and $\theta_2$ in a neighbourhood of $\theta_0$, $$|\log p_{\theta_1}(x) - \log p_{\theta_2}(x)| \leq \dot{l}(x)||\theta_1 - \theta_2||$$ for a measurable function $\dot{l}$ such that $\mathbb{E}_{\theta_0} \dot{l}^2 < \infty$. Assume that the information matrix $I_\theta = \mathbb{E} \dot{l}\dot{l}^T$ is continuous in $\theta$ and non-singular. Then the maximum likelihood estimator $\hat{\theta}_n$ based on a sample of size $n$ from $p_{\theta_0}$ satisfies that $\sqrt{n}(\hat{\theta}_n - \theta_0)$ is asymptotically normal with mean zero and covariance matrix $I_{\theta_0}^{-1}$, provided that $\hat{\theta}_n$ is consistent.

This result is usually summarised as follows: under regularity conditions the MLE is an optimal estimator.
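To see the asymptotic statement in action, here is a hedged sketch (my own illustration, with arbitrarily chosen parameters): for an Exponential sample with rate $\lambda$, the MLE is $\hat{\lambda} = 1/\bar{x}$ and the Fisher information is $I(\lambda) = 1/\lambda^2$, so the theorem predicts $\sqrt{n}(\hat{\lambda}_n - \lambda)$ is approximately $N(0, \lambda^2)$ for large $n$.

```python
import random
import statistics

# Illustrative check of the theorem: for Exponential(rate=lam) data,
# the MLE is lam_hat = 1 / sample_mean, and I(lam) = 1 / lam^2, so
# sqrt(n) * (lam_hat - lam) should have mean ~0 and variance ~lam^2.
random.seed(1)
lam, n, reps = 3.0, 1000, 4000

scaled_errors = []
for _ in range(reps):
    sample = [random.expovariate(lam) for _ in range(n)]
    lam_hat = 1.0 / statistics.fmean(sample)   # MLE of the rate
    scaled_errors.append((n ** 0.5) * (lam_hat - lam))

print(statistics.fmean(scaled_errors))      # should be near 0
print(statistics.variance(scaled_errors))   # should be near lam^2 = 9
```

The empirical variance of the rescaled errors approaches $I(\lambda)^{-1} = \lambda^2$, which is exactly the Cramér–Rao limit: asymptotically, the MLE is as precise as any estimator can be.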
