
I know that the Rao-Blackwell theorem states that an unbiased estimator given a sufficient statistic will yield the best unbiased estimator. Is the only difference between Lehmann-Scheffé and Rao-Blackwell that in Lehmann-Scheffé, you need an unbiased estimator that is based on a complete sufficient statistic? I am also having a hard time conceptually understanding the definition of a complete statistic.

1 Answer


Rao–Blackwell says the conditional expected value of an unbiased estimator given a sufficient statistic is another unbiased estimator that's at least as good. (I seem to recall that you can drop the assumption of unbiasedness and all you lose is the conclusion of unbiasedness; you still improve the estimator. So you can apply it to MLEs and other possibly biased estimators.) In examples that are commonly exhibited, the Rao–Blackwell estimator is immensely better than the estimator that you start with. That's because you usually start with something really crude, because it's easy to find, and you know that the Rao–Blackwell estimator will be pretty good no matter how crude the thing you start with is.
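
To make the improvement concrete, here is a minimal simulation sketch, with an example of my own choosing rather than anything from the original discussion: the crude unbiased estimator $X_1$ of a Bernoulli success probability $p$ is conditioned on the sufficient statistic $T = \sum_i X_i$, which yields the sample mean.

```python
# Simulation sketch of Rao-Blackwellization (illustrative example,
# not taken from the answer): X_1, ..., X_n i.i.d. Bernoulli(p).
# Crude unbiased estimator of p: delta = X_1 (uses one observation).
# Sufficient statistic: T = sum(X_i); conditioning gives
# E[X_1 | T] = T / n, the sample mean.
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 20, 0.3, 100_000

samples = rng.binomial(1, p, size=(reps, n))
crude = samples[:, 0]               # unbiased but very noisy
conditioned = samples.mean(axis=1)  # E[X_1 | T] = T / n

print(f"crude:       mean {crude.mean():.4f}  var {crude.var():.5f}")
print(f"conditioned: mean {conditioned.mean():.4f}  var {conditioned.var():.5f}")
# Both means are near p = 0.3, but the conditioned estimator's
# variance is about p(1-p)/n instead of p(1-p): a factor-of-n drop.
```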

The Lehmann–Scheffé theorem has an additional hypothesis: the sufficient statistic is complete, i.e. the only function of it whose expected value is zero for every parameter value is the zero function (almost surely). It also has an additional conclusion: the estimator you get is the unique best unbiased estimator.
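
For reference, here is the usual formal statement of completeness (my phrasing of the standard definition, not part of the original answer): a statistic $T$ is complete for the family $\{P_\theta\}$ when

$$E_\theta\left[g(T)\right] = 0 \ \text{ for all } \theta \quad \Longrightarrow \quad P_\theta\big(g(T) = 0\big) = 1 \ \text{ for all } \theta.$$

For instance, if $X_1, \dots, X_n$ are i.i.d. Bernoulli($p$), then $T = \sum_i X_i$ is complete: $E_p[g(T)] = \sum_{t=0}^n g(t) \binom{n}{t} p^t (1-p)^{n-t}$ is a polynomial in $p$, and a polynomial that vanishes for every $p \in (0,1)$ is identically zero, which forces $g(t) = 0$ for every $t$.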

So if an unbiased estimator is a function of a complete sufficient statistic, then it's the unique best unbiased estimator. Lehmann–Scheffé gives you that conclusion, but Rao–Blackwell does not. So the statement in the question about what Rao–Blackwell says is incorrect.

It should also be remembered that in some cases it's far better to use a biased estimator than an unbiased estimator.
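
As a quick sketch of that last point (a standard textbook illustration, assuming normal data; it is not the example from the paper linked in the comments below): the variance estimator that divides by $n+1$ is biased, yet it beats the unbiased $n-1$ version in mean squared error.

```python
# MSE comparison for variance estimators SS/c on normal data,
# where SS is the sum of squared deviations from the sample mean.
# Dividing by n-1 is unbiased; dividing by n+1 minimizes MSE.
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 10, 4.0, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for d in (n - 1, n, n + 1):
    est = ss / d
    mse = ((est - sigma2) ** 2).mean()
    print(f"denominator {d:2d}: mean {est.mean():.3f}  MSE {mse:.3f}")
# The n-1 estimator is unbiased but has the largest MSE of the
# three; the biased n+1 estimator has the smallest.
```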

  • Concerning my last comment above, that it's sometimes far better to use biased than unbiased estimators: this seems not to be too widely known among non-statisticians. I wrote a paper about it, devoted largely to my own concrete example of that phenomenon: http://www.math.umn.edu/~hardy/An_Illuminating_Counterexample.pdf (2011-10-03)
  • Thanks for the comment and answer. It cleared up some conceptual issues I had. (2011-10-03)
  • @Michael: (+1) Thanks for that link to your note. It's an interesting example. I thought it curious that you opted to contrast the unbiased estimator of the variance with the MLE, when in the particular case you chose, the optimal choice would be to use a denominator of $n+1$. Also, what motivated the "light source" problem originally? (2011-10-05)
  • Actually I thought only that MLEs are sometimes biased, so I mentioned them as an example. In cases often cited as examples, the MLE is usually a sufficient statistic, so in that sense it's not a good example (but in some cases the MLE is not sufficient and one could apply Rao–Blackwell). I don't remember exactly what led me to the light-source problem, but I came up with it while taking a graduate course that covered some related matters. (2011-10-05)
  • The link above no longer works. Here's one that does: https://arxiv.org/abs/math/0206006 (2018-03-03)