
Suppose we have some data $\{data\}$ drawn from a distribution described by an unknown set of parameters $X_0$ that we want to estimate. The maximum likelihood procedure (which coincides with the posterior mode under a uniform prior) provides the following estimator:

$\widehat{\mathrm{MLE}} = \operatorname{argmax}_Y \; L(\{data\} \mid Y)$

which we can regard as a random variable, since the data are random (I indicate random variables with a hat). The likelihood itself can also be regarded as a random variable, parametrically dependent on $Y$ (a random density, if we like), $\hat{L}(\{data\}\mid Y)$. Assuming it is normalised so that it integrates to one in $Y$, it seems possible to define as well:

$\hat{\mu}=\int \ Y \hat{L}(\{data\}|Y) \ dY$

and

$\hat{\sigma^2}=\int \ (Y -\hat{\mu})^2 \ \hat{L}(\{data\}|Y) \ dY$
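For concreteness, all three quantities can be approximated on a grid. This is a minimal sketch of my own (not part of the question as asked), assuming Gaussian data with known unit variance and unknown mean, with $\hat{L}$ normalised over $Y$ so that $\hat{\mu}$ and $\hat{\sigma^2}$ are the mean and variance of the flat-prior posterior:

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = 2.0                                   # true parameter
data = rng.normal(loc=X0, scale=1.0, size=50)

# Unnormalised log-likelihood of a Gaussian mean Y with known unit variance.
grid = np.linspace(-2.0, 6.0, 8001)
logL = np.array([-0.5 * np.sum((data - Y) ** 2) for Y in grid])

# MLE: the argmax of the likelihood over the grid of candidate parameters.
mle = grid[np.argmax(logL)]

# Normalise exp(logL) so it integrates to one in Y (flat-prior posterior).
dY = grid[1] - grid[0]
L = np.exp(logL - logL.max())
L /= L.sum() * dY

mu_hat = np.sum(grid * L) * dY                    # \hat{mu}
var_hat = np.sum((grid - mu_hat) ** 2 * L) * dY   # \hat{sigma^2}
```

For a Gaussian mean the likelihood in $Y$ is symmetric, so its mode and mean coincide: here `mle` and `mu_hat` both land on the sample mean, and `var_hat` comes out near $1/N$. The interesting discrepancies arise for asymmetric likelihoods.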

How are these random variables related? Should we expect that, in general (perhaps not always),

  1. $E[\hat{MLE}] = X_0$ ?
  2. $E[\hat{\sigma^2}] = \sigma^2[\hat{MLE}]$ ?
  3. $E[\hat{\mu}] = X_0$ ?

I know that point (1) is not always true (the MLE of a Gaussian variance carries a factor of $1/N$ instead of $1/(N-1)$), but what about the others? And by how much does point (1) fail? I would guess something close to it must still hold...
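Regarding "by how much" point (1) fails in the variance example: a quick simulation of my own (assuming Gaussian samples with true variance $1$) shows the size of the $1/N$ bias directly:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 10, 200_000

# MLE of a Gaussian variance divides by N; its expectation is (N-1)/N * sigma^2.
samples = rng.normal(0.0, 1.0, size=(trials, N))   # true variance sigma^2 = 1
mean_mle_var = samples.var(axis=1, ddof=0).mean()  # ddof=0 is the 1/N estimator
```

The average settles near $(N-1)/N = 0.9$ rather than $1$, so the bias is of order $1/N$ and vanishes as $N$ grows.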

  • The answer is distribution dependent. You can see that $\hat{MLE}$ is the mode of $Y$, and $\hat{\mu}$ is the mean of $Y$. So, basically, the difference between them is the general difference between mode and mean. (2017-01-11)
  • Thanks a lot for noticing. I understand that $\hat{MLE}$ and $\hat{\mu}$ are exactly those random variables and are therefore different. Can they nevertheless have the same expectation? Any comment on point (1) by itself, or on point (2)? (2017-01-11)

0 Answers