Question about maximum likelihood estimator

Suppose we have $Y$, a random variable that takes the values $0$ and $1$ with probabilities $p$ and $1-p$, respectively. Then we let $A$ be a trivial estimator that is always equal to $\frac12$. What is the MLE $\hat{p}$ of $p$?
-
I do not understand *Then we let $A$ be a trivial estimator that is always equal to $\frac12$*. – 2012-03-08
1 Answer
Hints:
If you sample $Y$ $n$ times and get a $1$ $k$ times, then the likelihood is proportional to $$p^{n-k}(1-p)^{k}.$$ So to obtain the maximum likelihood estimator, you want to find the $p$ which maximises this in general.
The likelihood is a continuous function of $p$, non-negative for $0 \le p \le 1$, and differentiable, so one obvious approach is to set the derivative with respect to $p$ equal to zero. If $k$ and $n-k$ are each two or more, there are three solutions: $p=0$ and $p=1$, where the likelihood is zero and so not maximised, and the maximiser $$\hat{p}=\frac{n-k}{n} = 1-\overline{Y}.$$
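As a numerical sanity check (my own illustration, not part of the original answer), here is a small Python sketch under the convention above that $P(Y=0)=p$. The names (`log_likelihood`, `p_true`) and the choice $p = 0.3$, $n = 1000$ are assumptions for the example: it simulates $n$ draws and confirms that a grid search over the likelihood lands on the closed-form MLE $(n-k)/n$.

```python
import math
import random

def log_likelihood(p, n, k):
    """Log-likelihood of p given k ones in n draws, with P(Y=0) = p."""
    return (n - k) * math.log(p) + k * math.log(1 - p)

random.seed(0)                      # fixed seed so the run is reproducible
p_true = 0.3                        # assumed true value, for illustration only
n = 1000
sample = [0 if random.random() < p_true else 1 for _ in range(n)]
k = sum(sample)                     # number of ones observed

p_hat = (n - k) / n                 # closed-form MLE from the answer above

# Grid search over (0, 1); the maximiser should agree with the closed form.
grid = [i / 10000 for i in range(1, 10000)]
p_grid = max(grid, key=lambda p: log_likelihood(p, n, k))

print(p_hat, p_grid)
```

Maximising the log-likelihood rather than the likelihood itself avoids underflow for large $n$ and does not change the maximiser, since $\log$ is monotone.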
If you only observe $Y$ once, the likelihood becomes $p^{1-Y}(1-p)^Y$, so you get either $1-p$ if $Y=1$ or $p$ if $Y=0$. Both are linear in $p$, so each is maximised at an endpoint of $[0,1]$. The maximum likelihood estimate is $\hat{p}=0$ if $Y=1$ and $\hat{p}=1$ if $Y=0$, i.e. the estimator is $$\hat{p} = 1-Y.$$
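The one-observation case can be checked the same way. A minimal sketch (again my own illustration, with the helper name `likelihood_single` assumed): the likelihood is linear in $p$, so a grid search over $[0,1]$ should always pick an endpoint, namely $1-y$.

```python
def likelihood_single(p, y):
    """Likelihood of p from a single observation y, with P(Y=0) = p."""
    return p ** (1 - y) * (1 - p) ** y

# The likelihood is linear in p: increasing (equal to p) when y = 0,
# decreasing (equal to 1 - p) when y = 1, so it peaks at an endpoint.
for y in (0, 1):
    grid = [i / 100 for i in range(101)]   # coarse grid over [0, 1]
    p_hat = max(grid, key=lambda p: likelihood_single(p, y))
    print(y, p_hat)                        # matches 1 - y
```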
-
Something seems off to me: shouldn't the likelihood function be $p^{1-Y}(1-p)^{Y}$? – 2012-03-08
-
@Mathematics: I missed that $p$ was the probability of $0$, not $1$. I have edited the answer, and extended it to the case where you only observe $Y$ once. – 2012-03-08
-
I don't quite understand how you get the likelihood function. From my knowledge of finding a likelihood function, I first write down the pdf and then treat it as a function of the parameter. But in this case, if $Y$ is only observed once, I don't know how to get the likelihood function or the pdf. – 2012-03-08
-
@Mathematics: There is no pdf, as $Y$ is a discrete random variable. For a single observation of $Y$, it has probability $p$ of being $0$ and probability $1-p$ of being $1$. So you can take $p$ as the likelihood of $p$ given $Y=0$ (maximised when $p=1$) and $1-p$ as the likelihood of $p$ given $Y=1$ (maximised when $p=0$). – 2012-03-08
-
Yes, I get your point: $p$ is the likelihood when $Y=0$ and $1-p$ when $Y=1$. But why do they appear as a product if we only observe once? Moreover, I'm not sure whether I have this right: do $Y$ and $1-Y$ contain the same information about $p$? – 2012-03-08
-
@Mathematics: $p^{1-Y}(1-p)^{Y}$ is equivalent to saying $p$ is the likelihood when $Y=0$ and $1-p$ when $Y=1$, provided that $Y$ can only take those values. $Y$ and $1-Y$ contain the same amount of information about $p$ as each other, but in a different form. – 2012-03-08