Suppose $X_1,X_2,\ldots,X_n$ is a random sample from a distribution with probability mass function $$f(\theta, x) = \begin{cases} \theta &\text{if } x=-1 \\ (1-\theta)^2 \theta^x & \text{if } x=0,1,2,\ldots \end{cases}$$ If $r_n$ is the proportion of sample members equal to $-1$, how can I find the MLE of the parameter $\theta$?
Finding the MLE of the parameter $\theta$
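As a quick sanity check, the function above is a valid pmf for every $\theta\in(0,1)$, since the probabilities sum to
$$\theta+\sum_{x=0}^{\infty}(1-\theta)^2\theta^x=\theta+\frac{(1-\theta)^2}{1-\theta}=\theta+(1-\theta)=1,$$
so no normalization condition beyond $0<\theta<1$ is needed.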
-
the likelihood is $\prod_{X_i \ne -1}(1-\theta)^2\theta^{X_i}\prod_{X_i = -1}\theta$ and it's really not so bad – 2012-04-26
2 Answers
$$P(X_1,\ldots, X_n|\theta) = \theta^{\#[X = -1]} (1 - \theta)^{2n(1 - r_n)} \theta^{\sum_{i \geq 0} \#[X = i]i} = \theta^{2\#[X = -1]} (1 - \theta)^{2n(1 - r_n)} \theta^{\sum_{i \geq -1} \#[X = i]i}$$ Therefore, $$P(X_1,\ldots, X_n|\theta) = \theta^{2nr_n} (1 - \theta)^{2n(1 - r_n)} \theta^{n\bar{X}} = (1 - \theta)^{2n(1 - r_n)} \theta^{2nr_n + n\bar{X}}$$ and $$\log P(X_1,\ldots, X_n|\theta) = 2n(1 - r_n)\log(1 - \theta) + (2nr_n + n\bar{X})\log(\theta)$$
Now, maximize this w.r.t $\theta$ subject to the usual constraints of probabilities between 0 and 1 and PDF summing to 1.
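For a concrete check, here is a minimal Python sketch (the sample values below are hypothetical, chosen only for illustration) that maximizes this log-likelihood numerically and compares the result with the stationary point obtained by setting its derivative to zero:

```python
# Minimal numerical check of the log-likelihood above (hypothetical data).
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([-1, 0, 0, 1, -1, 2, 0, 1, 0, -1])  # hypothetical sample
n = len(x)
r_n = np.mean(x == -1)   # proportion of observations equal to -1
x_bar = x.mean()         # sample mean

def neg_log_lik(theta):
    # negative of the log-likelihood derived above
    return -(2 * n * (1 - r_n) * np.log(1 - theta)
             + (2 * n * r_n + n * x_bar) * np.log(theta))

res = minimize_scalar(neg_log_lik, bounds=(1e-9, 1 - 1e-9), method="bounded")
print(res.x)                             # numerical maximizer
print((2 * r_n + x_bar) / (2 + x_bar))   # zero of the derivative of the log-likelihood
```

Both printed values should agree (up to the optimizer's tolerance), which is one way to check the maximization before writing it out by hand.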
-
Sorry but this formula for the likelihood is wrong and maximizing it will **not** yield the MLE. – 2012-04-29
-
@Didier Feel free to point out the error. – 2012-04-29
-
Feel free to read my answer. – 2012-05-03
-
@Didier How is my answer different from yours? – 2012-05-03
-
Hint: check the powers of $(1-\theta)$. – 2012-05-03
-
@Didier Ack! Good catch, I think I fixed it now. – 2012-05-03
-
Right. Next, I wonder what is the *constraint of PDF summing to 1* in the present case since, for every theta, this constraint is met. – 2012-05-03
-
@Didier Yes, that constraint is unnecessary, I just wanted the OP to realize that for himself (I mean once somebody writes the constraint out, it is immediately noticeable). P.S. BTW, I am assuming when you said every theta, you meant every theta in (0,1) – 2012-05-03
Hint: Call $n$ the size of the sample, $s$ its sum, and $z$ the number of $-1$s in the sample. Then the likelihood $L(\theta)$ of the sample is a function of $(\theta,n,z,s)$. Solving the equation $L'(\hat\theta)=0$ yields the MLE $\hat\theta$ as a (rational) function of $(n,z,s)$, or, if one prefers, as a (rational) function of $(r,m)$ with $r=z/n$ the proportion of $-1$ and $m=s/n$ the mean of the sample.
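Carrying this hint through (a sketch, in the notation above): each observation equal to $-1$ contributes a factor $\theta$, each other observation $x$ contributes $(1-\theta)^2\theta^{x}$, and the observations different from $-1$ sum to $s+z$, so
$$L(\theta)=\theta^{z}\,(1-\theta)^{2(n-z)}\,\theta^{\,s+z}=\theta^{\,2z+s}(1-\theta)^{2(n-z)}.$$
Hence $\log L(\theta)=(2z+s)\log\theta+2(n-z)\log(1-\theta)$, and $L'(\hat\theta)=0$ gives
$$\hat\theta=\frac{2z+s}{2n+s}=\frac{2r+m}{2+m},$$
which agrees with the likelihood written in the other answer.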