
For iid random variables from a distribution with p.d.f.

$f(x;\theta_1,\theta_2)=\frac{1}{\theta_2}\exp\bigg(-\frac{(x-\theta_1)}{\theta_2}\bigg), \quad x>\theta_1, \quad(\theta_1,\theta_2)\in\mathbb{R}\times\mathbb{R}^{+}$ how can we find maximum likelihood estimators for $\theta_1$ and $\theta_2$?

I don't think finding the log-likelihood and performing partial differentiation will help for determining the MLE of $\theta_1$, because of the $x>\theta_1$ condition.

Any help would be greatly appreciated. Regards, MM.

  • Yes, you are thinking along the right lines. Michael Hardy has written this up in detail in the answer that you have accepted. (2012-01-26)

1 Answer


The condition $x>\theta_1$ becomes $x_k>\theta_1\text{ for all }k$, which as a condition on $\theta_1$ becomes $\theta_1<\min \{x_k : k=1,\ldots,n\}$. On that region the log-likelihood is $ \ell(\theta_1,\theta_2) = \log L(\theta_1,\theta_2) = -n\log \theta_2 - \sum_{k=1}^n \frac{x_k-\theta_1}{\theta_2}\text{ for }\theta_1 < \min\{x_k : k=1,\ldots,n\}. $

The log-likelihood $\ell(\theta_1,\theta_2)$ increases as $\theta_1$ increases from $-\infty$ to $\min \{x_k : k=1,\ldots,n\}$. The maximum value of $\ell$ therefore occurs at $\theta_1 = \min \{x_k : k=1,\ldots,n\}$, regardless of the value of $\theta_2$.

Because this maximizing value of $\theta_1$ doesn't depend on $\theta_2$, one can find the pair $(\theta_1,\theta_2)$ that maximizes $\ell(\theta_1,\theta_2)$ by plugging $\min \{x_k : k=1,\ldots,n\}$ in place of $\theta_1$, obtaining $\ell(\min \{x_k : k=1,\ldots,n\},\theta_2)$, and then finding the value of $\theta_2$ that maximizes that.
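That last maximization over $\theta_2$ can be carried out explicitly; writing $m = \min\{x_k : k=1,\ldots,n\}$ (this short calculus step is not spelled out in the answer, but follows directly from the log-likelihood above):

$$ \ell(m,\theta_2) = -n\log\theta_2 - \frac{1}{\theta_2}\sum_{k=1}^n (x_k - m), \qquad \frac{\partial \ell}{\partial \theta_2} = -\frac{n}{\theta_2} + \frac{1}{\theta_2^2}\sum_{k=1}^n (x_k - m) = 0 \;\Longrightarrow\; \hat\theta_2 = \frac{1}{n}\sum_{k=1}^n (x_k - m) = \bar{x} - m. $$

The second derivative at this point is negative, so this critical point is indeed the maximum.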

So the MLE for $\theta_1$ is $\hat\theta_1 = \min \{x_k : k=1,\ldots,n\}$.
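As a quick numerical sanity check of these closed forms (a sketch using only Python's standard library; the variable names and the chosen true parameter values are mine):

```python
import random

random.seed(0)
theta1, theta2, n = 2.0, 3.0, 100_000

# Draws from the shifted exponential: x = theta1 + Exp(mean theta2).
# random.expovariate takes the *rate*, i.e. 1/theta2.
xs = [theta1 + random.expovariate(1.0 / theta2) for _ in range(n)]

# Closed-form MLEs: sample minimum, and sample mean minus that minimum.
theta1_hat = min(xs)
theta2_hat = sum(xs) / n - theta1_hat

print(theta1_hat, theta2_hat)
```

For large $n$ both estimates land close to the true values; note $\hat\theta_1 \ge \theta_1$ always, since every observation exceeds $\theta_1$.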

  • Thanks! I'd figured out it was the min, but you've explained it in great detail here. (2012-01-26)