
This is my first post, so I apologize if the formatting is a little rocky.

I'm currently working through "Probability and Statistics", 4th ed., by DeGroot and Schervish, and I was wondering if somebody could help me out with two related problems (7.5.10 and 7.6.1).

The first question is as follows: Suppose that $X_1, \dots, X_n$ form a random sample from a distribution for which the p.d.f. $f(x|\theta)$ is as follows: $$f(x|\theta) = \frac{1}{2} e^{-|x-\theta|} \quad \text{for } -\infty < x < \infty.$$ Also, suppose that the value of $\theta$ is unknown, with $-\infty < \theta < \infty$. We will find the M.L.E. of $\theta$.

The likelihood function is given by $f_n(\mathbf{x}|\theta) = \frac{1}{2^n} e^{-\sum_{i=1}^n |x_i - \theta|}$ and is maximized when $\sum_{i=1}^n |x_i - \theta|$ is minimized. Choosing $\theta$ to be a median of $x_1, \dots, x_n$ accomplishes this minimization.

More specifically, note that the likelihood function has logarithm \begin{align*} \log f_n(\mathbf{x}|\theta) &= -n \log 2 - \sum_{i=1}^n |x_i - \theta| \\ &= n \left( -\log 2 - \frac{1}{n} \sum_{i=1}^n |x_i - \theta| \right). \end{align*} The M.L.E. therefore minimizes the sum in the log-likelihood shown above. We can also see that $\frac{1}{n} \sum_{i=1}^n |x_i - \theta| = E(|X-\theta|)$ for $X$ having a discrete distribution that assigns probability $\frac{1}{n}$ to each of $x_1, \dots, x_n$ and 0 everywhere else. So, by choosing $\hat{\theta}$ to be a median of $x_1, \dots, x_n$, this sum is minimized and hence the log-likelihood is maximized. This follows from the fact that a median of a distribution minimizes the mean absolute error.
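
(Not part of the book's argument, but a quick way to convince yourself numerically: the following Python sketch, assuming a simulated Laplace sample, minimizes $\sum_{i=1}^n |x_i - \theta|$ over a grid and checks the minimizer against the sample median.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.laplace(loc=2.0, scale=1.0, size=101)  # simulated sample, odd n

# Negative log-likelihood, up to the constant n*log(2)
def neg_log_lik(theta):
    return np.sum(np.abs(x - theta))

grid = np.linspace(x.min(), x.max(), 10001)
theta_hat = grid[np.argmin([neg_log_lik(t) for t in grid])]

print("grid minimizer:", theta_hat)
print("sample median :", np.median(x))  # agrees up to grid resolution
```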

The second question is to find the M.L.E. of $e^{-\frac{1}{\theta}}$, which, by the invariance property of the M.L.E., should just be $e^{-\frac{1}{\hat{\theta}}}$.

My problem is that the answer in the back of the book (for the second question) is given as $\left ( \prod_{i=1}^n x_i \right)^{\frac{1}{n}}$, and I'm having trouble reconciling that with my answer. You'd think it'd be pretty straightforward, but...
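
For what it's worth, a quick numerical check (just a sketch, with an arbitrary positive sample so that both expressions are defined) suggests the two answers do not agree in general:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 1.0, size=25)  # arbitrary positive sample

theta_hat = np.median(x)                    # M.L.E. of theta from the first problem
my_answer = np.exp(-1.0 / theta_hat)        # invariance: M.L.E. of exp(-1/theta)
book_answer = np.prod(x) ** (1.0 / len(x))  # geometric mean, per the back of the book

print(my_answer, book_answer)  # these generally differ
```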

Any help would be appreciated!


1 Answer


Here is a quick pointer on the optimization step. Your intuition for the problem seems good; it is just a matter of fleshing out the computations.

At this point, you want to maximize the log-likelihood of the data given $\theta$, \begin{align*} \log f_n(\mathbf{x}|\theta) &= -n \log 2 - \sum_{i=1}^n |x_i - \theta| \\ &= n \left( -\log 2 - \frac{1}{n} \sum_{i=1}^n |x_i - \theta| \right), \end{align*} which is equivalent to minimizing the term $\frac{1}{n} \sum_{i=1}^n |x_i - \theta|$.

One can write this as an optimization problem:

$\arg\max_{\theta}{\{-n\log 2 - \sum_{i=1}^n |x_i - \theta|\}}$

Then introduce an auxiliary variable $\epsilon_i \geq |x_i - \theta| \quad \forall i$, so we can rewrite the optimization problem as $\arg\max_{\epsilon,\theta}{\{-n\log 2 - \sum_{i=1}^n \epsilon_i\}}$ subject to $-\epsilon_i \leq x_i - \theta \leq \epsilon_i \quad \forall i$ (at the optimum each $\epsilon_i$ is pushed down to equal $|x_i - \theta|$, so nothing is lost). The constraints can be rewritten as:

\begin{align*} (x_i - \theta) + \epsilon_i &\geq 0 \\ -(x_i - \theta) + \epsilon_i &\geq 0 \end{align*}

Then, go on to set up the Lagrangian and compute the answer.
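
If it helps to sanity-check the formulation before grinding through the Lagrangian, here is a sketch that hands exactly this linear program to scipy.optimize.linprog and confirms the optimizer is a sample median (the variable stacking $[\theta, \epsilon_1, \dots, \epsilon_n]$ is my own choice, and the solver stands in for a hand-derived dual):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = rng.laplace(loc=3.0, scale=1.0, size=51)
n = len(x)

# Variables z = [theta, eps_1, ..., eps_n]; minimize sum(eps).
c = np.concatenate(([0.0], np.ones(n)))

# x_i - theta <= eps_i   ->  -theta - eps_i <= -x_i
# theta - x_i <= eps_i   ->   theta - eps_i <=  x_i
A_ub = np.zeros((2 * n, n + 1))
A_ub[:n, 0] = -1.0
A_ub[n:, 0] = 1.0
A_ub[:n, 1:] = -np.eye(n)
A_ub[n:, 1:] = -np.eye(n)
b_ub = np.concatenate((-x, x))

bounds = [(None, None)] + [(0, None)] * n  # theta free, eps_i >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

print("LP solution   :", res.x[0])
print("sample median :", np.median(x))
```

Note that when $n$ is even, any point between the two middle order statistics is optimal, so the LP solver and np.median can legitimately disagree there; with odd $n$, as above, the median is unique.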