
Let $X_1, X_2, \ldots , X_n$ be a random sample from a distribution with the following pdf

$$f(x\mid\theta) = \begin{cases} 1/(\theta_2-\theta_1), &\quad\text{for}\quad \theta_1 \leq x\leq \theta_2\\ 0, &\quad\text{otherwise} \end{cases}$$ Suppose that $\theta_1$ and $\theta_2$ are unknown.

How would I go about writing the likelihood function for this sample as a function of $\theta_1$ and $\theta_2$?

2 Answers


The easiest way might be to begin by writing the density as it should be written, that is, as $$ f(x\mid\theta)=\frac{\mathbf 1_{\theta_1\leqslant x\leqslant\theta_2}}{\theta_2-\theta_1}, $$ where $\theta=(\theta_1,\theta_2)$ with $\theta_1\lt\theta_2$. Then the likelihood of an i.i.d. sample $\mathbf x=(x_1,\ldots,x_n)$ is $$ f(\mathbf x\mid\theta)=\prod_{k=1}^nf(x_k\mid\theta)=\frac{\mathbf 1_{\theta_1\leqslant m_n(\mathbf x),s_n(\mathbf x)\leqslant\theta_2}}{(\theta_2-\theta_1)^n}, $$ where $$ m_n(\mathbf x)=\min\{x_k\mid 1\leqslant k\leqslant n\}, \qquad s_n(\mathbf x)=\max\{x_k\mid 1\leqslant k\leqslant n\}. $$ For every fixed $\mathbf x$, $f(\mathbf x\mid\theta)$ is maximal when $\theta_2-\theta_1$ is as small as possible while the indicator still equals $1$, that is, while $\theta_1\leqslant m_n(\mathbf x)$ and $s_n(\mathbf x)\leqslant\theta_2$; hence the MLE for $\theta=(\theta_1,\theta_2)$ based on $\mathbf x$ is $$ \widehat\theta(\mathbf x)=(m_n(\mathbf x),s_n(\mathbf x)). $$
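As a quick numerical sanity check of this derivation (a sketch only, with hypothetical parameter values $\theta_1=2$, $\theta_2=5$), the MLE is just the sample minimum and maximum:

```python
import random

def uniform_mle(sample):
    """MLE for (theta_1, theta_2) of a Uniform(theta_1, theta_2) sample:
    the sample minimum and maximum, as derived above."""
    return min(sample), max(sample)

# Hypothetical true parameters, chosen only for illustration.
theta1, theta2 = 2.0, 5.0
random.seed(0)
sample = [random.uniform(theta1, theta2) for _ in range(1000)]

t1_hat, t2_hat = uniform_mle(sample)
# The estimates lie inside [theta1, theta2]: the MLE interval is the
# tightest interval containing every observation, so it always shrinks
# toward the data.
print(t1_hat, t2_hat)
```

Note that both estimates are biased inward (the minimum overestimates $\theta_1$, the maximum underestimates $\theta_2$), although the bias vanishes as $n$ grows.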

  • Would the max be the MLE for $\theta_2$? (2012-11-10)
  • See edit. (2012-11-10)
  • I think I got you. Just to confirm, in my notation, would it then be valid to write that the MLE for $\theta_1$ is $\widehat\theta_1 = \max\{x_k\mid 1\leqslant k\leqslant n\}$? (2012-11-10)
  • In general, the MLE is an estimator of the full parameter of the distribution, here $\theta$. The notion of an MLE for a sub-parameter such as $\theta_1$ is not well defined. What one can say, though, is that the $\theta_1$ part of the MLE is the minimum (and certainly not the maximum) of the sample. An additional subtlety here is that for every fixed $\theta_2$, the same value of $\theta_1$ yields the maximum (restricted) likelihood. (2012-11-10)
  • I'm sorry for prolonging this. I copied and pasted the wrong bit of what you typed, as I was checking how you type it the way you do; I meant min instead of max. So the MLE for $\theta_1$ is $\widehat\theta_1 = \min\{x_k\mid 1\leqslant k\leqslant n\}$. On the next point, I don't think $\theta_2 = \theta_1$ ... (2012-11-10)
  • ... On the next point, I don't think $\theta_2 = \theta_1$ is valid, but I've probably misunderstood what you wrote, so if you don't mind, just explain that bit again. (2012-11-10)
  • ?? How one can extract from what I wrote anything resembling the assertion that $\theta_2=\theta_1$, in any sense, is beyond me. (2012-11-10)
  • I'm confused about the following: "for every fixed $\theta_2$, the same value of $\theta_1$ yields the maximum (restricted) likelihood." (2012-11-10)

$$ L(\theta_1,\theta_2) = \begin{cases} \frac{1}{(\theta_2-\theta_1)^n} & \text{for }\theta_2\ge\max\text{ and }\theta_1 \le\min \\[10pt] 0 & \text{for other values of }(\theta_1,\theta_2) \end{cases} $$ where $\max=\max\{X_1,\ldots,X_n\}$ and $\min=\min\{X_1,\ldots,X_n\}$.

Draw the picture in the $(\theta_1,\theta_2)$ plane.
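The picture can also be checked numerically. The sketch below (with hypothetical data) implements $L(\theta_1,\theta_2)$ directly and confirms that widening the interval past $[\min,\max]$ shrinks the likelihood, while any interval that misses an observation gives likelihood zero:

```python
def likelihood(theta1, theta2, sample):
    """L(theta1, theta2) = (theta2 - theta1)^(-n) when the interval
    [theta1, theta2] covers the whole sample, and 0 otherwise."""
    if theta1 >= theta2:
        return 0.0
    if theta1 <= min(sample) and theta2 >= max(sample):
        return (theta2 - theta1) ** (-len(sample))
    return 0.0

sample = [2.3, 3.1, 4.8, 2.9]  # hypothetical observations
lo, hi = min(sample), max(sample)

# L is maximized exactly at (lo, hi): widening the interval strictly
# decreases (theta2 - theta1)^(-n) ...
print(likelihood(lo, hi, sample) > likelihood(lo - 0.5, hi + 0.5, sample))  # True
# ... and shrinking it past an observation makes the indicator, and
# hence L, vanish.
print(likelihood(lo + 0.1, hi, sample))  # 0.0
```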

  • Would the graph be decreasing? Actually, what is the variable? (2012-11-10)
  • Would the max be the MLE for $\theta_2$? (2012-11-10)
  • "What is the variable?" There are two: $\theta_1$ and $\theta_2$. $\max$ is the MLE for $\theta_2$ since $L$ is bigger when $\theta_2=\max$ than when $\theta_2>\max$. (2012-11-10)
  • Great, I thought so. But then wouldn't $\theta_2$ be the smallest possible value, which is $=\theta_1$? (2012-11-10)
  • If $\min<\max$, then the MLEs for $\theta_1$ and $\theta_2$ are different. (2012-11-10)