
Suppose that $Y_1, Y_2,\ldots, Y_n$ are independent $N(\alpha,\sigma^2)$ random variables. Show that, if $\sigma$ is unknown, the likelihood ratio statistic for testing a value of $\alpha$ is given by $$D = n \log\left(1 + \frac{1}{n-1}T^2\right)\;,$$ where $$T = \frac{\hat{\alpha} -\alpha}{\sqrt{s^2/n}}\;,$$ with $\hat{\alpha} = \overline{Y}$ the maximum likelihood estimate of $\alpha$ and $s^2 = \frac{1}{n-1}\sum_{i=1}^n (Y_i - \overline{Y})^2$ the sample variance.

  • You haven't told us enough. Presumably the two parameters $\sigma$ and $\alpha$ should index a family of probability distributions. What family of distributions are you applying this to? (2012-03-11)
  • Please check to see that this is what you intended: there were some mismatched parentheses in the original, and the intent wasn't entirely clear. (2012-03-11)
  • I think the recent edits make it clear. You're asking why the standard Student's t-test is a likelihood-ratio test. The way it's expressed here, with the test statistic depending only on $T^2$, you'd have to have a simple one-point null hypothesis and a two-sided alternative hypothesis. If I were presenting this to a class of students who know how to do only what they've been told how to do, I wouldn't give it as an exercise, but I might do it in class. For the kind who can be given the relevant definitions and then figure things out, it's not a bad exercise. (2012-03-11)
  • ...and I've done it in front of classes a couple of times. (2012-03-11)
  • I up-voted this question after the necessary clarifications were done. But still at this time the vote total is $-1$. Is there something objectionable about the question? (2012-03-11)
  • Glad you got it, max. Could you "accept" my answer? (I.e., click on the green check-mark.) (2012-03-13)

1 Answer


OK, a moderately thin sketch. It's useful to know this algebraic identity: $$ \sum_{i=1}^n (x_i - \alpha)^2 = n(\overline{x} - \alpha)^2 + \sum_{i=1}^n (x_i-\overline{x})^2, \tag{1} $$ where $\overline{x}= (x_1+\cdots+x_n)/n$. It means that if you seek the value of $\alpha$ that minimizes this sum, you only need to look at the first term on the right. That value of $\alpha$ also maximizes the likelihood function $L=L(\alpha,\sigma)$, since $L$ depends on $\alpha$ only through the sum (1), and $L$ is a decreasing function of that sum.

Since the value $\hat\alpha$ of $\alpha$ that maximizes $L$ does not depend on $\sigma$, you can then plug that value into $L$, getting $L(\hat\alpha,\sigma)$, and find the value of $\sigma$ that maximizes that. You will see that this makes one of the two terms in (1) vanish. You'll end up with $$ L = \frac {1}{\sigma^n} \exp\left( \frac{-(\text{something})}{2\sigma^2} \right), $$ up to a constant factor that doesn't involve $\sigma$. To find the value $\hat\sigma$ of $\sigma$ that maximizes that, just note that since $\ln$ is an increasing function, it's the same as the value that maximizes $$ \ell = \ln L = -n\ln\sigma - \frac{\text{something}}{2\sigma^2}. $$

Once you've got $\hat\alpha$ and $\hat\sigma$, you need the likelihood ratio, either $$ \Lambda = \frac{L(\hat\alpha,\hat\sigma)}{L(\alpha_0,\hat\sigma)} \qquad\text{or}\qquad \Lambda = \frac{L(\alpha_0,\hat\sigma)}{L(\hat\alpha,\hat\sigma)} $$ (depending on which way you want to do it), where $\alpha_0$ is the value of $\alpha$ given by the null hypothesis. The statistic $D$ in the question is then $2\ln\Lambda$, with the first of these two conventions.
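In case the calculus step is wanted explicitly (keeping the "something" placeholder for the sum that's left to you): differentiating $\ell$ with respect to $\sigma$ and setting the derivative to zero gives $$ \frac{\partial \ell}{\partial \sigma} = -\frac{n}{\sigma} + \frac{\text{something}}{\sigma^3} = 0 \quad\Longrightarrow\quad \hat\sigma^2 = \frac{\text{something}}{n}. $$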

What you get should be a monotone function of $T^2$, so you reject the null hypothesis if $T^2$ is too big.
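One way to see that monotonicity explicitly, assuming the profiled values $\hat\sigma_1^2 = \frac1n\sum_{i=1}^n (x_i-\overline{x})^2$ under the alternative and $\hat\sigma_0^2 = \frac1n\sum_{i=1}^n (x_i-\alpha_0)^2$ under the null (see the comment below): at each profiled maximum the likelihood reduces to $(2\pi\hat\sigma^2)^{-n/2}e^{-n/2}$, so $$ \frac{L(\hat\alpha,\hat\sigma_1)}{L(\alpha_0,\hat\sigma_0)} = \left(\frac{\hat\sigma_0^2}{\hat\sigma_1^2}\right)^{n/2}, \qquad \frac{\hat\sigma_0^2}{\hat\sigma_1^2} \overset{(1)}{=} 1 + \frac{n(\overline{x}-\alpha_0)^2}{\sum_{i=1}^n (x_i-\overline{x})^2} = 1 + \frac{T^2}{n-1}, $$ and taking twice the logarithm of the ratio recovers $D = n\log\left(1 + \frac{1}{n-1}T^2\right)$.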

I'll leave it to you to work through the remaining details and to bring up further questions about those if necessary.
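If you want a numerical sanity check of the target identity before grinding through the algebra, here is a minimal NumPy sketch; the seed, sample size, true parameters, and null value $\alpha_0 = 0$ are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha0 = 12, 0.0                      # alpha0 = null value (arbitrary)
    y = rng.normal(loc=0.5, scale=2.0, size=n)

    ybar = y.mean()
    s2 = y.var(ddof=1)                       # sample variance s^2
    T = (ybar - alpha0) / np.sqrt(s2 / n)

    # Log-likelihood maximized over sigma with alpha held fixed:
    # sigma-hat^2 = mean((y - alpha)^2), giving -m/2 * log(2*pi*sig2) - m/2.
    def profile_loglik(y, alpha):
        sig2 = np.mean((y - alpha) ** 2)
        m = len(y)
        return -0.5 * m * np.log(2 * np.pi * sig2) - 0.5 * m

    D_direct = 2 * (profile_loglik(y, ybar) - profile_loglik(y, alpha0))
    D_formula = n * np.log(1 + T**2 / (n - 1))
    print(D_direct, D_formula)               # the two numbers agree

The two printed values coincide to floating-point precision, which is exactly the claim that $2\ln\Lambda = n\log\left(1 + \frac{1}{n-1}T^2\right)$.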

  • Maybe I should add that in the expression $L(\alpha_0,\hat\sigma)$, the $\hat\sigma$ is not the same as the $\hat\sigma$ that appears in $L(\hat\alpha,\hat\sigma)$, since it's the value that maximizes $L$ with $\alpha$ fixed at $\alpha_0$. In that case you get $\hat\sigma^2 = (1/n)\sum_{i=1}^n (x_i-\alpha_0)^2$, whereas in the other one you get $\hat\sigma^2 = (1/n)\sum_{i=1}^n (x_i-\overline{x})^2$. (2012-03-15)