Suppose that $Y_1, Y_2,\ldots, Y_n$ are independent $N(\alpha,\sigma^2)$ random variables. Show that, if $\sigma$ is unknown, the likelihood ratio statistic for testing a value of $\alpha$ is given by $D = n \log\left(1 + \frac{1}{n-1}T^2\right),$ where $T = \frac{\hat{\alpha} -\alpha}{\sqrt{s^2/n}}.$
Statistics (Likelihood ratio stat.)
-
Glad you got it, max. Could you "accept" my answer? (I.e. click on the green check-mark.) – 2012-03-13
1 Answer
OK, a moderately thin sketch. It's useful to know this algebraic identity: $ \sum_{i=1}^n (x_i - \alpha)^2 = n(\overline{x} - \alpha)^2 + \sum_{i=1}^n (x_i-\overline{x})^2, \tag{1} $ where $\overline{x}= (x_1+\cdots+x_n)/n$.

That means if you seek the value of $\alpha$ that minimizes the left-hand side, you only need to look at the first term on the right above. That value of $\alpha$ maximizes the likelihood function $L=L(\alpha,\sigma)$, since $L$ depends on $\alpha$ only through the sum (1), and $L$ is a decreasing function of that sum.

Since the value $\hat\alpha$ of $\alpha$ that maximizes $L$ does not depend on $\sigma$, you can then just plug that value into $L$, getting $L(\hat\alpha,\sigma)$, and find the value of $\sigma$ that maximizes that. You will see that that makes one of the two terms in (1) vanish. You'll end up with $ L = \frac {1}{\sigma^n} \exp\left( \frac{-(\text{something})}{2\sigma^2} \right). $ To find the value $\hat\sigma$ of $\sigma$ that maximizes that, just note that since $\ln$ is an increasing function, it's the same as the value that maximizes $ \ell = \ln L = -n\ln\sigma - \frac{\text{something}}{2\sigma^2}. $

Once you've got $\hat\alpha$ and $\hat\sigma$, you need the likelihood ratio, either $ \frac{L(\hat\alpha,\hat\sigma)}{L(\alpha_0,\hat\sigma)} $ or $ \frac{L(\alpha_0,\hat\sigma)}{L(\hat\alpha,\hat\sigma)} $ (depending on which way you want to do it), where $\alpha_0$ is the value of $\alpha$ given by the null hypothesis. The statistic $D$ in the problem is then $2\ln$ of the first ratio (equivalently, $-2\ln$ of the second).
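(Not part of the original answer, just a numerical illustration: the identity (1) and the resulting MLEs can be checked on simulated data. The sample size, seed, and parameter values below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=20)  # illustrative normal sample
n = len(x)
xbar = x.mean()

# Identity (1): sum (x_i - a)^2 = n*(xbar - a)^2 + sum (x_i - xbar)^2
a = 0.7  # an arbitrary value of alpha
lhs = np.sum((x - a) ** 2)
rhs = n * (xbar - a) ** 2 + np.sum((x - xbar) ** 2)
assert np.isclose(lhs, rhs)

# MLE of alpha: xbar makes the first term on the right vanish,
# so it minimizes the sum and hence maximizes L.
alpha_hat = xbar
# Plugging alpha_hat back in, the "something" is sum (x_i - xbar)^2,
# and maximizing ell = -n*ln(sigma) - something/(2*sigma^2) gives:
sigma2_hat = np.sum((x - xbar) ** 2) / n
print(alpha_hat, sigma2_hat)
```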
What you get should be a monotone function of $T^2$, so you reject the null hypothesis if $T^2$ is too big.
I'll leave it to you to work through details and bring up further questions about those if necessary.
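(Again not from the original answer: a sketch of the full computation, confirming numerically that twice the log of the likelihood ratio equals $n\log\left(1 + \frac{1}{n-1}T^2\right)$. The data, seed, and null value $\alpha_0$ are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 15
alpha0 = 1.0                              # null-hypothesis value of alpha
x = rng.normal(loc=1.4, scale=2.0, size=n)

xbar = x.mean()
s2 = np.sum((x - xbar) ** 2) / (n - 1)    # usual sample variance s^2
T = (xbar - alpha0) / np.sqrt(s2 / n)

def log_lik(a, sig2):
    """Gaussian log-likelihood at mean a, variance sig2."""
    return -0.5 * n * np.log(2 * np.pi * sig2) - np.sum((x - a) ** 2) / (2 * sig2)

# Unrestricted MLEs, and the MLE of sigma^2 with alpha fixed at alpha0
# (these are the two different sigma-hats mentioned in the comment below):
sig2_hat  = np.sum((x - xbar) ** 2) / n
sig2_hat0 = np.sum((x - alpha0) ** 2) / n

# D = 2 ln[ L(alpha_hat, sigma_hat) / L(alpha0, sigma_hat0) ]
D = 2 * (log_lik(xbar, sig2_hat) - log_lik(alpha0, sig2_hat0))
assert np.isclose(D, n * np.log(1 + T ** 2 / (n - 1)))
```

Note that $D$ is an increasing function of $T^2$, which is exactly the monotonicity claim above: rejecting for large $D$ is the same as rejecting for large $T^2$.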
-
Maybe I should add that in the expression $L(\alpha_0,\hat\sigma)$, the $\hat\sigma$ is not the same as the $\hat\sigma$ that appears in $L(\hat\alpha,\hat\sigma)$, since it's the value that maximizes $L$ with $\alpha$ fixed at $\alpha_0$. You get $(1/n)\sum_{i=1}^n (x_i-\alpha_0)^2$, whereas in the other one you get $(1/n)\sum_{i=1}^n (x_i-\overline{x})^2$. – 2012-03-15