
A random sample of size n is obtained from a population with a two-parameter probability density function given by:

$$f(x;\alpha, c) = \frac{\alpha c^\alpha}{x^{\alpha + 1}} \text{ for } x > c > 0 \text{ and } \alpha > 0$$

Prove that a likelihood ratio test for $H_0: \alpha = \alpha_0$ against $H_1: \alpha \not = \alpha_0$ can be based on the test statistic:

$$W = \frac{1}{n} \ln\left(\prod_{i=1}^n \frac{x_i}{X_{(1)}}\right) = \frac{1}{n} \sum_{i=1}^n \ln\left(\frac{x_i}{X_{(1)}}\right) = \frac{1}{n} \sum_{i=1}^n \ln(X_i) - X_{(1)}$$

Here $X_{(1)}$ is the first sample value when the data are arranged in ascending order (i.e. the sample minimum).

$$\lambda = \frac{L(\hat \omega)}{L(\hat \Omega)}$$

Background: I am used to cases with known distributions such as the Normal, where in the numerator you use the value given by $H_0$, and in the denominator you substitute the MLE: if we were testing $\mu$, for example, we could use $\bar x$, or for $\sigma^2$ we could use the sample variance. So what do we do with $c$ in this case? I am not really sure what to put for $L(\hat \Omega)$ in the denominator, or how to obtain the test statistic given above.
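As a numerical sanity check (a sketch only, assuming NumPy is available; the variable names are mine, and the sample is simulated via the inverse CDF $X = c\,U^{-1/\alpha}$), the three forms of $W$ agree once the final term carries the $\ln$ onto $X_{(1)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, c = 40, 2.0, 1.5

# Pareto(alpha, c) via inverse CDF: F(x) = 1 - (c/x)^alpha  =>  X = c * U^(-1/alpha)
x = c * rng.uniform(size=n) ** (-1.0 / alpha)

x1 = x.min()  # X_(1), the sample minimum

W_prod = np.log(np.prod(x / x1)) / n          # product form
W_sum = np.mean(np.log(x / x1))               # sum-of-logs form
W_diff = np.mean(np.log(x)) - np.log(x1)      # difference form, with ln on X_(1)

assert np.allclose([W_prod, W_sum], W_diff)
```

Note the last line uses $\ln X_{(1)}$, not $X_{(1)}$; without the $\ln$ the three expressions do not agree.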

  • 0
    In the expression for $W$, the final term should probably be $\ln(X_{(1)})$ rather than $X_{(1)}$. Here $X_{(1)}$ is the maximum likelihood estimate of $c$. – 2017-01-07
  • 0
    Yeah, I also thought that, because $\frac{1}{n} \sum_{i=1}^n \ln X_{(1)} = \frac{1}{n} \times n \ln X_{(1)} = \ln X_{(1)}$, right? But this comes from a past exam paper, so maybe it was a mistake that was corrected verbally during the exam? – 2017-01-07
  • 0
    Ah yes, so we find the maximum likelihood estimator for $c$ because we need to maximize the likelihood over it, right? – 2017-01-07
  • 1
    To start, look at the Wikipedia article on the 'Pareto distribution', Sec. 7. – 2017-01-07
  • 0
    @BruceET Ah okay, I see, thanks. So I found that the estimator of $\alpha$ is $\hat \alpha _{MLE} = \frac{1}{\frac{1}{n}\sum_{i=1}^n (\ln X_i - \ln X_{(1)})}$. But I must still substitute this in to show that the likelihood ratio test can be based on $W$, right? – 2017-01-08
  • 0
    $L(\hat \omega) = \prod _{i=1} ^n \frac{\alpha_0 (X_{(1)})^{\alpha_0}}{X_i^{\alpha_0 + 1}}$ and $L(\hat \Omega) = \prod _{i=1} ^n \frac{\hat \alpha_{MLE} (X_{(1)})^{\hat \alpha_{MLE}}}{X_i^{\hat \alpha_{MLE} + 1}}$, where $\lambda = \frac{L(\hat \omega)}{L(\hat \Omega)}$. – 2017-01-08
  • 0
    @BruceET This is what I was thinking: I would substitute $\hat \alpha_{MLE}$ with the estimate I found, but leave $\alpha_0$ as if it were a constant. Is this the right idea? – 2017-01-08
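The ratio $\lambda = L(\hat \omega)/L(\hat \Omega)$ above can also be checked numerically. A minimal sketch (assuming NumPy; the helper `log_lik` and all names are mine, with $\hat c = X_{(1)}$ plugged in under both hypotheses) verifies that $\ln \lambda$ depends on the data only through $W = 1/\hat \alpha_{MLE}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha_true, c_true, alpha0 = 50, 2.0, 1.5, 2.5

# Simulate a Pareto(alpha, c) sample via the inverse CDF: X = c * U^(-1/alpha)
x = c_true * rng.uniform(size=n) ** (-1.0 / alpha_true)

x1 = x.min()                     # MLE of c under both H0 and H1
W = np.mean(np.log(x / x1))      # the statistic, with the ln carried onto X_(1)
alpha_hat = 1.0 / W              # unrestricted MLE of alpha

def log_lik(a):
    # Log-likelihood with c replaced by its MLE X_(1)
    return n * np.log(a) + n * a * np.log(x1) - (a + 1) * np.log(x).sum()

log_lambda = log_lik(alpha0) - log_lik(alpha_hat)

# Algebraically, ln(lambda) = n * ( ln(alpha0/alpha_hat) - (alpha0 - alpha_hat) * W ),
# so lambda is a function of the data only through W.
check = n * (np.log(alpha0 / alpha_hat) - (alpha0 - alpha_hat) * W)
assert np.isclose(log_lambda, check)
```

Since $\hat \alpha_{MLE}$ maximizes the likelihood, $\ln \lambda \le 0$ always, and rejecting for small $\lambda$ is equivalent to rejecting when $W$ is far from $1/\alpha_0$.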

0 Answers