
How does one construct a likelihood ratio test for a distribution like $f(x:\theta)=\begin{cases} \exp{(-(x-\theta))},& {x>\theta}\\ 0,& \text{otherwise} \end{cases}$ or a distribution like $f(x:\theta)=\begin{cases} \theta x^{-2},& {x>\theta}\\ 0,& \text{otherwise} \end{cases}$?

When I compute the likelihood ratio, it is degenerate and does not depend on the sample $X$. I tried to proceed as in the uniform case, where I could make a change of variables involving the parameters $\theta_1$ and $\theta_0$ so that no $\theta$ appears in the final expression, but that approach failed for these two cases.

  • What hypotheses are you testing? (2017-02-24)
  • Oh sorry, I forgot to include it. It is the simple hypotheses $H_0:\theta=\theta_0$, $H_1:\theta=\theta_1$ with $\theta_1>\theta_0$. (2017-02-25)

1 Answer


I see no principal difference from the $U(0,\theta)$ case here.

Consider the shifted exponential case.

$$ f(x:\theta)=\begin{cases} \exp{(\theta-x)},& {x>\theta}\cr 0,& x \leq \theta\end{cases}$$ The likelihood function is $$ f(X_1,\ldots,X_n:\theta)=\begin{cases} \exp{(n\theta-n\overline X)},& {X_{(1)}>\theta}\cr 0,& {X_{(1)}\leq\theta}\end{cases}$$ For $\theta_1>\theta_0$ the likelihood ratio is $$ \dfrac{f(X_1,\ldots,X_n:\theta_1)}{f(X_1,\ldots,X_n:\theta_0)}= \begin{cases} \exp{(n\theta_1-n\theta_0)},& {X_{(1)}>\theta_1}\cr 0,& {\theta_0<X_{(1)}\leq\theta_1}\end{cases}$$ (Under either hypothesis $X_{(1)}>\theta_0$ almost surely, so the event $X_{(1)}\leq\theta_0$ can be ignored.)

We can construct the MPT $\phi$ with probability of type-I error (the size of the test) $$\alpha=P_{\theta_0}(\phi=1)\leq P_{\theta_0}\bigl(\,f(X_1,\ldots,X_n:\theta_1)>0\bigr)=P_{\theta_0}(X_{(1)}>\theta_1)=\left(P_{\theta_0}(X_1>\theta_1)\right)^n=\exp(n(\theta_0-\theta_1))\to 0\text{ as } n\to\infty.$$
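The closed-form tail probability $P_{\theta_0}(X_{(1)}>\theta_1)=\exp(n(\theta_0-\theta_1))$ can be checked numerically. A minimal Monte Carlo sketch in Python; the values of $\theta_0$, $\theta_1$ and $n$ are illustrative choices, not from the post:

```python
import numpy as np

# Monte Carlo check of P_{theta0}(X_(1) > theta1) = exp(n*(theta0 - theta1))
# for the shifted exponential f(x; theta) = exp(theta - x), x > theta.
# theta0, theta1, n are illustrative values (assumptions for this sketch).
rng = np.random.default_rng(0)
theta0, theta1, n = 0.0, 0.3, 5
reps = 200_000

# Each row is one sample of size n from the shifted exponential under theta0
samples = theta0 + rng.exponential(1.0, size=(reps, n))
x_min = samples.min(axis=1)  # the minimum order statistic X_(1)

empirical = (x_min > theta1).mean()
exact = np.exp(n * (theta0 - theta1))  # exp(-1.5), about 0.223
print(empirical, exact)  # the two values should agree to about two decimals
```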

It means that we should accept $H_0$ whenever $\theta_0<X_{(1)}\leq\theta_1$, and the size of the MPT cannot exceed $\exp(n(\theta_0-\theta_1))$.

This bound appears since for any $\alpha>\exp(n(\theta_0-\theta_1))$ there exists a test with smaller size but with power $1$. Indeed, the MPT with size $\alpha=\exp(n(\theta_0-\theta_1))$ looks as follows: $$ \phi(X_{(1)})=\begin{cases}1,& X_{(1)}>\theta_1\cr 0, & \theta_0<X_{(1)}\leq\theta_1\end{cases}$$ and its power is $$\beta=P_{\theta_1}(X_{(1)}>\theta_1)=1.$$

Next, there are two equivalent ways to construct the MPT with $\alpha<\exp(n(\theta_0-\theta_1))$.

1st way: a randomized test $$ \phi(X_{(1)})=\begin{cases}p,& X_{(1)}>\theta_1\cr 0, & \theta_0<X_{(1)}\leq\theta_1\end{cases}$$ Find $p$ from $$\alpha=pP_{\theta_0}(X_{(1)}>\theta_1) = p\exp(n(\theta_0-\theta_1)). $$ So, $p=\alpha \exp(n(\theta_1-\theta_0))\leq 1$. Recall that $n$ is a fixed number. The power of this test is $$\beta=pP_{\theta_1}(X_{(1)}>\theta_1)=p=\alpha \exp(n(\theta_1-\theta_0)). $$
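The size and power of this randomized test can be verified by simulation. A sketch under the assumed illustrative values $\theta_0=0$, $\theta_1=0.3$, $n=5$, $\alpha=0.1$ (chosen so that $\alpha<\exp(n(\theta_0-\theta_1))\approx 0.223$):

```python
import numpy as np

# Monte Carlo check of the randomized MPT.
# theta0, theta1, n, alpha are illustrative values, not from the post.
rng = np.random.default_rng(1)
theta0, theta1, n = 0.0, 0.3, 5
alpha = 0.1  # needs alpha < exp(n*(theta0 - theta1)), here about 0.223
p = alpha * np.exp(n * (theta1 - theta0))  # randomizing probability, about 0.448

def reject(x_min):
    # phi = p on the event {X_(1) > theta1}, and 0 otherwise
    return (x_min > theta1) & (rng.random(x_min.shape) < p)

reps = 200_000
x0 = (theta0 + rng.exponential(1.0, size=(reps, n))).min(axis=1)  # X_(1) under H0
x1 = (theta1 + rng.exponential(1.0, size=(reps, n))).min(axis=1)  # X_(1) under H1
size, power = reject(x0).mean(), reject(x1).mean()
print(size, power)  # size close to alpha, power close to p
```

Under $H_1$ the event $X_{(1)}>\theta_1$ has probability one, so the simulated power should land near $p$ itself.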

2nd way (the Karlin–Rubin theorem): a nonrandomized test $$ \phi(X_{(1)})=\begin{cases}1,& X_{(1)}>\theta_1+c\cr 0, & \theta_0<X_{(1)}\leq\theta_1+c\end{cases}$$ for some $c>0$. Find $c$: $$ \alpha=P_{\theta_0}(X_{(1)}>\theta_1+c) = \exp(n(\theta_0-\theta_1-c))=\exp(-nc)\exp(n(\theta_0-\theta_1)). $$ We obtain $\exp(-nc)=\alpha \exp(n(\theta_1-\theta_0))=p$, where $p$ is the randomizing probability from the first way.

Calculate the power: $$ \beta=P_{\theta_1}(X_{(1)}>\theta_1+c)=\exp(-nc)=\alpha \exp(n(\theta_1-\theta_0)). $$ The powers of the tests obtained by both ways are the same, and both tests are MPTs.
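The agreement between the nonrandomized test and the closed-form size and power can also be checked numerically. A sketch with illustrative values $\theta_0=0$, $\theta_1=0.3$, $n=5$, $\alpha=0.1$ (assumptions for this sketch):

```python
import numpy as np

# Compare the nonrandomized test {X_(1) > theta1 + c} with its closed-form
# size alpha and power p = alpha*exp(n*(theta1 - theta0)).
# theta0, theta1, n, alpha are illustrative values, not from the post.
rng = np.random.default_rng(2)
theta0, theta1, n = 0.0, 0.3, 5
alpha = 0.1
p = alpha * np.exp(n * (theta1 - theta0))
c = -np.log(p) / n  # solves exp(-n*c) = p; positive since p < 1

reps = 200_000
x0 = (theta0 + rng.exponential(1.0, size=(reps, n))).min(axis=1)  # X_(1) under H0
x1 = (theta1 + rng.exponential(1.0, size=(reps, n))).min(axis=1)  # X_(1) under H1
size = (x0 > theta1 + c).mean()   # should be close to alpha
power = (x1 > theta1 + c).mean()  # should be close to p
print(size, power)
```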

The second case, with the Pareto-type distribution, can be solved in a similar way.
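For instance, in the Pareto-type case $P_\theta(X_1>t)=\theta/t$ for $t\geq\theta$, so $(\theta_0/\theta_1)^n$ plays the role that $\exp(n(\theta_0-\theta_1))$ played above. A Monte Carlo sketch with illustrative values $\theta_0=1$, $\theta_1=1.2$, $n=5$:

```python
import numpy as np

# Monte Carlo check of P_{theta0}(X_(1) > theta1) = (theta0/theta1)^n
# for the Pareto-type density f(x; theta) = theta*x^(-2), x > theta.
# theta0, theta1, n are illustrative values (assumptions for this sketch).
rng = np.random.default_rng(3)
theta0, theta1, n = 1.0, 1.2, 5
reps = 200_000

# Inverse-CDF sampling: F(x) = 1 - theta/x for x >= theta, so X = theta/U
u = 1.0 - rng.random((reps, n))  # uniform on (0, 1], avoids division by zero
samples = theta0 / u
x_min = samples.min(axis=1)  # the minimum order statistic X_(1)

empirical = (x_min > theta1).mean()
exact = (theta0 / theta1) ** n  # (1/1.2)^5, about 0.402
print(empirical, exact)  # the two values should agree to about two decimals
```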