
A while back I asked a question on the relationship of the total variation distance between probability measures to hypothesis testing and got a very nice answer. I understand that that answer gives a trade-off relationship between the probability of a type I error (false positive) $\alpha$ and a type II error (miss) $\beta$, similar to what the Neyman-Pearson lemma provides.

Within the Neyman-Pearson framework, one can set $\alpha$ arbitrarily close to $0$ at the expense of the power of the statistical test $1-\beta$; however, as far as I understand, one cannot set $\alpha=0$.

I am wondering if there are non-trivial hypothesis tests out there that allow one to set $\alpha=0$. I haven't encountered one in my reading. My intuition tells me that there aren't, because 1) a hypothesis test must be a threshold-based test; and 2) as long as the probability distributions associated with the hypotheses are different, any non-trivial threshold test (i.e. a test that doesn't always accept the null hypothesis) has some finite chance of falsely rejecting the null hypothesis.

However, I thought I'd ask the experts here whether my intuition, and the reasoning behind this intuition, is correct. Perhaps there are statistical hypothesis tests not based on thresholds out there...

1 Answer


I am wondering if there are non-trivial hypothesis tests out there that allow one to set $\alpha=0$. I haven't encountered one in my reading. My intuition tells me that there aren't, because 1) a hypothesis test must be a threshold-based test; and 2) as long as the probability distributions associated with the hypotheses are different, any non-trivial threshold test (i.e. a test that doesn't always accept the null hypothesis) has some finite chance of falsely rejecting the null hypothesis.

I suppose the answer depends on how far your definition of triviality extends.

Let the observation be $X$, where $X$ is a Bernoulli random variable whose parameter $p$ has value $\frac{1}{2}$ when $H_1$ is true and value $1$ when $H_0$ is true (i.e. $X$ is a degenerate random variable that equals $1$ with probability $1$ when $H_0$ is true). Thus the two distributions are different. The likelihood ratio is
$$\Lambda(X) = \frac{p_1(X)}{p_0(X)} = \begin{cases}\infty, & X = 0,\\ 0.5, & X = 1,\end{cases}$$
and so the maximum-likelihood decision rule (a threshold test with threshold $1$) chooses $H_1$ when $X = 0$ and chooses $H_0$ when $X = 1$. The false-alarm probability $\alpha$ is
$$\alpha = P\{H_1 ~\text{chosen}\mid H_0~\text{true}\} = P\{X = 0 \mid H_0~\text{true}\} = 0,$$
while the power $1-\beta$ is
$$1-\beta = P\{H_1 ~\text{chosen}\mid H_1~\text{true}\} = P\{X = 0 \mid H_1~\text{true}\} = \frac{1}{2}.$$
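The example above is easy to check numerically. Here is a minimal sketch (the function names `draw` and `choose_h1` are mine, not from the answer) that simulates the maximum-likelihood rule and estimates $\alpha$ and the power by Monte Carlo:

```python
import random

random.seed(0)
N = 100_000

def draw(h):
    """Draw the observation X under hypothesis h:
    degenerate at 1 under H0, Bernoulli(1/2) under H1."""
    if h == "H0":
        return 1
    return 1 if random.random() < 0.5 else 0

def choose_h1(x):
    """ML decision rule: choose H1 exactly when X = 0."""
    return x == 0

# Type I error: fraction of H0 draws on which H1 is chosen.
alpha = sum(choose_h1(draw("H0")) for _ in range(N)) / N

# Power 1 - beta: fraction of H1 draws on which H1 is chosen.
power = sum(choose_h1(draw("H1")) for _ in range(N)) / N

print(alpha)  # exactly 0.0, since X = 1 always under H0
print(power)  # close to 0.5
```

Note that $\alpha$ is exactly zero here, not merely small: under $H_0$ the event $\{X=0\}$ has probability zero, so the rejection region is never entered.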

  • @Bullmoose Perhaps you need to state it as something like "the likelihood ratio is finite with probability $1$" rather than in terms of Borel sets and $\sigma$-algebras. (2011-11-29)