A while back I asked a question about how the total variation distance between probability measures relates to hypothesis testing, and got a very nice answer. I understand that that answer gives a trade-off between the probability of a type I error (false positive), $\alpha$, and that of a type II error (miss), $\beta$, similar to what the Neyman-Pearson lemma provides.
Within the Neyman-Pearson framework, one can set $\alpha$ arbitrarily close to 0 at the expense of the power of the statistical test, $1-\beta$; however, as far as I understand, one cannot set $\alpha=0$ exactly.
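To make the trade-off concrete, here is a small numerical sketch of my own (a toy example, not taken from the linked question): a one-sided threshold test of $H_0: X \sim N(0,1)$ against $H_1: X \sim N(2,1)$ that rejects when the observation exceeds a threshold $c$.

```python
# Illustration (my own toy example): alpha / power trade-off for a
# simple threshold test of H0: X ~ N(0, 1) against H1: X ~ N(2, 1),
# rejecting H0 when the observation X exceeds a threshold c.
from scipy.stats import norm

mu1 = 2.0  # mean under the alternative (chosen arbitrarily for illustration)

for c in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
    alpha = norm.sf(c)           # P(X > c | H0): type I error probability
    power = norm.sf(c, loc=mu1)  # P(X > c | H1): power, i.e. 1 - beta
    print(f"c = {c:.1f}:  alpha = {alpha:.2e},  1 - beta = {power:.3f}")
```

Pushing $c$ up makes $\alpha$ as small as I like (roughly $10^{-9}$ at $c=6$), but the power collapses toward 0 at the same time, and since the normal survival function is strictly positive for every finite $c$, $\alpha$ never reaches 0 exactly.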
I am wondering if there are non-trivial hypothesis tests out there that allow one to set $\alpha=0$. I haven't encountered one in my reading. My intuition tells me that there aren't, because 1) a hypothesis test must be a threshold-based test; and 2) as long as the probability distributions associated with the hypotheses are different, any non-trivial threshold test (i.e. a test that doesn't always accept the null hypothesis) has some nonzero probability of falsely rejecting the null hypothesis.
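To spell out the second part of that intuition in the setting I have in mind (a continuous test statistic $T$ whose density $f_0$ under the null is positive everywhere, as for a Gaussian), for any finite threshold $c$

$$\alpha(c) = P_{H_0}(T > c) = \int_c^{\infty} f_0(t)\,dt > 0,$$

so the only way to force $\alpha = 0$ with such a test seems to be the trivial test that never rejects.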
However, I thought I'd ask the experts here whether my intuition, and the reasoning behind it, is correct. Perhaps there are statistical hypothesis tests out there that are not based on thresholds...