The Kullback-Leibler divergence (a.k.a. relative entropy) has a nice property in hypothesis testing: given an observed measurement $m\in \mathcal{Q}$ and two probability distributions $P_0$ and $P_1$ defined over the measurement space $\mathcal{Q}$, if $H_0$ is the hypothesis that $m$ was generated from $P_0$ and $H_1$ is the hypothesis that $m$ was generated from $P_1$, then for any test the probabilities of Type I and Type II errors satisfy
$d(\alpha,\beta)\leq D(P_0\|P_1)$
where
$D(P_0\|P_1)=\sum_{x\in\mathcal{Q}}P_0(x)\log_2\left(\frac{P_0(x)}{P_1(x)}\right)$
is the Kullback-Leibler divergence,
$d(\alpha,\beta)=\alpha\log_2\frac{\alpha}{1-\beta}+(1-\alpha)\log_2\frac{1-\alpha}{\beta}$
is the binary relative entropy, and $\alpha$ and $\beta$ are the probabilities of Type I and Type II errors, respectively.
This relationship allows one to bound the probabilities of Type I and Type II errors.
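To make sure I have the statement right, here is a small numerical sketch of how I understand the bound; the distributions and the likelihood-ratio test are just ones I made up for illustration:

```python
import numpy as np

# Made-up finite measurement space Q = {0, 1, 2, 3} with two candidate distributions.
P0 = np.array([0.4, 0.3, 0.2, 0.1])
P1 = np.array([0.1, 0.2, 0.3, 0.4])

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in bits, as defined above."""
    return np.sum(p * np.log2(p / q))

# A deterministic test: reject H0 whenever P1(x) > P0(x) (a simple likelihood-ratio rule).
reject = P1 > P0

alpha = P0[reject].sum()    # Type I error: reject H0 although m ~ P0
beta = P1[~reject].sum()    # Type II error: accept H0 although m ~ P1

# Binary relative entropy d(alpha, beta) as defined above.
d = alpha * np.log2(alpha / (1 - beta)) + (1 - alpha) * np.log2((1 - alpha) / beta)

print(f"d(alpha, beta) = {d:.4f} <= D(P0||P1) = {kl(P0, P1):.4f}")
```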
I am wondering if something similar exists for Total Variation distance:
$TV(P_0,P_1)=\frac{1}{2}\sum_{x\in\mathcal{Q}}\left| P_0(x)-P_1(x)\right|$
I am aware that
$2\,TV(P_0,P_1)^2\leq D(P_0\|P_1)$
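(which, if I understand correctly, is a base-2 form of Pinsker's inequality). A quick numerical check, again with distributions I made up:

```python
import numpy as np

# Same made-up example distributions as above; kl() and tv() are my own helper names.
P0 = np.array([0.4, 0.3, 0.2, 0.1])
P1 = np.array([0.1, 0.2, 0.3, 0.4])

def kl(p, q):
    """D(p||q) in bits, matching the definition in the question."""
    return np.sum(p * np.log2(p / q))

def tv(p, q):
    """Total variation distance."""
    return 0.5 * np.sum(np.abs(p - q))

# Pinsker's inequality is usually stated with the natural-log divergence,
# 2*TV^2 <= D_nats; since D_bits = D_nats / ln(2) >= D_nats, the base-2
# version checked here holds as well (it is just looser).
lhs = 2 * tv(P0, P1) ** 2
rhs = kl(P0, P1)
print(f"2*TV^2 = {lhs:.4f} <= D(P0||P1) = {rhs:.4f}")
```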
Is there more?
Unfortunately, I am not very well-versed in hypothesis testing and statistics (I know the basics and have a pretty good background in probability theory). Any help would be appreciated.