
I have a series of $N$ independent Bernoulli trials, each succeeding with probability $p$ (and failing with probability $1-p$).

I need to bound the probability of passing more than $N/2$ of the tests, as a function of $N$ and $p$.

The obvious approach is the Chernoff bound: $\varepsilon \leq 2^{-N(p-\frac{1}{2})^2}$, but this is not sufficient for me. I actually need a stronger dependence on $p$. Is there one available?

I tried fitting Hoeffding's, Azuma's and Bernstein's inequalities, but none of them seems to give a sufficiently strong dependence on $p$.

Is there a convenient estimate?

What I need is something like: $\varepsilon \leq 2^{-Np}$

  • Comment (2012-10-17): $p$ is closer to $0$; passing a test is highly improbable.
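For context, the tail probability in question can be computed exactly for moderate $N$ by summing the binomial pmf, which is useful for checking how tight any proposed bound is. A quick sketch (the function name `tail_prob` is mine):

```python
from math import comb

def tail_prob(n: int, p: float) -> float:
    """Exact P(X > n/2) for X ~ Binomial(n, p), by summing the pmf."""
    k_min = n // 2 + 1  # smallest integer k with k > n/2
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))
```

For small $p$ the exact tail is dramatically smaller than the value $2^{-N(p-\frac{1}{2})^2}$, which is exactly the gap the question is asking about.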

2 Answers


Did you try the relative-entropy version of Chernoff's bound?

Given $n$ samples, define $\hat p$ to be the empirical probability (i.e. successes divided by total trials). Then for any $q \in [0,1]$ we have

$ P(\hat p \geq q) < \exp(-n \cdot \mathrm{RE}(q, p)) $, where the relative entropy $\mathrm{RE}(q,p)$ is defined as $ \mathrm{RE}(q,p) = q \ln \frac{q}{p} + (1-q) \ln \frac{1-q}{1-p} $

The same bound holds for $P(\hat p \leq q)$ (the upper-tail form is valid for $q \geq p$, the lower-tail form for $q \leq p$). In your case you would be interested in $q = 1/2$.
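This bound is straightforward to evaluate numerically. A minimal sketch (function names are mine; the `0 \cdot \ln 0 := 0` convention handles the endpoints $q \in \{0, 1\}$):

```python
from math import exp, log

def rel_entropy(q: float, p: float) -> float:
    """RE(q, p) = q ln(q/p) + (1-q) ln((1-q)/(1-p)), with 0 * ln 0 := 0."""
    def term(a: float, b: float) -> float:
        return 0.0 if a == 0.0 else a * log(a / b)
    return term(q, p) + term(1.0 - q, 1.0 - p)

def chernoff_tail_bound(n: int, p: float, q: float) -> float:
    """Bound exp(-n * RE(q, p)) on P(p_hat >= q), valid for q >= p."""
    return exp(-n * rel_entropy(q, p))
```

Note that for fixed $q = 1/2$ and $p \to 0$, $\mathrm{RE}(1/2, p) \approx \frac{1}{2}\ln\frac{1}{p}$ grows without bound, so this form does capture a strong dependence on $p$, unlike the additive version quoted in the question.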


If $p$ is tending to $0$, one option is just to use the union bound. There are at most $\binom{N}{N/2} \leq 2^N$ sets of size $N/2$, each of which occurs with probability at most $p^{N/2}$. It follows that the probability that some set of $N/2$ events occurs is at most $2^N p^{N/2} = (4p)^{N/2}$, which tends to $0$ whenever $p < 1/4$.

This is in general not very tight (if you need any sort of tight bound, working through a Chernoff bound as David Harris suggested is probably the way to go), but for applications where you only need the probability to tend to $0$, a crude bound like this works nicely.
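The union bound above is a one-liner to evaluate; a sketch (the name `union_bound` is mine):

```python
def union_bound(n: int, p: float) -> float:
    """Crude bound 2^n * p^(n/2) = (4p)^(n/2) on P(more than n/2 successes)."""
    return (4.0 * p) ** (n / 2)
```

Writing the bound as $(4p)^{N/2}$ makes the regime of usefulness explicit: it decays geometrically in $N$ exactly when $4p < 1$.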