Given $0 < p_{i}, q_{i} < 1$ for all $i$, define
$$\begin{aligned} E_{1} &= \min(p_{1},\, q_{1}) + \min(1-p_{1},\, 1-q_{1}), \\ E_{2} &= \min(p_{1}p_{2},\, q_{1}q_{2}) + \min(p_{1}(1-p_{2}),\, q_{1}(1-q_{2})) \\ &\quad + \min((1-p_{1})p_{2},\, (1-q_{1})q_{2}) + \min((1-p_{1})(1-p_{2}),\, (1-q_{1})(1-q_{2})), \\ E_{3} &= \min(p_{1}p_{2}p_{3},\, q_{1}q_{2}q_{3}) + \min(p_{1}p_{2}(1-p_{3}),\, q_{1}q_{2}(1-q_{3})) + \cdots \end{aligned}$$
and so on.
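To make the definition concrete, here is a minimal numeric sketch (Python; the helper `E` and the sample parameters are my own, purely illustrative) that evaluates $E_k$ directly as a sum over all $2^k$ sign patterns:

```python
import itertools

def E(k, p, q):
    # Sum over all 2^k binary patterns: each pattern picks p_i or (1-p_i)
    # in one product and the matching q_i or (1-q_i) in the other,
    # and contributes the minimum of the two products.
    total = 0.0
    for bits in itertools.product([0, 1], repeat=k):
        prod_p = prod_q = 1.0
        for i, s in enumerate(bits):
            prod_p *= p[i] if s else 1 - p[i]
            prod_q *= q[i] if s else 1 - q[i]
        total += min(prod_p, prod_q)
    return total

# Illustrative parameter sequences; prints E_1 through E_8.
p, q = [0.7] * 8, [0.2] * 8
print([round(E(k, p, q), 4) for k in range(1, 9)])
```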
Notice that $E_{k+1}$ is formed by splitting each summand of $E_{k}$ into two, using $p_{k+1}$ and $q_{k+1}$. Notice also that $E_{k}$ has $2^{k}$ terms.
$E_{k}$ is actually twice the Bayes error for two classes with multivariate Bernoulli distributions (parameter vectors $(p_{1},\dots,p_{k})$ and $(q_{1},\dots,q_{k})$, equal priors).
It is easy to prove that $E_{k+1} \le E_{k}$ for all $k$.
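For the record, one way to see this: since $\min(x_{1},y_{1}) + \min(x_{2},y_{2}) \le \min(x_{1}+x_{2},\, y_{1}+y_{2})$, the two children of each summand $\min(a,b)$ of $E_{k}$ recombine as
$$\min(a\,p_{k+1},\, b\,q_{k+1}) + \min(a(1-p_{k+1}),\, b(1-q_{k+1})) \le \min\bigl(a\,p_{k+1} + a(1-p_{k+1}),\; b\,q_{k+1} + b(1-q_{k+1})\bigr) = \min(a,b),$$
and summing over all $2^{k}$ summands gives $E_{k+1} \le E_{k}$.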
I would like to prove the following: for every $\varepsilon$ with $0 < \varepsilon < 1$, if $|p_{i} - q_{i}| > \varepsilon$ for all $i$, then
$$\lim_{n \to \infty} E_{n} = 0.$$
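As numerical evidence (Python again; same direct $2^k$-term evaluation as the sketch above, with arbitrary illustrative parameters satisfying $|p_{i} - q_{i}| = 0.5$ for every $i$), $E_{n}$ does seem to shrink steadily toward $0$ in such cases:

```python
import itertools, math

def E(k, p, q):
    # Same 2^k-term evaluation of E_k as in the sketch above.
    return sum(
        min(math.prod(p[i] if s else 1 - p[i] for i, s in enumerate(bits)),
            math.prod(q[i] if s else 1 - q[i] for i, s in enumerate(bits)))
        for bits in itertools.product([0, 1], repeat=k))

p, q = [0.7] * 16, [0.2] * 16   # |p_i - q_i| = 0.5 for all i
for n in (2, 4, 8, 16):
    print(n, E(n, p, q))
```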
Any ideas?