This is true if $p \ne q$. A probabilistic approach:
Let $X$ be a normalized Binomial$(n,p)$ random variable (divided by $n$, so that its range lies in $[0,1]$); equivalently, $X = \frac{1}{n} \sum_{i=1}^n B_i$, where the $B_i$ are i.i.d. Bernoulli with success probability $p$.
Let $Y$ be the analogous random variable built from a Binomial$(n,q)$, with $q \ne p$.
Then, informally, $E_n$ is the 'area' below the smaller of the two probability mass functions, and it can be expressed as the sum of two "tails". It only remains to show that each tail starts at some strictly positive distance $\delta$ from the corresponding mean (here is where the condition $q \ne p$ is required; e.g. take $\delta = |p-q|/2$), and that it is indeed a tail (it does not include the mean). Applying Chebyshev's inequality, and noting that the variances $\operatorname{Var}(X) = p(1-p)/n$ and $\operatorname{Var}(Y) = q(1-q)/n$ tend to zero, the result follows.
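To spell out the Chebyshev step (a sketch: the split point $t = (p+q)/2$ and $\delta = |q-p|/2$ are my choices, and I'm reading $E_n$ as the sum of the pointwise minima of the two pmfs), assume WLOG $p < q$ and split the sum at $t$:

$$E_n \le P(X \ge t) + P(Y < t) \le P\big(|X - p| \ge \delta\big) + P\big(|Y - q| \ge \delta\big) \le \frac{p(1-p) + q(1-q)}{n\,\delta^2} \xrightarrow[n \to \infty]{} 0.$$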
BTW: $E_n$ could be thought of as the probability of error in a classical two-class Bayesian classification problem, where the features are multidimensional i.i.d. Bernoulli and we are required to show that the probability of error tends to zero as the dimension tends to infinity. Duda & Hart has a very similar exercise in Chapter 2 (page 78).
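For anyone curious, here is a quick numerical sanity check (a sketch, assuming $E_n = \sum_k \min\{\Pr(nX = k), \Pr(nY = k)\}$ as above; the values $p = 0.3$, $q = 0.5$ are arbitrary):

```python
# Numerical check: E_n as the sum over k of the pointwise minimum
# of the two binomial pmfs (my reading of the question's E_n).
import numpy as np
from scipy.stats import binom

p, q = 0.3, 0.5  # arbitrary distinct success probabilities (assumption)

for n in (10, 100, 1000, 10000):
    k = np.arange(n + 1)
    e_n = np.minimum(binom.pmf(k, n, p), binom.pmf(k, n, q)).sum()
    print(f"n = {n:5d}   E_n = {e_n:.3e}")
```

The printed values shrink toward zero as $n$ grows, consistent with the Chebyshev argument above.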