I am already very familiar with using the Cramér-Rao bound to find the lower bound on the variance of an unbiased estimator of the success probability of a Bernoulli trial.
I am interested in the same problem; however, I would like to solve it using the following corollary:
Let $X_1, \dots, X_n$ be iid $f(x \mid \theta)$, where $f(x \mid \theta)$ satisfies the conditions of the Cramér-Rao Theorem [i.e., the inequality]. Let $L$ denote the likelihood function, and $\ell$ be the corresponding loglikelihood function. If $W(\mathbf{X}) = W(X_1, \dots, X_n)$ is any unbiased estimator of $\tau(\theta)$, then $W(\mathbf{X})$ attains the Cramér-Rao Lower Bound if and only if $$a(\theta)[W(\mathbf{x})-\tau(\theta)] = \dfrac{\partial \ell}{\partial \theta}$$ for some function $a(\theta)$.
Suppose $X_1, \dots, X_n \overset{\text{iid}}{\sim} \text{Bernoulli}(p)$. We have $$L(p) = \prod_{i=1}^{n}p^{x_i}(1-p)^{1-x_i} = p^{\sum_{i=1}^{n}x_i}(1-p)^{n-\sum_{i=1}^{n}x_i}\text{.}$$ I have already shown that the MLE of $p$ is $\hat{p} = \bar{X}$, the arithmetic mean of the $X_1, \dots, X_n$, and that it is unbiased.
Using $L$, we compute the loglikelihood, $$\ell(p) = \sum_{i=1}^{n}x_i\cdot \log(p) + \left(n - \sum_{i=1}^{n}x_i\right)\log(1-p) $$ with $$\dfrac{\partial \ell}{\partial p} = \dfrac{\sum_{i=1}^{n}x_i}{p}-\dfrac{n - \sum_{i=1}^{n}x_i}{1-p}\text{.}$$ Now, I understand I'm going to need to write this in the form $$a(p)(\bar{X}-p)\text{,} $$ but it isn't clear how to do this. We have $$\dfrac{\partial \ell}{\partial p} = \dfrac{n}{p}\left[\dfrac{\sum_{i=1}^{n}x_i}{n}-\dfrac{p(n - \sum_{i=1}^{n}x_i)}{n(1-p)}\right] = \dfrac{n}{p}\left[\bar{X}-\dfrac{p(n - \sum_{i=1}^{n}x_i)}{n(1-p)}\right]\text{,}$$ not quite what I'm looking for.
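For what it's worth, a symbolic check (a sketch using SymPy, writing $s = \sum_{i=1}^{n} x_i$ as a single symbol, which is my own shorthand) confirms both the score function above and that my factored form is algebraically equal to it, so the difficulty really is only in the rearrangement:

```python
import sympy as sp

# Symbols: p is the parameter, n the sample size, s stands for sum(x_i).
p, n, s = sp.symbols('p n s', positive=True)

# Loglikelihood and score, matching the derivation above.
ell = s * sp.log(p) + (n - s) * sp.log(1 - p)
score = sp.diff(ell, p)                      # = s/p - (n - s)/(1 - p)

# My attempted factorization (n/p)[x-bar - p(n - s)/(n(1-p))].
attempt = (n / p) * (s / n - p * (n - s) / (n * (1 - p)))

# The two expressions agree symbolically.
print(sp.simplify(score - attempt))          # 0
```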