First, let $X_{i}$ be distributed as a Beta distribution with parameters $\alpha$ and $\beta$. Then it has mean $\mu = \frac{\alpha}{(\alpha+\beta)}$ and variance $\sigma^{2} = \frac{\alpha\beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)}$.
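As a quick sanity check of those two formulas, here is a short sketch comparing them against `scipy.stats.beta` for an arbitrary (illustrative) choice of $\alpha$ and $\beta$:

```python
# Check the Beta(alpha, beta) mean and variance formulas against scipy.
# The parameter values are illustrative, not from the question.
from scipy.stats import beta as beta_dist

alpha, b = 2.0, 5.0
mu = alpha / (alpha + b)
var = alpha * b / ((alpha + b) ** 2 * (alpha + b + 1))

m, v = beta_dist.stats(alpha, b, moments='mv')
print(abs(m - mu) < 1e-12, abs(v - var) < 1e-12)
```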
Let $\bar{X}_{n} = \frac{1}{n}\sum_{i=1}^{n}X_{i}$; its mean is the common mean $\mu$ of the $X_{i}$.
The central limit theorem tells us that $ \sqrt{n}\bigl[ \bar{X}_{n} - \mu\bigr] \xrightarrow{d} \mathcal{N}(0,\sigma^{2}),$ where $\xrightarrow{d}$ denotes convergence in distribution.
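You can see this statement empirically with a small simulation: the variance of $\sqrt{n}(\bar{X}_n - \mu)$ over many replications should be close to $\sigma^2$. The parameter and sample-size choices below are illustrative.

```python
# Monte Carlo sketch of the CLT for Beta samples: the variance of
# sqrt(n) * (xbar - mu) should approach sigma^2 (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
alpha, b, n, reps = 2.0, 5.0, 1000, 10000
mu = alpha / (alpha + b)
sigma2 = alpha * b / ((alpha + b) ** 2 * (alpha + b + 1))

xbar = rng.beta(alpha, b, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbar - mu)
print(z.mean(), z.var(), sigma2)  # mean near 0, variance near sigma2
```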
From this knowledge, we can apply the Delta method with the statistic function of interest being $T_{n} = \bar{X}_{n}(1-\bar{X}_{n})$, or more simply $T(x) = x(1-x)$. The Delta method then tells us that $ \sqrt{n}\biggl[ T(\bar{X}_{n}) - T(\mu)\biggr] \xrightarrow{d} \mathcal{N}(0,\sigma^{2}[T'(\mu)]^{2}).$
Now, $T'(x) = 1-2x$, and so $[T'(\mu)]^{2} = \biggl[1-\frac{2\alpha}{(\alpha + \beta)}\biggr]^{2} = 4\biggl[\frac{\alpha}{(\alpha+\beta)}\biggr]^{2} -4\biggl[\frac{\alpha}{(\alpha+\beta)}\biggr] + 1.$
So, the variance for the asymptotic distribution of $\sqrt{n}[T_{n}-T(\mu)]$ is given by: $ \sigma_{T}^{2} = \biggl(\frac{\alpha\beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)}\biggr)\biggl(4\biggl[\frac{\alpha}{(\alpha+\beta)}\biggr]^{2} -4\biggl[\frac{\alpha}{(\alpha+\beta)}\biggr] + 1\biggr).$
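A Monte Carlo sketch of this result: simulate many samples of size $n$ from a Beta distribution, form $\sqrt{n}\,[T(\bar{X}_n) - T(\mu)]$, and compare its empirical variance with $\sigma_T^2$. Parameter values are illustrative; note that $4\mu^2 - 4\mu + 1 = (1-2\mu)^2$.

```python
# Verify the Delta-method variance sigma_T^2 by simulation
# (illustrative alpha, beta, n, and replication count).
import numpy as np

rng = np.random.default_rng(0)
alpha, b, n, reps = 2.0, 5.0, 1000, 10000

mu = alpha / (alpha + b)
sigma2 = alpha * b / ((alpha + b) ** 2 * (alpha + b + 1))
sigma_T2 = sigma2 * (1 - 2 * mu) ** 2  # same as sigma2 * (4*mu**2 - 4*mu + 1)

xbar = rng.beta(alpha, b, size=(reps, n)).mean(axis=1)
stat = np.sqrt(n) * (xbar * (1 - xbar) - mu * (1 - mu))
print(stat.var(), sigma_T2)  # should be close for large n
```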
Hopefully you can take it from there. All that remains is adjusting the given asymptotic distribution of $\sqrt{n}[T_{n}-T(\mu)]$ to get the approximate distribution of $T_{n}$ itself, and this step should be discussed anywhere that the CLT is discussed.
As for the 'intuition' behind this method, it is a very similar idea to a transformation of variables. The Wikipedia proof for the univariate case does a good job of showing what happens.
When you expand the function $g(X)$ around the point $\theta$ and you assume only a linear approximation, then what you're left with as a scale factor for the term $(X-\theta)$ is $g'(\theta)$, which just comes from a simple Taylor series approximation. Dividing both sides by $g'(\theta)$ leaves you with $X-\theta$ on the right hand side, which is something with a known asymptotic distribution. That means all the stuff on the left hand side has to have that same asymptotic distribution. Multiplying by $g'(\theta)$ then gives the result. This also shows why the assumption that $g'(\theta) \neq 0$ is needed (although it is not strictly necessary if you make higher order arguments).
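The linearization underlying this argument is easy to see numerically with the same $g(x) = x(1-x)$ as above: the error of the first-order Taylor approximation shrinks like the square of the distance from the expansion point. Here, since $g$ is quadratic, the error is exactly $h^2$. The expansion point $\theta$ is an illustrative value.

```python
# First-order Taylor approximation of g(x) = x(1 - x) around theta:
# g(theta + h) ~ g(theta) + g'(theta) * h, with error exactly h^2 here
# because g is quadratic (theta is an illustrative choice).
def g(x):
    return x * (1 - x)

theta = 0.3
gprime = 1 - 2 * theta  # g'(theta)
errors = []
for h in (0.1, 0.01, 0.001):
    exact = g(theta + h)
    linear = g(theta) + gprime * h
    errors.append(abs(exact - linear))
    print(h, errors[-1])  # error equals h**2
```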