$\newcommand{\var}{\operatorname{var}}$ $\newcommand{\E}{\mathbb{E}}$
Your notation is confusing: you use $x$ to refer to two different things, and you seem to use the lower-case $\bar x$ to refer to the sample mean after using capital letters to refer to random variables initially.
Remember that the variance of a random variable equals the expected value of its square minus the square of its expected value. That lets us find the expected value of its square when we know its variance and its expected value.
I surmise that by $\hat\theta$ you mean $(X_1+\cdots+X_n)/n$. That makes $\hat\theta$ an unbiased estimator of $\theta$.
So $\E(\hat\theta) = \theta$ and
$$ \var(\hat\theta) = \var\left( \frac{X_1+\cdots+X_n}{n} \right) = \frac{1}{n^2}\var(X_1+\cdots+X_n) = \frac{1}{n^2}\bigl(\var(X_1)+\cdots+\var(X_n)\bigr) = \frac{1}{n^2}\cdot n\var(X_1) = \frac{\var(X_1)}{n} = \frac{\theta(1-\theta)}{n}. $$
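As a quick numerical sanity check (not part of the derivation), here is a short Monte Carlo simulation in plain Python; the values $\theta = 0.3$, $n = 20$, and the trial count are illustrative choices of mine, not from the question:

```python
import random

# Monte Carlo check that E(theta_hat) = theta and
# var(theta_hat) = theta*(1 - theta)/n for Bernoulli samples.
# theta, n, trials are illustrative choices, not from the answer above.
random.seed(0)
theta, n, trials = 0.3, 20, 200_000

estimates = []
for _ in range(trials):
    # theta_hat = (X_1 + ... + X_n)/n for one sample of size n
    xs = [1 if random.random() < theta else 0 for _ in range(n)]
    estimates.append(sum(xs) / n)

mean_hat = sum(estimates) / trials
var_hat = sum((e - mean_hat) ** 2 for e in estimates) / trials

print(mean_hat)  # should be close to theta = 0.3
print(var_hat)   # should be close to theta*(1-theta)/n = 0.0105
```

The empirical mean and variance of the simulated $\hat\theta$ values should land close to $\theta$ and $\theta(1-\theta)/n$ respectively, up to Monte Carlo noise.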
Now we want $\E(\hat\theta(1-\hat\theta))$:
$$ \E(\hat\theta(1-\hat\theta)) = \E(\hat\theta) - \E(\hat\theta^2) = \theta - \Bigl( \var(\hat\theta) + \bigl(\E(\hat\theta)\bigr)^2 \Bigr) = \theta - \left( \frac{\theta(1-\theta)}{n} + \theta^2 \right) $$
$$ = \frac{n\theta - \theta(1-\theta) - n\theta^2}{n} = \frac{n-1}{n}\,\theta(1-\theta). $$
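The same kind of simulation can check this expectation; with the illustrative choices $\theta = 0.3$, $n = 10$ below (mine, not from the question), the empirical mean of $\hat\theta(1-\hat\theta)$ should track $\frac{n-1}{n}\theta(1-\theta)$ rather than $\theta(1-\theta)$:

```python
import random

# Monte Carlo check of E(theta_hat*(1 - theta_hat)) = ((n-1)/n)*theta*(1-theta).
# theta, n, trials are illustrative choices, not from the answer above.
random.seed(1)
theta, n, trials = 0.3, 10, 200_000

total = 0.0
for _ in range(trials):
    th_hat = sum(1 for _ in range(n) if random.random() < theta) / n
    total += th_hat * (1 - th_hat)

empirical = total / trials
exact = (n - 1) / n * theta * (1 - theta)  # = 0.9 * 0.21 = 0.189

print(empirical)            # should be close to 0.189, not to 0.21
print(exact)
```

The gap between the empirical average and $\theta(1-\theta) = 0.21$ is exactly the point of the derivation above.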
From this you can draw a conclusion about whether $\hat\theta(1-\hat\theta)$ is an unbiased estimator of $\theta(1-\theta)$.
(By the way, since $\hat\theta$ is the maximum-likelihood estimator of $\theta$, the invariance property of maximum likelihood makes $\hat\theta(1-\hat\theta)$ the maximum-likelihood estimator of $\theta(1-\theta)$.)