$X_1,X_2,\ldots,X_n$ are iid $B(1,\theta)$ random variables, where $0< \theta<1$. Let $w = 1$ if $\sum_i X_i = n$ and $0$ otherwise. What is the best unbiased estimator of $\theta^2$?
Attempt:
Would it be $w = 1$ if $\sum_i X_i = 2$ and $0$ otherwise?
Use the Rao–Blackwell theorem. Observe that $E(X_1X_2) = \theta^2$. The minimal sufficient statistic is $X_1+\cdots+X_n$, so the estimator you want is $E(X_1 X_2 \mid X_1+\cdots+X_n)$. For $x\ge 2$,
$$
\begin{align}
E(X_1 X_2 \mid X_1+\cdots+X_n=x) &= \Pr(X_1 X_2 =1 \mid X_1+\cdots+X_n=x) \\[4pt]
&= \frac{\Pr(X_1 = X_2 =1 \text{ and } X_3+\cdots+X_n=x-2)}{\Pr(X_1+\cdots+X_n=x)} \\[4pt]
&= \frac{\theta^2\dbinom{n-2}{x-2}\theta^{x-2}(1-\theta)^{n-x}}{\dbinom{n}{x}\theta^x(1-\theta)^{n-x}} = \frac{\dbinom{n-2}{x-2}}{\dbinom nx} = \frac{x(x-1)}{n(n-1)},
\end{align}
$$
and the conditional expectation is $0$ when $x<2$, which the same formula gives. Therefore the Rao–Blackwell estimator is
$$
\frac{(X_1+\cdots+X_n)(X_1+\cdots+X_n-1)}{n(n-1)}.
$$
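As a quick sanity check (my addition, not part of the argument), here is a small Monte Carlo sketch in Python, with arbitrarily chosen $n$ and $\theta$, showing that $T(T-1)/(n(n-1))$ with $T=X_1+\cdots+X_n$ averages to $\theta^2$:

```python
# Monte Carlo sanity check: the Rao-Blackwellized estimator T(T-1)/(n(n-1))
# should average to theta^2.  n, theta, reps are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n, theta, reps = 10, 0.3, 200_000

X = rng.binomial(1, theta, size=(reps, n))   # reps iid Bernoulli(theta) samples of size n
T = X.sum(axis=1)                            # sufficient statistic T = X_1 + ... + X_n
estimates = T * (T - 1) / (n * (n - 1))      # Rao-Blackwell estimator of theta^2

print("mean of estimator:", estimates.mean())   # close to 0.09
print("theta^2          :", theta**2)           # 0.09
```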
If you can also prove that the sufficient statistic is complete, then by the Lehmann–Scheffé theorem this is the unique best unbiased estimator.
To show that, suppose $g(X_1+\cdots+X_n)$ is an unbiased estimator of $0$. Then
$$
0=E(g(X_1+\cdots+X_n)) = \sum_{x=0}^n g(x) \binom nx \theta^x (1-\theta)^{n-x}
$$
for all $\theta\in(0,1)$. But a polynomial in $\theta$ can vanish everywhere on an interval only if all of its coefficients are $0$, and that happens only if $g(x)=0$ for all $x\in\{0,1,2,\ldots,n\}$. Hence $X_1+\cdots+X_n$ admits no nontrivial unbiased estimator of $0$, i.e. it is a complete statistic, so the Lehmann–Scheffé theorem applies.
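As a small illustration (again my addition, not part of the proof), one can verify symbolically for a small value such as $n=3$ that the only $g$ making this polynomial identically zero is $g\equiv 0$; a sketch with sympy:

```python
# Symbolic illustration of the completeness argument for a small n:
# if sum_x g(x) C(n,x) theta^x (1-theta)^(n-x) is the zero polynomial
# in theta, then every g(x) must be 0.
import sympy as sp

n = 3
theta = sp.symbols('theta')
g = sp.symbols('g0:%d' % (n + 1))            # unknown values g(0), ..., g(n)

expectation = sum(g[x] * sp.binomial(n, x) * theta**x * (1 - theta)**(n - x)
                  for x in range(n + 1))

# Expand as a polynomial in theta and require every coefficient to vanish.
coeffs = sp.Poly(sp.expand(expectation), theta).all_coeffs()
solution = sp.solve(coeffs, g, dict=True)
print(solution)   # [{g0: 0, g1: 0, g2: 0, g3: 0}]
```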
An unbiased estimator of $\theta$ using an i.i.d. sample $(X_i)_{1\leqslant i\leqslant n}$ of Bernoulli random variables with parameter $\theta$ is $U_n=\frac1n\sum\limits_{i=1}^nX_i$. If $n\geqslant2$, an unbiased estimator of $\theta^2$ is $V_n=\frac1{n(n-1)}\sum\limits_{i\ne j}X_iX_j=\frac{n}{n-1}U_n(U_n-\frac1n)$. These are both unbiased because $\mathrm E(X_i)=\theta$ for every $i$ and $\mathrm E(X_iX_j)=\theta^2$ for every $i\ne j$. The maximum likelihood estimator of $\theta$ is $U_n$, hence the maximum likelihood estimator of $\theta^2$ is $U_n^2=\frac1nU_n+\frac{n-1}nV_n$, which is strictly larger than $V_n$ unless $U_n\in\{0,1\}$, that is, unless the $X_i$ are all $0$ or all $1$, in which case $U_n^2=V_n$.
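To complement this answer (a sketch I am adding, with arbitrary choices of $n$ and $\theta$), a simulation confirms that $V_n$ is unbiased for $\theta^2$, that the MLE $U_n^2$ is biased upward by $\theta(1-\theta)/n$, and that the identity $U_n^2=\frac1nU_n+\frac{n-1}nV_n$ holds exactly:

```python
# Numerical check: V_n is unbiased for theta^2, the MLE U_n^2 is biased
# upward by theta(1-theta)/n, and U_n^2 == U_n/n + (n-1)/n * V_n exactly.
# n, theta, reps are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
n, theta, reps = 5, 0.4, 500_000

X = rng.binomial(1, theta, size=(reps, n))
U = X.mean(axis=1)                               # U_n, unbiased for theta
V = n / (n - 1) * U * (U - 1 / n)                # V_n, unbiased for theta^2

print("E[V_n]   ~", V.mean())                    # close to theta^2 = 0.16
print("E[U_n^2] ~", (U**2).mean())               # close to 0.16 + 0.4*0.6/5 = 0.208
print("identity holds:", np.allclose(U**2, U / n + (n - 1) / n * V))
```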