1

$X_1,X_2,\ldots,X_n$ are i.i.d. $B(1,\theta)$ random variables, where $0<\theta<1$. Let $w = 1$ if $\sum_i X_i = n$ and $w = 0$ otherwise. What is the best unbiased estimator of $\theta^2$?

Attempt:

Would it be $w = 1$ if $\sum_{i=1}^n X_i = 2$?

  • 0
    It is not hard to use TeX. Enclose formulae between dollar signs. Instead of Xn, write `$X_n$` to render it as $X_n$. There seems to be a typo in the question. You said `Let w=1 if sum(Xi..Xn)=n`; did you mean "Let $w=1$ if $\sum_{i=1}^n X_i = n$"?
  • 0
    I also assume `B(1,theta)` stands for the binomial distribution with one trial and success probability $\theta$, which is really just a Bernoulli distribution with parameter $\theta$.
  • 0
    Estimating $\theta$ (which is in the interval $(0,1)$) by $w$ (which seems to be either $0$ or $1$) is odd. And note that the mean of $w$ is $\theta^n$, which is not $\theta^2$ unless $n=2$ (or for one specific value of $\theta$ if $n\geq 3$).
  • 0
    The question begs for an application of the Rao–Blackwell theorem and the Lehmann–Scheffé theorem. There isn't any reasonable way of showing something is the unique best unbiased estimator other than either that or essentially proving this case of those theorems from scratch.

2 Answers

3

Use the Rao–Blackwell theorem. Observe that $E(X_1X_2) = \theta^2$. The minimal sufficient statistic is $X_1+\cdots+X_n$, so the estimator you want is $E(X_1 X_2 \mid X_1+\cdots+X_n)$. Since $X_1X_2$ takes only the values $0$ and $1$,
$$
\begin{align}
E(X_1 X_2 \mid X_1+\cdots+X_n=x) & = \Pr(X_1 X_2 =1 \mid X_1+\cdots+X_n=x) \\[6pt]
& = \frac{\Pr(X_1 = X_2 =1\ \&\ X_3+\cdots+X_n=x-2)}{\Pr(X_1+\cdots+X_n=x)} \\[6pt]
& = \frac{\theta^2\dbinom{n-2}{x-2}\theta^{x-2}(1-\theta)^{n-x}}{\dbinom{n}{x}\theta^x(1-\theta)^{n-x}} = \frac{\dbinom{n-2}{x-2}}{\dbinom nx} = \frac{x(x-1)}{n(n-1)}.
\end{align}
$$
Therefore the Rao–Blackwell estimator is
$$
\frac{(X_1+\cdots+X_n)(X_1+\cdots+X_n-1)}{n(n-1)}.
$$
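
For readers who like to double-check numerically, here is a small Monte-Carlo sketch (not part of the proof; the values of $n$, $\theta$, and the number of simulations are arbitrary illustrative choices) confirming that $S(S-1)/(n(n-1))$, with $S=X_1+\cdots+X_n$, averages to $\theta^2$:

```python
import numpy as np

# Monte-Carlo sanity check (not part of the proof) that the Rao-Blackwell
# estimator S(S-1)/(n(n-1)) is unbiased for theta^2.  The values of n,
# theta and n_sim are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, theta, n_sim = 10, 0.3, 200_000

# S = X_1 + ... + X_n has a Binomial(n, theta) distribution.
S = rng.binomial(n, theta, size=n_sim)

# Rao-Blackwell estimator evaluated at each simulated value of S.
rb = S * (S - 1) / (n * (n - 1))

print("average of estimator:", rb.mean())  # close to 0.09
print("theta^2:             ", theta**2)   # exactly 0.09
```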

If you can then prove that the sufficient statistic is complete, then by the Lehmann–Scheffé theorem, you have the unique best unbiased estimator.

To show that, suppose $g(X_1+\cdots+X_n)$ is an unbiased estimator of $0$. We then have $$ 0=E(g(X_1+\cdots+X_n)) = \sum_{x=0}^n g(x) \binom nx \theta^x (1-\theta)^{n-x} $$ for all values of $\theta\in[0,1]$. But a polynomial in $\theta$ can be $0$ everywhere in an interval only if the coefficients are all $0$, and that happens only if $g(x)=0$ for all $x\in\{0,1,2,\ldots,n\}$. Hence $X_1+\cdots+X_n$ is a statistic that admits no nontrivial unbiased estimators of $0$, i.e. a complete statistic, so the Lehmann–Scheffé theorem is applicable.
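
In the same spirit as the polynomial argument, one can verify symbolically, for a fixed illustrative $n$, that the expectation of the estimator above is exactly the polynomial $\theta^2$ (a sketch using `sympy`; the choice $n=5$ is arbitrary):

```python
import sympy as sp

# Exact (symbolic) check, in the spirit of the polynomial argument above:
# E[S(S-1)/(n(n-1))] is a polynomial in theta that collapses to theta^2.
# n = 5 is an arbitrary illustrative choice.
theta = sp.symbols('theta')
n = 5

expectation = sum(
    sp.Rational(x * (x - 1), n * (n - 1))                   # estimator at S = x
    * sp.binomial(n, x) * theta**x * (1 - theta)**(n - x)   # Pr(S = x)
    for x in range(n + 1)
)

print(sp.expand(expectation))  # prints: theta**2
```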

  • 0
    Is $X_1+\cdots+X_n$ a sufficient statistic for $\theta^2$ or for $\theta^n$?
  • 0
    It's a sufficient statistic for any one-to-one function of $\theta$. For $\theta\in[0,1]$, the function $\theta \mapsto \theta^n$ is one-to-one.
  • 1
    I think the denominator should be $\binom nx$.
  • 0
    Correct. I've fixed it. The bottom line remains intact.
1

An unbiased estimator of $\theta$ based on an i.i.d. sample $(X_i)_{1\leqslant i\leqslant n}$ of Bernoulli random variables with parameter $\theta$ is $U_n=\frac1n\sum\limits_{i=1}^nX_i$. If $n\geqslant2$, an unbiased estimator of $\theta^2$ is $V_n=\frac1{n(n-1)}\sum\limits_{i\ne j}X_iX_j=\frac{n}{n-1}U_n\left(U_n-\frac1n\right)$. These are both unbiased because $\mathrm E(X_i)=\theta$ for every $i$ and $\mathrm E(X_iX_j)=\theta^2$ for every $i\ne j$. The maximum likelihood estimator of $\theta$ is $U_n$, hence the maximum likelihood estimator of $\theta^2$ is $U_n^2=\frac1nU_n+\frac{n-1}nV_n$, which is strictly larger than $V_n$ unless the $X_i$ are all equal (all $0$ or all $1$), in which case $U_n^2=V_n$.
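
A small numerical sketch of these claims ($n$, $\theta$, and the simulation size below are arbitrary illustrative choices): since every $X_i\in\{0,1\}$, one has $\sum_{i\ne j}X_iX_j = S(S-1)$ with $S=\sum_i X_i$, so $V_n=S(S-1)/(n(n-1))$. The snippet checks that $V_n$ averages to $\theta^2$, that $U_n^2$ averages to $\theta^2+\theta(1-\theta)/n$ (upward bias), and that the identity $U_n^2=\frac1nU_n+\frac{n-1}nV_n$ holds pointwise.

```python
import numpy as np

# Numerical illustration of the relations above (n, theta and n_sim are
# arbitrary illustrative choices).  S denotes X_1 + ... + X_n.
rng = np.random.default_rng(1)
n, theta, n_sim = 10, 0.3, 200_000

S = rng.binomial(n, theta, size=n_sim)
U = S / n                                # U_n, the sample mean
V = S * (S - 1) / (n * (n - 1))          # V_n, unbiased for theta^2

print("mean of V_n:  ", V.mean())        # close to theta^2 = 0.09
print("mean of U_n^2:", (U**2).mean())   # close to theta^2 + theta*(1-theta)/n
print("identity U_n^2 = U_n/n + (n-1)/n * V_n:",
      np.allclose(U**2, U / n + (n - 1) / n * V))
```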

  • 0
    This answer does not say how you know $V_n$ is the unique best unbiased estimator of $\theta^2$. The question begs for an application of the Rao–Blackwell theorem and the Lehmann–Scheffé theorem. That's the standard thing for this situation.