
Let $X_1,X_2,X_3,\ldots,X_n$ be a random sample from a $\mathrm{Bernoulli}(\theta)$ distribution with probability function

$P(X=x) = \theta^x(1 - \theta)^{1 - x}$, $x=0,1$; $0<\theta<1$.

Is $\hat\theta(1 - \hat\theta)$ an unbiased estimator of $\theta(1 - \theta)$? Prove or disprove.

I tried $x=\theta(1-\theta)$, $\bar x=\hat\theta(1-\hat\theta)$,

$E[\bar x]=x$

$E[\bar x(1-\bar x)]=E[\bar x]-E[\bar x^2]$

but I'm not sure what to do now or how to prove it. I have an exam tomorrow so any help is really appreciated! Hopefully this is the last stats question I'll have to ask!

  • Could you use $tex$ to edit your post? It is difficult to read. (2012-08-08)
  • Relate it to the variance of $\hat\theta$. (2012-08-08)
  • @SeyhmusGüngören: Please never use math mode for emphasizing words. *tex* looks better than $tex$... (2012-08-08)
  • I think $\TeX$ looks even better. (2012-08-08)
  • @SeyhmusGüngören How do I use tex? Or where do I learn to use it? People keep having to edit my questions. (2012-08-12)
  • @Panda Check this: http://en.wikipedia.org/wiki/TeX (2012-08-12)
  • @SeyhmusGüngören Just used TeX in a question! Looks so much better! Hopefully people won't have to edit my questions too much anymore :) (2012-08-14)
  • Ehehehe funny.. happy to hear.. It is okay if they edit as long as you use tex :) (2012-08-14)

1 Answer


$\newcommand{\var}{\operatorname{var}}$ $\newcommand{\E}{\mathbb{E}}$

Your notation is confusing: you use $x$ to refer to two different things, and you seem to use the lower-case $\bar x$ to refer to the sample mean after using capital letters to refer to random variables initially.

Remember that the variance of a random variable is equal to the expected value of its square minus the square of its expected value. That enables us to find the expected value of its square if we know its variance and its expected value.
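In symbols, for any random variable $X$ with finite variance:
$$ \var(X) = \E(X^2) - (\E(X))^2, \qquad \text{so} \qquad \E(X^2) = \var(X) + (\E(X))^2. $$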

I surmise that by $\hat\theta$ you mean the sample mean $(X_1+\cdots+X_n)/n$. That makes $\hat\theta$ an unbiased estimator of $\theta$, since each $X_i$ has mean $\theta$.

So $\E(\hat\theta) = \theta$ and $$ \var(\hat\theta) = \var\left( \frac{X_1+\cdots+X_n}{n} \right) = \frac{1}{n^2}\var(X_1+\cdots+X_n) = \frac{1}{n^2}(\var(X_1)+\cdots+\var(X_n)) $$ $$ =\frac{1}{n^2}\cdot n\var(X_1) = \frac 1 n \var(X_1) = \frac 1 n \theta(1-\theta). $$
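(Here $\var(X_1) = \theta(1-\theta)$ because a Bernoulli variable satisfies $X_1^2 = X_1$, so $\E(X_1^2) = \E(X_1) = \theta$ and hence $\var(X_1) = \theta - \theta^2 = \theta(1-\theta)$.)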

Now we want $\mathbb{E}(\hat\theta(1-\hat\theta))$: $$ \mathbb{E}(\hat\theta(1-\hat\theta)) = \mathbb{E}(\hat\theta) - \mathbb{E}(\hat\theta^2) = \theta - \Big( \var(\hat\theta) + \left(\E(\hat\theta)\right)^2 \Big) = \theta - \left( \frac{\theta(1-\theta)}{n} + \theta^2 \right) $$ $$ = \frac{n\theta - \theta(1-\theta) - n\theta^2}{n} = \frac{n-1}{n}\theta(1-\theta). $$

From this you can draw a conclusion about whether $\hat\theta(1-\hat\theta)$ is an unbiased estimator of $\theta(1-\theta)$.
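If you want a quick numerical check of that last formula, here is a minimal Monte Carlo sketch (Python with NumPy; the particular values of $\theta$, $n$, and the trial count are just illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, trials = 0.3, 10, 200_000

    # Each row is one sample of size n from Bernoulli(theta)
    samples = rng.binomial(1, theta, size=(trials, n))
    theta_hat = samples.mean(axis=1)         # sample mean, i.e. the estimator of theta
    estimates = theta_hat * (1 - theta_hat)  # the estimator of theta(1 - theta)

    print("Monte Carlo mean of estimator:", estimates.mean())
    print("(n-1)/n * theta(1-theta):     ", (n - 1) / n * theta * (1 - theta))
    print("theta(1-theta):               ", theta * (1 - theta))

The first two printed numbers should agree closely ($\approx 0.189$ for these values), while the true value $\theta(1-\theta)=0.21$ differs by the factor $(n-1)/n$, which tends to $1$ as $n\to\infty$.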

(By the way, $\hat\theta(1-\hat\theta)$ is the maximum-likelihood estimator of $\theta(1-\theta)$, by the invariance property of maximum likelihood, since $\hat\theta$ is the MLE of $\theta$.)

  • Asymptotically unbiased estimator. (2012-08-08)