
I have to provide a method of moments estimator for $θ$ based on a random sample $X_1,X_2,…,X_n$ from $X ∼ U(0,θ)$ $(θ > 0)$.

My first step in the assignment was to recognize that $U(0,θ)$ is the continuous uniform distribution on the interval $(a,b)$ with $a = 0$ and $b = θ$, written $X ∼ U(a,b)$.

Given a random sample $X_1,X_2,…,X_n$ from $X$, the method of moments equates the $k$-th population moment with the $k$-th sample moment: $$ E(X^k) = \frac{1}{n}\sum_{i=1}^{n}X_i^k $$ My distribution model has one unknown parameter $θ$, so I set up an equation for the first moment. $$ E(X) = \mu = \frac{1}{n}\sum_{i=1}^{n}X_i = \bar{X_n} $$ The expected value of a uniform distribution $X ∼ U(a,b)$ is $$ E(X) = \frac{a+b}{2} $$ Rearranging the first moment equation for $U(a,b)$, I want to find $b = θ$ where $a = 0$: $$ E(X) = \bar{X_n} = \frac{a+b}{2} \Rightarrow b = θ = 2 \bar{X_n} $$ So the estimator $\hat{θ}$ for $X ∼ U(0,θ)$ should be $2 \bar{X_n}$.
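As a quick numerical sanity check, here is a minimal sketch in R (the seed, $n = 20$, and $θ = 5$ are arbitrary choices for illustration):

 set.seed(1)
 th = 5;  n = 20
 x = runif(n, 0, th)   # one sample of size n from U(0, theta)
 2*mean(x)             # method of moments estimate, should be near theta = 5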

Is my solution correct or does anyone have corrections or feedback? Thank you.


Edits 1, 2, and 3:

I also have to determine whether the estimator $2\bar{X_n}$ is (1) unbiased and (2) consistent.

(1) An estimator $\hat{θ}$ for a parameter $θ$ based on a random sample of size $n$ is unbiased if $$ E_θ (\hat{θ_n}) = θ $$

So after substituting $\hat{θ_n} = 2\bar{X_n}$ into the formula, I get $$ E_θ(2\bar{X_n}) = E\Bigg(2 \cdot \frac{1}{n}\sum_{i=1}^{n}X_i \Bigg) = \frac{2}{n}\sum_{i=1}^{n}E(X_i) = \frac{2}{n} \cdot n \cdot E(X) = 2 \cdot E(X) $$

Rearranging the equation for the expected value delivers the same, as we can see: $$ \frac{a+b}{2} = E(X) \Rightarrow b = 2 \cdot E(X) - a \Rightarrow b = 2 \cdot E(X) $$ since $a = 0$.

So $E_θ (\hat{θ_n}) = θ$ is fulfilled for $\hat{θ_n} = 2\bar{X_n}$, because $θ = b = 2 \cdot E(X)$. Hence $\hat{θ_n}$ is unbiased.
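Concretely, the standard mean of $U(0,θ)$ makes the final step explicit: $$ E(X) = \int_0^θ x \cdot \frac{1}{θ}\,dx = \frac{θ}{2}, \qquad \text{so} \qquad E_θ(\hat{θ_n}) = 2 \cdot E(X) = 2 \cdot \frac{θ}{2} = θ. $$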

(2) The estimator $\hat{θ}$ is consistent for $θ$ if $$ \hat{θ_n} \xrightarrow{P} θ \quad \text{as} \quad n \rightarrow \infty $$

We can use the definition of convergence in probability, as in the Law of Large Numbers:

If $\{X_n\}$ is a sequence of random variables and $X$ is another random variable, then $X_n$ converges in probability to $X$ if, for every $\epsilon > 0$, $$ \lim\limits_{n \to \infty}P(|X_n-X| \ge \epsilon) = 0 $$ We set $X_n = \hat{θ_n}$ and $X = θ$. We want to show that $$ \lim\limits_{n \to \infty}P(|\hat{θ_n}-θ| \ge \epsilon) = 0 $$ $$ P(|\hat{θ_n}-θ| \ge \epsilon) = P(|2\bar{X_n}-2 \cdot E(X)| \ge \epsilon)= $$ $$ P\Bigg(\Big|2 \cdot \Bigg(\frac{1}{n}\sum_{i=1}^{n}E(X_i)\Bigg) - 2 \cdot E(X)\Big| \ge \epsilon \Bigg) = P\Bigg(\Big|\frac{2}{n} \cdot n \cdot E(X_i) - 2 \cdot E(X)\Big| \ge \epsilon \Bigg)= $$ $$ P(|2 \cdot E(X) - 2 \cdot E(X)| \ge \epsilon) = P(|0| \ge \epsilon) $$ So we get for $\epsilon > 0$ $$ P(|0| \ge \epsilon) = 0 $$ The probability is $0$ because $|0|$ is not greater than or equal to a positive number $\epsilon$. Therefore the estimator is consistent.
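As an empirical illustration (a minimal R sketch, not a proof; $\epsilon = 0.5$, $θ = 5$, and the sample sizes are arbitrary choices), the probability $P(|\hat{θ_n}-θ| \ge \epsilon)$ does appear to shrink as $n$ grows:

 set.seed(2)
 th = 5;  eps = 0.5;  m = 10^4          # m simulated samples per sample size
 for (n in c(10, 100, 1000)) {
   xbar = rowMeans(matrix(runif(m*n, 0, th), nrow=m))
   cat("n =", n, " estimated probability:", mean(abs(2*xbar - th) >= eps), "\n")
 }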

I am quite unsure whether my solution for the consistency of the estimator is right. Can anybody review my approach, please?

  • 0
    Your approach is correct. (2017-01-14)
  • 1
    Each supplementary question in your Edit has a direct answer, obvious to anybody having read at least once in their life a textbook giving the relevant definitions. Did you not? (2017-01-14)
  • 0
    @Did The definition I added in my first edit is from a book I am using. I added my current approach. Can you review it, please? (2017-01-14)
  • 0
    It seems absurd. To begin with, what does $$ P\Bigg(\Big|2 \cdot \Bigg(\frac{1}{n}\sum_{i=1}^{n}E(X_i)\Bigg) - 2 \cdot E(X)\Big|\Bigg)$$ even mean? (2017-01-15)
  • 0
    @Did I simplified the last equation, leaving out the inequality for $\epsilon$. I think I didn't get it. … (2017-01-16)
  • 1
    The formula which I copied in my last comment is structured like $$P(A)$$ for some event $A$, except that there is no event there, since you basically write something like $$P(42.1)$$ (2017-01-16)
  • 0
    @Did Oh, I see. Yes, I should not leave it out. But I added the inequality in the probability after simplifying and calculating it to $0$. (2017-01-16)
  • 0
    Then simply correct the absurd passage. Until you do, there is no way to help you. (2017-01-16)
  • 0
    @Did Now I have added the full inequality event in the probability. (2017-01-16)
  • 1
    And we are making some progress, because now we know that (at least part of) the problem is that you think the events $$A_n=\left\{|2\bar{X_n}-2 \cdot E(X)| \ge \epsilon\right\}$$ and $$A'_n=\left\{\Big|2 \cdot \Bigg(\frac{1}{n}\sum_{i=1}^{n}E(X_i)\Bigg) - 2 \cdot E(X)\Big| \ge \epsilon\right\}$$ coincide. Do you see why, in fact, $A_n\ne A'_n$? (2017-01-16)
  • 0
    @Did I might by mistake have thought it is the same as what I used for the unbiasedness proof: $$ E_θ(2\bar{X_n}) = E\Bigg(2 \cdot \frac{1}{n}\sum_{i=1}^{n}X_i \Bigg) = \frac{2}{n}\sum_{i=1}^{n}E(X_i) $$ $A_n$ does not contain an expected value, but I have used one in $A_n'$ for the sum. … (2017-01-16)

1 Answer


The method of moments estimator is $\hat \theta_n = 2\bar X_n,$ and it is unbiased. Its variance is finite and decreases to $0$ as $n$ increases, so it is also consistent; that is, it converges in probability to $\theta.$

I have not checked your proof of consistency in detail, but it seems inelegant and incorrect (for one thing, the $\epsilon$ disappears in the second line). You should be able to use a straightforward application of Chebyshev's inequality to show that $\lim_{n \rightarrow \infty}P(|\hat \theta_n - \theta| <\epsilon) = 1.$
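For reference, here is a sketch of that Chebyshev argument; it assumes the standard facts that $E(\hat \theta_n) = \theta$ and $Var(X) = \theta^2/12$ for $U(0,\theta),$ so that $Var(\hat \theta_n) = 4\,Var(\bar X_n) = \theta^2/(3n)$: $$ P(|\hat \theta_n - \theta| \ge \epsilon) \le \frac{Var(\hat \theta_n)}{\epsilon^2} = \frac{\theta^2}{3n\epsilon^2} \longrightarrow 0 \quad \text{as } n \to \infty. $$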

However, $\hat \theta_n$ does not have the minimum variance among unbiased estimators. The maximum likelihood estimator is the maximum of the $n$ values $X_i$ (often denoted $X_{(n)}$). The estimator $T = cX_{(n)},$ where $c = (n+1)/n,$ is unbiased and has minimum variance among unbiased estimators (UMVUE).
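A sketch of why this particular $c$ works: for $U(0,\theta)$ the maximum $X_{(n)}$ has density $f(x) = nx^{n-1}/\theta^n$ on $(0,\theta),$ so $$ E(X_{(n)}) = \int_0^\theta x \cdot \frac{nx^{n-1}}{\theta^n}\,dx = \frac{n}{n+1}\,\theta, \qquad E\left(\frac{n+1}{n}\,X_{(n)}\right) = \theta. $$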

Both estimators are illustrated below for $n = 10$ and $\theta = 5$ by simulations in R statistical software. With $100{,}000$ iterations, the means and variances should be accurate to about two decimal places. They are not difficult to find analytically.
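Indeed, assuming the standard variance formulas for $U(0,\theta),$ the analytic values for $n = 10$ and $\theta = 5$ are $$ Var(2\bar X_n) = \frac{\theta^2}{3n} = \frac{25}{30} \approx 0.833, \qquad Var\left(\frac{n+1}{n}\,X_{(n)}\right) = \frac{\theta^2}{n(n+2)} = \frac{25}{120} \approx 0.208, $$ in close agreement with the simulation output below.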

 m = 10^5;  n = 10;  th = 5
 x = runif(m*n, 0, th)
 DTA = matrix(x, nrow=m)  # m x n matrix, each row a sample of 10
 a = rowMeans(DTA)        # vector of m sample means
 w = apply(DTA, 1, max)   # vector of m maximums
 MM = 2*a;  UMVUE = ((n+1)/n)*w
 mean(MM);  var(MM)
 ## 5.003658    # consistent with unbiasedness of MM
 ## 0.8341769   # relatively large variance
 mean(UMVUE); var(UMVUE)
 ## 5.002337    # consistent with unbiasedness of UMVUE
 ## 0.207824    # relatively small variance

The histograms below illustrate the larger variance of the method of moments estimator.

[Figure: histograms of the simulated distributions of the two estimators, showing the larger spread of the method of moments estimator]

  • 0
    Did you review my unbiasedness proof? Is it correct? Thank you for your comprehensive answer. (2017-01-15)
  • 0
    Proof of unbiasedness OK: it summarizes to $E(\hat \theta) = E(2\bar X) = 2E(\bar X) = 2(\theta/2) = \theta.$ (2017-01-16)
  • 0
    No idea what theorems you have studied: one proof of consistency summarizes as $\bar X_n \stackrel{P}{\rightarrow} \mu_X = \theta/2,$ so $\hat \theta_n = 2\bar X_n \stackrel{P}{\rightarrow} 2\mu_X = \theta.$ (2017-01-16)