
The definition of an unbiased estimator of $\theta$ is a statistic $\hat\theta \equiv \hat\theta(X_1,\dots,X_n)$ whose expected value equals $\theta$: $\Bbb E\hat\theta = \theta$ for all $\theta \in \Theta$, where $X_i \sim P_\theta$.

The question is: if we want to take the expected value of $\hat\theta$, we should consider it as a random variable on some probability space $(\Omega_{\hat\theta}, \Sigma_{\hat\theta}, P_{\hat\theta})$. The expected value is then an integral with respect to the probability measure: $$\int_{\Omega_{\hat\theta}} \hat\theta(\omega) \,P_{\hat\theta}(d\omega)$$

So, what is $P_{\hat\theta}$?

My guess is that for all $i$, $X_i$ is a random variable from $(\Omega_\theta, \Sigma_\theta, P_\theta)$ to $\Bbb R$, so $X_{[n]} = (X_1,\dots,X_n)$ is a random vector from $(\underbrace{\Omega_\theta\times\dots\times\Omega_\theta}_\text{n times}, \Sigma_{X_{[n]}}, P_{X_{[n]}})$ to $\Bbb R^n$.
Then $\hat\theta$ is a statistic, so it is a function of $(X_1,\dots,X_n)$, and it can be considered as a random variable on $(\Bbb R^n, \Sigma_{\hat\theta(X_{[n]})}, P_{\hat\theta(X_{[n]})})$ (correct me if I'm wrong).

But what is $P_{\hat\theta(X_{[n]})}$? Is it equal to $P_\theta$ or not, and how does one prove it?

1 Answer


Some parts of what you wrote do not make sense.

You are right in saying that each $X_i$ is a random variable from $(\Omega_{\theta},\Sigma_{\theta},P_{\theta})$ to $\mathbb{R}$. $X_{[n]}$ should not be defined on a new probability space but on the same probability space as each $X_i$, namely $(\Omega_{\theta},\Sigma_{\theta},P_{\theta})$.

If we consider each $X_i:\Omega_{\theta}\to\mathbb{R}$ as a measurable map then $\hat{\theta}$ is actually a composition of two maps $f:\mathbb{R}^n\to\mathbb{R}$ and $X_{[n]}:\Omega_{\theta}\to\mathbb{R}^n$. In other words $\hat{\theta}=f\circ X_{[n]}:\Omega_{\theta}\to\mathbb{R}$. What $f$ actually is will depend on what estimator you want. For example, if you want to estimate the mean in an unbiased way then you might choose $f$ to be given by $f(x_1,\dots,x_n)=\frac{1}{n}\sum_{i=1}^n x_i$.
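To make the composition $\hat{\theta}=f\circ X_{[n]}$ concrete, here is a minimal sketch in Python. The choice of $P_{\theta}$ as a normal distribution with mean $\theta$ and the values of `theta` and `n` are assumptions for illustration only; the point is that the statistic is just an ordinary function `f` on $\mathbb{R}^n$ applied to one realization of the sample vector.

```python
import random

random.seed(0)  # fix the randomness for reproducibility

# f: R^n -> R, here the sample mean (an unbiased estimator of the mean)
def f(xs):
    return sum(xs) / len(xs)

# One realization of X_[n] = (X_1, ..., X_n), each X_i ~ N(theta, 1)
# (a hypothetical choice of P_theta for illustration)
theta = 2.0
n = 10
x_n = [random.gauss(theta, 1.0) for _ in range(n)]

# One realization of the estimator: (f ∘ X_[n])(ω)
theta_hat = f(x_n)
```

Note that `theta_hat` is a single number: evaluating $\hat{\theta}$ at one point $\omega\in\Omega_{\theta}$ means drawing one sample vector and applying $f$ to it.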

The single probability space $(\Omega_{\theta},\Sigma_{\theta},P_{\theta})$ on which each $X_i$ is defined is the only probability space we need. The expected value of $\hat{\theta}$ is just $\mathbb{E}\hat{\theta}=\int_{\Omega_{\theta}}\hat{\theta}\,dP_{\theta}$, the integral over that one space.
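The integral $\int_{\Omega_{\theta}}\hat{\theta}\,dP_{\theta}$ can be approximated numerically to check unbiasedness: averaging $f(X_{[n]})$ over many independent draws of the sample vector estimates $\mathbb{E}\hat{\theta}$, which should come out close to $\theta$. As above, the normal choice of $P_{\theta}$ and the numbers `theta`, `n`, `reps` are illustrative assumptions.

```python
import random

random.seed(0)

theta, n, reps = 2.0, 10, 20000

# f: R^n -> R, the sample mean
def f(xs):
    return sum(xs) / len(xs)

# Monte Carlo approximation of E[theta_hat] = ∫ f(X_[n](ω)) dP_theta(ω):
# average f over many independent realizations of the sample vector.
draws = [f([random.gauss(theta, 1.0) for _ in range(n)]) for _ in range(reps)]
approx_expectation = sum(draws) / reps  # should be close to theta
```

This is only a numerical sanity check, not a proof; the proof that the sample mean is unbiased is the one-line computation $\mathbb{E}\big[\frac1n\sum_i X_i\big]=\frac1n\sum_i\mathbb{E}X_i=\theta$ by linearity of the integral over $(\Omega_{\theta},\Sigma_{\theta},P_{\theta})$.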