
The maximum a posteriori (MAP) estimator is a Bayes estimator under the 0-1 loss function and a given prior distribution. I was wondering how to give an estimate that is best in some sense when the prior distribution is itself random.

For example, suppose $T_1(X)$ is the maximum a posteriori estimator of the parameter $\theta$ under a prior distribution $p_1$ on $\theta$, and $T_2(X)$ is the maximum a posteriori estimator under a prior distribution $p_2$ on $\theta$, where the prior $p_1$ occurs with probability $q$ and $p_2$ with probability $1-q$.
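(One way to make this precise, though I am not sure it is the intended reading: if the prior is drawn at random before the data are observed, the setup appears to be equivalent to using the single mixture prior $$\pi(\theta) = q\,p_1(\theta) + (1-q)\,p_2(\theta),$$ and the MAP estimator under $\pi$ need not coincide with either $T_1(X)$ or $T_2(X)$.)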

How would one determine the best non-randomized estimator, in some reasonable sense?

How would one determine the best randomized estimator? (Here a "randomized" estimator is represented by a distribution on the set of all possible estimators.)

Are there any references? Thanks and regards!

  • So you collapsed the distributions on priors into a single prior. I don't know whether that is an equivalent situation. If it is, I wonder how to find some "best" estimator for this new prior based on $T_1$ and $T_2$? (2011-09-12)

1 Answer


One plausible way is to remove the uncertainty associated with the priors by integrating them out. That is, to find the formal Bayes rule we minimize the posterior risk $r(\delta|x)=\int_{\Omega} L(\theta,\delta(x))\,d\mu_{\Theta|X}(\theta|x)$, where $\Omega$ is the parameter space and $\mu_{\Theta|X}(\theta|x)$ is the posterior distribution.

Now suppose we are dealing with random priors. That is, suppose $\mathcal{P}$ is the class of all priors on the measurable space $(\Omega,\tau)$, and let $\nu$ be a probability measure on the measurable space $(\mathcal{P},\alpha)$. Writing $r_p(\delta|x)$ for the posterior risk computed under the prior $p$, we then minimize $\int_{\mathcal{P}} r_p(\delta|x)\,d\nu(p)$.
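Here is a minimal numerical sketch of this recipe, assuming a discrete parameter grid, a Gaussian likelihood, discretized Gaussian priors, and the two-prior $\nu$ from the question (all concrete numbers are illustrative, not from the original post):

```python
import numpy as np
from scipy.stats import norm

# Discrete grid of parameter values (illustrative)
theta = np.linspace(-3.0, 3.0, 601)

# Two discretized priors on theta; locations/scales are assumptions
p1 = norm.pdf(theta, loc=-1.0, scale=1.0)
p1 /= p1.sum()
p2 = norm.pdf(theta, loc=1.0, scale=1.0)
p2 /= p2.sum()
q = 0.3  # nu puts mass q on p1 and 1 - q on p2

# One observation from an assumed model X | theta ~ N(theta, 1)
x = 0.5
lik = norm.pdf(x, loc=theta, scale=1.0)

def posterior(prior):
    """Discretized posterior proportional to prior times likelihood."""
    post = prior * lik
    return post / post.sum()

post1 = posterior(p1)
post2 = posterior(p2)

# Under 0-1 loss on the grid, the posterior risk of deciding d is
# r_p(d | x) = 1 - posterior_p(d | x), so the nu-averaged risk is
# q * (1 - post1(d)) + (1 - q) * (1 - post2(d)); minimize over d.
avg_risk = q * (1.0 - post1) + (1.0 - q) * (1.0 - post2)
estimate = theta[np.argmin(avg_risk)]
print("estimate minimizing the nu-averaged posterior risk:", estimate)
```

On the grid this picks the mode of $q\,\mu_1(\cdot|x)+(1-q)\,\mu_2(\cdot|x)$. Note that because the weights $q$ and $1-q$ are not updated by the data, this is not the same as the MAP under the collapsed mixture prior mentioned in the comment above, which would weight the two posteriors by their marginal likelihoods.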

Note: Whether you are dealing with a non-randomized decision rule or a randomized one doesn't alter the method of estimation.