
I derived the ML estimator for Rician-distributed data and I am trying to show the Cramér-Rao lower bound on the variance of the estimator $\hat{A}$.

$f(x_k\mid A,\sigma) = \frac{x_k}{\sigma^2}\exp\left(-\frac{x^2_k+A^2}{2\sigma^2}\right)I_0\left(\frac{x_k A}{\sigma^2}\right) \tag{Rician distribution}$

where $I_n(x) = \left( {1 \over 2}x \right )^{n} \sum_{k=0}^{\infty} \frac{\left( {1 \over 4}x^2 \right )^k}{k! \,(n+k)!}$ is the modified Bessel function of the first kind of order $n$ ($n \in \mathbb{N}$).

The ML estimator I derived satisfies the implicit equation ($\hat{A}$ appears on both sides, so there is no closed form):

$\hat{A} = \frac{1}{N} \sum_{k=1}^{N} x_k  \frac{  I_1 \left( \frac{x_k \hat{A}}{\sigma^2} \right )}{   I_0 \left( \frac{x_k \hat{A}}{\sigma^2} \right )}$
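The implicit equation above can be solved by fixed-point iteration. Here is a minimal numerical sketch, assuming SciPy is available; the sample sizes, seed, and variable names are my own choices, not part of the question:

```python
import numpy as np
from scipy.special import i0e, i1e
from scipy.stats import rice

# Hypothetical setup: true amplitude and noise level chosen for illustration.
rng = np.random.default_rng(0)
sigma, A_true, N = 1.0, 2.0, 10_000

# SciPy parametrizes the Rice distribution by shape b = A/sigma and scale = sigma.
x = rice.rvs(A_true / sigma, scale=sigma, size=N, random_state=rng)

# Fixed-point iteration  A <- (1/N) sum_k x_k I1(x_k A/s^2) / I0(x_k A/s^2).
# The exponentially scaled Bessels i0e/i1e share the factor exp(-z), so their
# ratio equals I1/I0 while avoiding overflow for large arguments.
A = x.mean()  # starting guess
for _ in range(200):
    z = x * A / sigma**2
    A_new = np.mean(x * i1e(z) / i0e(z))
    if abs(A_new - A) < 1e-12:
        break
    A = A_new
```

At moderate SNR the iteration is a contraction and converges quickly; `A` should land near the true amplitude.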

Unfortunately, the variance derivation looks like a challenging problem. Could anyone suggest an approach?

$\operatorname{Var}(\hat{A}) = \frac{1}{N^2}E_{\hat{A}} \left( \sum_{k,l=1}^N\left( x_k\frac{I_1}{I_0}-E_{\hat{A}}\left(x_k\frac{I_1}{I_0}\right) \right) \left( x_l\frac{I_1}{I_0}-E_{\hat{A}}\left(x_l\frac{I_1}{I_0}\right) \right) \right)$

where $I_0, I_1$ denote $I_0 \left( \frac{x_k A }{\sigma^2} \right )$ and $I_1 \left( \frac{x_k A }{\sigma^2} \right )$, respectively.
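Since $\hat{A}$ has no closed form, its variance can at least be probed by Monte Carlo: repeat the estimation over many synthetic samples and take the empirical variance. A hypothetical sketch (the replication counts and helper name are my own):

```python
import numpy as np
from scipy.special import i0e, i1e
from scipy.stats import rice

def ml_estimate(x, sigma, tol=1e-10, max_iter=200):
    """Solve A = mean(x * I1(xA/s^2)/I0(xA/s^2)) by fixed-point iteration."""
    A = x.mean()
    for _ in range(max_iter):
        z = x * A / sigma**2
        A_new = np.mean(x * i1e(z) / i0e(z))  # i1e/i0e ratio == I1/I0, overflow-safe
        if abs(A_new - A) < tol:
            break
        A = A_new
    return A

# Hypothetical experiment: 200 replications of N = 500 samples each.
rng = np.random.default_rng(1)
sigma, A_true, N, reps = 1.0, 2.0, 500, 200
est = np.array([
    ml_estimate(rice.rvs(A_true / sigma, scale=sigma, size=N, random_state=rng), sigma)
    for _ in range(reps)
])
var_hat = est.var(ddof=1)  # empirical variance of the ML estimates
```

This gives a numerical stand-in for the analytic variance one would compare against the Cramér-Rao bound.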

  • @leonbloy - Yes, there is no closed form for $\hat{A}$. (2012-11-17)

1 Answer


I hope I have understood what you are trying to do. If you are trying to show that it achieves the Cramér-Rao lower bound, then:

1) According to Mathematica, there seems to be no closed-form solution for the Fisher information, so even if the estimator does achieve the bound, the variance will have no closed form. Attempting an expression for the variance is not the way forward...

However

2) By part of a theorem of Amari [1] (I assume others have it too), if there is to be any unbiased estimator that achieves this bound at all, then it is necessary (but not sufficient) that

$\mathbb{E}\left[\frac{\partial^2}{\partial A^2}\log f(x;A)\frac{\partial}{\partial A}\log f(x;A) + \left(\frac{\partial}{\partial A}\log f(x;A) \right)^3\right]= \Gamma^{(m)}(A) = 0$

which, according to Mathematica, is not zero. Here's a plot of $\Gamma^{(m)}(A) $ for $\sigma = 1$

[Plot of $\Gamma^{(m)}(A)$ with respect to $A$]
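For readers without Mathematica, $\Gamma^{(m)}(A)$ can be evaluated by numerical quadrature. In the sketch below (function names are mine), the score is $s_1 = \partial_A \log f = (x\,I_1/I_0 - A)/\sigma^2$, and the derivative of the Bessel ratio follows from the recurrences $I_0' = I_1$, $I_1' = I_0 - I_1/z$, giving $r'(z) = 1 - r/z - r^2$ for $r = I_1/I_0$. I have not reproduced the exact Mathematica values, so the test only checks internal consistency via $E[s_1] = 0$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e

def score_moments(A, sigma=1.0):
    """Return (E[s], Gamma^{(m)}(A)) for the Rician score s = d/dA log f,
    computed by quadrature. E[s] should be ~0 (built-in sanity check)."""
    s2v = sigma ** 2

    def pieces(x):
        z = x * A / s2v
        r = i1e(z) / i0e(z)              # I1(z)/I0(z), overflow-safe ratio
        s1 = (x * r - A) / s2v           # d/dA log f
        rp = 1.0 - r / z - r * r         # d/dz [I1/I0] from Bessel recurrences
        s2 = -1.0 / s2v + (x / s2v) ** 2 * rp   # d^2/dA^2 log f
        # Rician pdf rewritten with i0e so the exponentials cancel:
        pdf = (x / s2v) * np.exp(-(x - A) ** 2 / (2 * s2v)) * i0e(z)
        return s1, s2, pdf

    hi = A + 12.0 * sigma  # tail beyond this is negligible
    mean_score = quad(lambda x: pieces(x)[0] * pieces(x)[2], 1e-9, hi)[0]
    gamma = quad(
        lambda x: (pieces(x)[1] * pieces(x)[0] + pieces(x)[0] ** 3) * pieces(x)[2],
        1e-9, hi,
    )[0]
    return mean_score, gamma
```

Evaluating `score_moments` on a grid of $A$ values reproduces a plot of $\Gamma^{(m)}(A)$ like the one above.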

Another condition (which, together with the previous one, is sufficient) for the bound to hold with equality is that the distribution belong to an exponential family, i.e. be expressible in the form (for some parameters $\xi_i$)

$ f(x;\xi)=\exp\left(C(x) + \sum_i \xi_i F_i(x) - \psi(\xi)\right)$

which is impossible, as it would require expressing the Bessel function $I_0(kAx)$ in the product form $\exp (C(x) - \log x) \exp (\phi(A))$, which you cannot do.

So, you won't be able to show that $\hat{A}$ reaches the lower bound, because no unbiased estimator of $A$ does. If it is biased (more likely than not), then you should be talking about mean squared error, not the variance.

[1]: S. Amari, Methods of Information Geometry, §3.5
  • The need for an exponential family is quite well known; I expect you will find it in any sufficiently detailed book on estimation. I'm not sure whether the $\Gamma$ condition is due to Amari or not: he doesn't cite anyone, but he is really bad at citing in general. Unfortunately I don't know the history of this field well enough to tell you more; I just happen to know how to do this from information geometry, and I'm not widely read in estimation theory in general. (2012-11-17)