
I have a random vector $x=[x_1,x_2,...,x_n]^T$ with prior distribution Normal(0, I).

I have $m$ linear constraints, summarized in matrix form as $Ax=y$, where $A$ is an $m$ by $n$ matrix and $y$ is an $m$ by 1 vector. Using the Lagrange multiplier technique, I derived that, subject to the constraints, the minimum of $\sum_{j=1}^{n}x_{j}^{2}$ occurs at $x_{opt}=Wy$, where $W=A^TQ\Lambda^{-1}Q^T$, $AA^T=Q\Lambda Q^T$, $Q^TQ=I$, and $\Lambda$ is diagonal. My questions are: what is $x_{opt}$ called? Does it make sense to call it $x_{ML}$ (ML for maximum likelihood)? Is the conditional distribution of $x$ given the constraints also normal? If so, isn't $x_{opt}$ the mean of that conditional distribution, and should I therefore call $x_{opt}$ the expected value of $x$?
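As a sanity check of the derivation, here is a minimal NumPy sketch (the matrix $A$ and vector $y$ below are arbitrary placeholders, not from the problem) confirming that $x_{opt}=Wy$ satisfies the constraints and matches the minimum-norm solution returned by np.linalg.lstsq:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))   # m-by-n constraint matrix, full row rank w.p. 1
y = rng.standard_normal(m)        # m-by-1 right-hand side

# Eigendecomposition A A^T = Q Lambda Q^T (symmetric, so eigh applies)
lam, Q = np.linalg.eigh(A @ A.T)
W = A.T @ Q @ np.diag(1.0 / lam) @ Q.T   # W = A^T (A A^T)^{-1}
x_opt = W @ y

# x_opt satisfies the constraints and equals the minimum-norm solution
assert np.allclose(A @ x_opt, y)
x_lstsq = np.linalg.lstsq(A, y, rcond=None)[0]  # minimum-norm least squares
assert np.allclose(x_opt, x_lstsq)
```

Note that $W$ is just the Moore-Penrose pseudoinverse of $A$ when $A$ has full row rank, written out via the eigendecomposition of $AA^T$.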

Please see this question for a related problem: https://stats.stackexchange.com/questions/9071/intuitive-explanation-of-contribution-to-sum-of-two-normally-distributed-random-v

1 Answer


What you have calculated using Lagrange multipliers as $x_{opt}$ is essentially the linear MMSE estimate of the vector $x$ in terms of the vector $y$. Since $x$ is Gaussian and $y=Ax$ is a linear function of $x$, the two are jointly Gaussian, so the conditional distribution of $x$ given $y$ is also Gaussian. Because of this, the linear MMSE estimate coincides with the Bayesian MMSE estimate, which is the conditional mean of $x$ given $y$.
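To make the last step concrete, here is a short derivation sketch under the stated prior (a standard Gaussian conditioning identity, filled in here rather than taken from the original post): since $x \sim \mathcal{N}(0, I)$ and $y = Ax$, we have $\operatorname{Cov}(x, y) = A^T$ and $\operatorname{Cov}(y) = AA^T$, so

$$E[x \mid y] = \operatorname{Cov}(x, y)\,\operatorname{Cov}(y)^{-1}\,y = A^T\left(AA^T\right)^{-1} y = A^T Q \Lambda^{-1} Q^T y = Wy,$$

which is exactly the $x_{opt}$ from the question.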

To sum up, you should refer to $x_{opt}$ as the MMSE (or linear MMSE) estimate of $x$ given $y$, rather than the ML estimate.

  • MMSE: http://en.wikipedia.org/wiki/Minimum_mean_squared_error (2011-05-03)