
Suppose we are given a random vector $\mathbf x \in \mathbf{R}^n$ that has a multivariate normal distribution $\mathbf x \sim N(\mathbf \mu_0, \mathbf \Sigma_0)$, and we are also given a positive semidefinite matrix $\mathbf \Sigma_1\in \mathbf {R}^{n \times n}$.

Does there always exist a matrix $\mathbf A$ such that $\mathbf y = \mathbf A\mathbf x \sim N(\mathbf A \mathbf \mu_0, \mathbf\Sigma_1)$? In other words, can we always find a linear transformation of $\mathbf x$ such that the resulting covariance matrix equals any given positive semidefinite matrix?

We know that the covariance matrix of $\mathbf{Ax}$ is going to be $\mathbf{A\Sigma_0A^T}$, so it seems that this question boils down to solving for $\mathbf A$ in the equation $\mathbf{A\Sigma_0A^T}= \mathbf{\Sigma_1}$, which I find myself not knowing how to do.

How can I solve this equation? Also, if my notation or terminology here is incorrect (or if there are conditions I need to solve this problem), please let me know.

1 Answer


If $\Sigma_0$ and $\Sigma_1$ are positive definite then there exist lower triangular, non-singular matrices $C_0$ and $C_1$ such that

$$C_0 C_0^T = \Sigma_0 , \\ C_1 C_1^T = \Sigma_1.$$

We also have $C_0^{-1} \Sigma_0 (C_0^T)^{-1} = I.$

Hence,

$$C_1C_0^{-1} \Sigma_0 (C_0^T)^{-1}C_1^T = C_1C_0^{-1} \Sigma_0 (C_1C_0^{-1})^{T} = \Sigma_1.$$
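A minimal NumPy sketch of this construction, with example covariance matrices of my own choosing (any positive definite pair would do):

```python
import numpy as np

# Example positive definite covariance matrices (assumed for illustration).
Sigma0 = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
Sigma1 = np.array([[1.0, -0.3],
                   [-0.3, 2.0]])

# Lower-triangular Cholesky factors: C0 @ C0.T == Sigma0, C1 @ C1.T == Sigma1.
C0 = np.linalg.cholesky(Sigma0)
C1 = np.linalg.cholesky(Sigma1)

# The transformation A = C1 @ C0^{-1} satisfies A @ Sigma0 @ A.T == Sigma1.
A = C1 @ np.linalg.inv(C0)

# Verify the identity up to floating-point error.
assert np.allclose(A @ Sigma0 @ A.T, Sigma1)
```

In practice one would solve the triangular system (e.g. `scipy.linalg.solve_triangular`) instead of forming `np.linalg.inv(C0)` explicitly, but the explicit inverse keeps the sketch close to the formula above.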

The construction can break down when the matrices are only positive semidefinite: if $\Sigma_0$ is singular, $C_0$ is not invertible (and the Cholesky factorization itself may fail). A common remedy is to add a small perturbation, e.g. a small multiple of the identity, to restore positive definiteness.
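A sketch of that remedy (the "jitter" trick), assuming a singular example matrix and a tolerance of my own choosing:

```python
import numpy as np

# A rank-1, positive semidefinite matrix: np.linalg.cholesky would fail on it.
Sigma0 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])

# Adding a small multiple of the identity ("jitter") makes it positive definite.
eps = 1e-8
C0 = np.linalg.cholesky(Sigma0 + eps * np.eye(2))

# C0 @ C0.T reproduces Sigma0 up to the perturbation eps.
assert np.allclose(C0 @ C0.T, Sigma0, atol=1e-6)
```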

For more information, look up the Cholesky decomposition.