Given a (symmetric) positive definite matrix ${\bf A}\in\mathbb{R}^{N\times N}$, I know that it can always be expressed as a sum of $N$ rank-one matrices using the singular value decomposition ${\bf U\Sigma V}^{\rm T}$ of the corresponding Cholesky factor $\bf L$: $${\bf A} = {\bf L}{\bf L}^{\rm T} = {\bf U\Sigma V}^{\rm T}({\bf U\Sigma V}^{\rm T})^{\rm T} = \sum_{i,j=1}^N \sigma_i \sigma_j {\bf u}_i {\bf v}_i^{\rm T} {\bf v}_j {\bf u}_j^{\rm T} = \sum_{i=1}^N \sigma_i^2 {\bf u}_i {\bf u}_i^{\rm T},$$ since ${\bf v}_i^{\rm T}{\bf v}_j = \delta_{ij}$, with ${\bf u}_i$ being the $i$-th column of $\bf U$. So the matrix terms here are all outer products (dyads) of a vector with itself.
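This identity is easy to check numerically; here is a minimal sketch using NumPy (the random test matrix and the shift by $N\,{\bf I}$ are just a way to generate a positive definite example):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
B = rng.standard_normal((N, N))
A = B @ B.T + N * np.eye(N)          # random symmetric positive definite matrix

L = np.linalg.cholesky(A)            # A = L L^T
U, sigma, Vt = np.linalg.svd(L)      # L = U diag(sigma) V^T

# Reassemble A as a sum of N rank-one dyads sigma_i^2 u_i u_i^T
A_sum = sum(sigma[i] ** 2 * np.outer(U[:, i], U[:, i]) for i in range(N))
print(np.allclose(A, A_sum))         # True
```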
Can such a matrix also be expressed as the sum of a diagonal positive definite matrix $\bf D$ of full rank with $D_{ii} = d_i^2 > 0$ and a series of $M$ outer products ${\bf x}{\bf x}^{\rm T}$ (${\bf x}\in \mathbb{R}^N$)? That is:
$${\bf A} = {\bf D} + \sum_{k=1}^M {\bf x}_k{\bf x}_k^{\rm T}$$
Basically, this is just one large system of $N(N+1)/2$ quadratic equations with $N(M+1)$ unknowns $\{d_i, x_{k,i}\}$. Using small test values for $N$ and $M$, I was able to find numerical solutions to this equation system for specific example matrices, but it would be nice to have an analytic solution. I suspect this should always be possible, at least for large enough values of $M$.
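For reference, the kind of numerical experiment I ran looks roughly like this (a sketch with SciPy's generic `least_squares` solver; the test matrix, starting point, and the choice $N=3$, $M=2$ are arbitrary, and the solver may of course get stuck in a local minimum or violate $d_i^2 > 0$):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
N, M = 3, 2
B = rng.standard_normal((N, N))
A = B @ B.T + N * np.eye(N)          # example positive definite matrix

def residual(p):
    d = p[:N]                        # diagonal parameters d_i
    X = p[N:].reshape(M, N)          # row k of X is x_k
    R = A - np.diag(d ** 2) - X.T @ X  # X^T X = sum_k x_k x_k^T
    return R[np.triu_indices(N)]     # N(N+1)/2 independent equations

sol = least_squares(residual, rng.standard_normal(N * (M + 1)))
print(sol.cost)                      # near zero iff a solution was found
```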
If so, what is the minimal value of $M$ for which such a decomposition exists? What can be said about the relationship of ${\bf d}={\rm diag}({\bf D})$ and the different ${\bf x}_k$? Is there some $M$ for which the decomposition is unique?
If not, how can a counterexample be constructed?
Some thoughts on this:
- for $N=1$, this is trivial, as one can always choose $\bf D=A$ and $M=0$
- for $N\geq2$, I thought of choosing $D_{ii} = A_{ii}$ as a starting point and applying the Cholesky/SVD strategy on the difference ${\bf A}-{\bf D}$ to get the ${\bf x}_k$. However, this doesn't work: the difference then has a zero diagonal, so already for $N=2$ it is indefinite whenever $\bf A$ has a nonzero off-diagonal entry. So I guess a related question is: For which $N\times N$ diagonal matrices ${\bf D}$ does ${\bf A}-{\bf D}$ remain positive semidefinite? (Semidefiniteness suffices here, since $\sum_k {\bf x}_k{\bf x}_k^{\rm T}$ is positive semidefinite of rank at most $M$.)
- from a geometric perspective, a positive definite matrix ${\bf A}$ defines an $N$-dimensional ellipsoid $\{{\bf z}\in\mathbb{R}^N:{\bf z}^{\rm T}{\bf A}{\bf z}={\rm const.}\}$. If the matrix is diagonal, the semi-axes of this ellipsoid are aligned with the coordinate axes. In contrast, an outer-product-like term ${\bf x}{\bf x}^{\rm T}$ defines a pair of hyperplanes $\{{\bf z}\in\mathbb{R}^N:{\bf z}^{\rm T}{\bf x}{\bf x}^{\rm T}{\bf z}=({\bf x}^{\rm T}{\bf z})^2={\rm const.}\}$. So the above problem is equivalent to asking whether any $N$-dimensional ellipsoid can be understood as a "superposition" of an axis-aligned ellipsoid and a series of hyperplane pairs, although the meaning of "superposition" is somewhat vague in this context...
- the above system of equations can be written using the tensor product $\otimes$: $ \left(\sum_{i,j} a_{i,j}\,{\bf e}_i\otimes{\bf e}_j\right) = \left(\sum_i d_i^2\,{\bf e}_i\otimes{\bf e}_i\right) + \sum_k \left(\sum_i x_{k,i}{\bf e}_i\right) \otimes \left(\sum_j x_{k,j}{\bf e}_j\right) $. Maybe some tensor algebra can come in handy?
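The failure mode in the $D_{ii}=A_{ii}$ bullet above is easy to see numerically; here is a small sketch (the $2\times2$ example matrix and the shrink factors $t$ are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # a 2x2 positive definite example

# D_ii = A_ii zeroes out the diagonal of A - D, leaving a traceless
# matrix with eigenvalues +/- |A_12|: indefinite.
D_full = np.diag(np.diag(A))
print(np.linalg.eigvalsh(A - D_full))

# Shrinking D to t*D_full, the smallest eigenvalue of A - D grows as t
# decreases; for this example it crosses zero at t = 1/2.
for t in (0.25, 0.5, 0.99):
    print(t, np.linalg.eigvalsh(A - t * D_full).min())
```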