
(Deprecated; retracted based on the discussion below (OLD)) What is a good way to factor a symmetric matrix $X$ as an outer product of two vectors $u$ and $v$? That is, find two vectors $u$ and $v$ such that $X = uv^T$, where $X$ is a symmetric matrix.

(Updated/new question of interest (NEW)) Given a symmetric matrix $X$, what is a way to find the best possible vectors $u$ and $v$ such that the error $\|X - uv^T\|$ under, say, an $\ell_2$ loss is minimized? Feel free to note any optimality conditions or assumptions that bear on this problem.

  • This is nearly identical to [your previous question](http://math.stackexchange.com/q/102178/856) with the word "diagonal" replaced with "symmetric". As before, this is not possible in general. This time, it is because first of all $uv^T$ is symmetric only if $u$ and $v$ are parallel, and secondly if $X$ has rank $>1$, for example $X = \begin{bmatrix}1&0\\0&1\end{bmatrix}$, there is no $u$ such that $X = uu^T$. (2012-01-25)
  • However, you can write your matrix as a sum of outer products of vectors, i.e. $X = \sum_{i = 1}^r \alpha_i u_i u_i^T$, where the least possible $r$ is the rank of the matrix. Do you want to ask about this instead? (2012-01-25)
  • @Rahul, thank you for your answer. What if I would like to have an approximation of the matrix $X$? What is a way to find the best possible $u, v$ under, say, an $\ell_2$ loss over $X - uv^T$? (2012-01-25)
  • @Calle: I believe that is a spectral (eigen)decomposition. (2012-01-25)
  • If you want to change the Question, per your reply to @RahulNarain, then please edit the Question to reflect that change. An Answer along the lines of Calle's comment would be forthcoming. (2012-01-25)
  • @hardmath Question re-edited. Thanks for the direction. (2012-01-25)

1 Answer


Since $X$ is symmetric, it is always possible to put it in diagonal form using an orthonormal basis, i.e., $X = UDU^T$, where $U$ is orthogonal and $D = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ contains the eigenvalues of $X$. From this it follows that $$X = \sum_{i = 1}^n \lambda_i u_i u_i^T$$ where $u_i$ is the $i$th column of $U$.
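
For concreteness, here is a minimal NumPy sketch of this decomposition (the symmetric matrix `X` is an arbitrary example; `numpy.linalg.eigh` is NumPy's symmetric eigensolver):

```python
import numpy as np

# Arbitrary symmetric example matrix.
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized to symmetric/Hermitian matrices; it returns the
# eigenvalues lam (in ascending order) and an orthonormal eigenbasis U
# (as columns), so that X = U @ diag(lam) @ U.T.
lam, U = np.linalg.eigh(X)

# X equals the sum of the rank-1 outer products lam_i * u_i u_i^T.
X_rebuilt = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(len(lam)))
assert np.allclose(X, X_rebuilt)
```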

Now, assuming the eigenvalues are ordered by absolute value, $|\lambda_1| \geq |\lambda_2| \geq \dots \geq |\lambda_n|$, the best approximation $\tilde X = u v^T$, in the sense that $\|X - \tilde X\|_2$ is minimized (here $\| \cdot \|_2$ denotes the Frobenius norm), is given by $$\tilde X = \lambda_1 u_1 u_1^T$$ (you can of course write $v = \lambda_1 u_1$ if you want).
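
Continuing the sketch above, the best rank-1 approximation keeps only the eigenvalue of largest magnitude; the residual Frobenius error is then the root sum of squares of the discarded eigenvalues:

```python
import numpy as np

X = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, U = np.linalg.eigh(X)      # eigenvalues in ascending order
k = np.argmax(np.abs(lam))      # index of the largest |eigenvalue|
u1 = U[:, k]

# Best rank-1 approximation in the Frobenius norm: lambda_1 * u1 u1^T.
X_tilde = lam[k] * np.outer(u1, u1)

# The error equals the root sum of squares of the remaining eigenvalues.
err = np.linalg.norm(X - X_tilde, 'fro')
rest = np.delete(lam, k)
assert np.isclose(err, np.sqrt(np.sum(rest**2)))
```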

  • +1, nice answer. However, the Frobenius norm is typically denoted $\lVert\cdot\rVert_F$. The notation $\lVert\cdot\rVert_2$ denotes the operator norm induced by the $2$-norm on vectors, $\lVert A\rVert_2 = \max_{x\ne0}\lVert Ax\rVert_2/\lVert x\rVert_2$ (see the sketch after these comments). (2012-01-26)
  • Sometimes this is true, sometimes not (e.g. Horn and Johnson use $\| \cdot \|_2$ for the Frobenius norm in their book _Matrix Analysis_). (2012-01-26)
  • A truncation of the outer product sum is what's often referred to as a "low-rank approximation". See [here](http://math.stackexchange.com/a/92176) for one practical application. (2012-01-26)
  • @all: Thank you. Do look at the edited question at http://math.stackexchange.com/questions/102321/suitable-loss-function-for-order-preserving-factoring-of-a-matrix. It is in a similar setting, but that question deals with a suitable loss function for the problem. Thank you. (2012-01-26)
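
For concreteness, here is a minimal NumPy sketch of the distinction between the two norms discussed above (the matrix `A` is an arbitrary example):

```python
import numpy as np

# Arbitrary example matrix with singular values 4 and 3.
A = np.array([[3.0, 0.0],
              [0.0, 4.0]])

fro = np.linalg.norm(A, 'fro')  # Frobenius norm: sqrt(3^2 + 4^2) = 5
op = np.linalg.norm(A, 2)       # operator 2-norm: largest singular value = 4

print(fro, op)  # 5.0 4.0
```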