When we have a symmetric matrix $A = LL^*$, we can obtain $L$ using the Cholesky decomposition of $A$ ($L^*$ is $L$ transposed).
Can anyone tell me how we can get this same $L$ using SVD or Eigen decomposition?
Thank you.
I am not sure why anyone would want to obtain a Cholesky decomposition from an SVD or an eigendecomposition, but anyway, let's say $A$ is positive definite: write the eigendecomposition as $A = E D E^*$ and set $M = E D^{1/2}$, so that $M M^* = A$. Now take a QR decomposition $M^* = QR$ with $R$ upper triangular. Then $A = M M^* = R^* Q^* Q R = R^* R$, so $L = R^*$ is the Cholesky factor (after flipping the signs of any rows of $R$ with a negative diagonal entry).
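A minimal numpy sketch of this recipe (the test matrix here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)       # a random SPD test matrix

d, E = np.linalg.eigh(A)          # A = E diag(d) E^T
M = E * np.sqrt(d)                # M M^T == A, but M is not triangular

Q, R = np.linalg.qr(M.T)          # M^T = Q R  =>  A = R^T R
s = np.sign(np.diag(R))
L = (R * s[:, None]).T            # flip row signs so diag(L) > 0

assert np.allclose(L @ L.T, A)
assert np.allclose(L, np.linalg.cholesky(A))
```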
Several people in this thread asked why you would ever want to do Cholesky on a non-positive-definite matrix. I thought I'd mention a case that would motivate this question.
Cholesky decomposition is used to generate Gaussian random variates given a covariance matrix, via $x_i = \sum_j L_{ij} z_j$ where each $z_j \sim \mathcal{N}(0,1)$ and $L$ is the Cholesky factor of the covariance matrix.
A problem arises when the covariance matrix is degenerate, i.e. when the random variation described by the covariance is contained in a lower-dimensional space. One or more of the eigenvalues is zero, the matrix is not positive definite, and calls to Cholesky decomposition routines fail. Near this case, things also tend to be extremely sensitive to numerical round-off (i.e., the covariance is ill-conditioned).
There shouldn't be any inherent problem with generating points on this "flat" Gaussian, but the textbook algorithm based on Cholesky breaks.
Eigendecomposition can be used as an alternative for this problem, if you have a robust implementation. Some eigendecomposition algorithms don't do well in this case either, but there are algorithms that are robust; see the sketch below.
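For instance, a minimal numpy sketch (the function name and the rank-1 example covariance are made up for illustration; `np.linalg.eigh` is a standard robust symmetric eigensolver):

```python
import numpy as np

def sample_flat_gaussian(cov, n_samples, rng=None):
    """Draw samples from N(0, cov) even when cov is singular (PSD),
    where np.linalg.cholesky(cov) would raise LinAlgError."""
    rng = np.random.default_rng() if rng is None else rng
    d, E = np.linalg.eigh(cov)        # symmetric eigendecomposition
    d = np.clip(d, 0.0, None)         # clip tiny negative round-off
    M = E * np.sqrt(d)                # M @ M.T == cov
    z = rng.standard_normal((cov.shape[0], n_samples))
    return (M @ z).T                  # each row ~ N(0, cov)

v = np.array([1.0, 2.0, 3.0])
cov = np.outer(v, v)                  # rank-1, degenerate covariance
samples = sample_flat_gaussian(cov, 1000)
```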
There is an interesting relationship between the eigendecomposition of a symmetric matrix and its Cholesky factor: say $A = L L'$ with $L$ the Cholesky factor, and $A = E D E'$ the eigendecomposition. Then $L = E D^{\frac{1}{2}} F$ for some orthogonal matrix $F$ (this is in fact an SVD of $L$), i.e. the Cholesky factor is a rotated form of the matrix of eigenvectors scaled by the diagonal matrix of square-root eigenvalues. So you can get $L$ from $E D^{\frac{1}{2}}$ through a series of orthogonal rotations aimed at making the elements above the diagonal zero, as in the sketch below.
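Those rotations can be spelled out with Givens rotations applied from the right; a sketch (the helper name is made up, and in practice you would just take a QR decomposition of $(E D^{\frac{1}{2}})'$):

```python
import numpy as np

def rotate_to_lower_triangular(M):
    """Zero the entries above the diagonal of M with Givens rotations
    applied from the right, so M = L F with F orthogonal."""
    L = np.array(M, dtype=float)
    n = L.shape[0]
    for i in range(n):
        for j in range(n - 1, i, -1):     # zero L[i, j] for each j > i
            a, b = L[i, i], L[i, j]
            if b == 0.0:
                continue
            r = np.hypot(a, b)
            c, s = a / r, b / r
            ci, cj = L[:, i].copy(), L[:, j].copy()
            L[:, i] = c * ci + s * cj     # rotate columns i and j
            L[:, j] = -s * ci + c * cj
    return L

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)
d, E = np.linalg.eigh(A)
L = rotate_to_lower_triangular(E * np.sqrt(d))
assert np.allclose(L @ L.T, A)
assert np.allclose(L, np.linalg.cholesky(A))
```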
Or you use the LU decomposition.
Anyhow, you don't normally calculate the Cholesky decomposition from the eigendecomposition or SVD; you use Gaussian elimination. See something like Golub and Van Loan, *Matrix Computations*.
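For concreteness, here is a minimal sketch of that elimination-style algorithm (the textbook Cholesky–Banachiewicz loop, written out by hand as an illustration rather than production code):

```python
import numpy as np

def cholesky_by_elimination(A):
    """Textbook Cholesky via symmetric Gaussian elimination (a sketch)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal entry: what remains of A[j, j] after earlier columns
        s = A[j, j] - L[j, :j] @ L[j, :j]
        if s <= 0:
            raise ValueError("matrix is not positive definite")
        L[j, j] = np.sqrt(s)
        # eliminate column j below the diagonal
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

B = np.random.default_rng(3).standard_normal((4, 4))
A = B @ B.T + np.eye(4)
assert np.allclose(cholesky_by_elimination(A), np.linalg.cholesky(A))
```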
Cholesky: $A=LL^*$
SVD: $A=U\Sigma V^*$
Provided $A$ is symmetric positive definite, its SVD coincides with its eigendecomposition and gives $A = \sum_i \lambda_i v_i v_i^T$, where each $v_i$ is a unit eigenvector.
Take $x_i = \sqrt{\lambda_i}\,v_i$ (each $\lambda_i > 0$ since $A$ is PD), and let $X = [x_1 \cdots x_n]$, i.e. each column of $X$ is one of the $x_i$. Then $A = \sum_i x_i x_i^T = X X^T$.
(To prove that $\sum x_i x_i^T = X X^T$, use the block multiplication property, with each $x_i$ treated as a block)
In practice, it's probably faster to use Gaussian elimination.
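A quick numpy check of this construction (the test matrix is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
A = B @ B.T + np.eye(3)           # symmetric positive definite

U, lam, Vt = np.linalg.svd(A)     # for SPD A, U == V and lam == eigenvalues
X = U * np.sqrt(lam)              # columns x_i = sqrt(lambda_i) v_i
assert np.allclose(X @ X.T, A)    # A = X X^T, though X is not triangular
```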
If you have the SVD of a positive semi-definite matrix, you can easily rewrite it as $L L^*$. However, this isn't the $L$ that the Cholesky decomposition would have computed.
$\begin{align} A &= U\Sigma V^* && \textrm{SVD def.}\\ U &= V &&\textrm{since $A$ is symmetric positive semi-definite} \\ A &= \left(U \sqrt{\Sigma}\right) \left(U \sqrt{\Sigma}\right)^* && \textrm{note that $\sqrt{\Sigma}$ is easily computed as it's diagonal} \\ A &= L L^* && \textrm{with...}\\ L &= U \sqrt{\Sigma} \end{align}$
Though this isn't the same $L$ as Cholesky computes (since it's not triangular), it does satisfy $A = L L^*$, which may be enough to be useful to some.
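A small numpy illustration, using a made-up rank-deficient matrix where `np.linalg.cholesky` would fail:

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])
A = v @ v.T                       # rank-1 PSD: np.linalg.cholesky(A) raises

U, sig, Vt = np.linalg.svd(A)
L = U * np.sqrt(sig)              # L = U sqrt(Sigma), not triangular
assert np.allclose(L @ L.T, A)
```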
There is an interesting a priori argument why there is no formula that derives the SVD from LU (other than, of course, something trivial): the singular values of $A$ are the square roots of the eigenvalues of $A^*A$, i.e. roots of a polynomial of arbitrary degree, and by Abel–Ruffini such roots cannot in general be expressed with finitely many arithmetic operations and radicals. The LU decomposition, by contrast, is computed in finitely many arithmetic operations. So no exact finite formula can take you from LU to the SVD; SVD algorithms are necessarily iterative.