3

At one step of an algorithm (i=1: something big) I have to draw a vector from the normal distribution $X_p^{(i)}\sim N((A^{(i)} + D^{(i)}_p)^{-1}m, (A^{(i)} + D^{(i)}_p)^{-1})$ for $p=1:P$, with a large $P$ (say 5,000).

$A^{(i)}$ is $K\times K$ symmetric positive definite (hence full rank $K$), and each $D^{(i)}_p$ is $K\times K$ diagonal, with $K$ much smaller than $P$. Computing $(A^{(i)})^{-1}$ is cheap relative to the rest of the calculation.

So I'd like an efficient way to get $(A + D_p)^{-1}$ (or a matrix square root of it); I drop the superscript $(i)$ from here on.

I've tried using the matrix inversion lemma (aka Woodbury) to get a quick update from $A^{-1}$ or $(A+D_p)^{-1}$ to $(A+D_{p+1})^{-1}$, but it leads me in circles because adding a diagonal $D_p$ is a full-rank update.

It feels like this should have a simple solution since the $D^{(i)}_p$ are all diagonal, but if there is one, it escapes me...
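For concreteness, here is a minimal NumPy sketch of the brute-force per-$p$ draw I'm trying to beat (the function name and interface are mine, not from any library):

```python
import numpy as np

def sample_direct(A, d, m, rng):
    """Draw one sample from N((A + D)^{-1} m, (A + D)^{-1}),
    where D = diag(d), via a dense Cholesky factorization of the
    precision matrix. Cost is O(K^3) per p.
    """
    Q = A + np.diag(d)                  # precision matrix A + D_p
    L = np.linalg.cholesky(Q)           # Q = L L^T
    mu = np.linalg.solve(Q, m)          # mean (A + D_p)^{-1} m
    z = rng.standard_normal(len(m))
    # With Q = L L^T, x = mu + L^{-T} z has covariance
    # L^{-T} L^{-1} = Q^{-1}, as required.
    return mu + np.linalg.solve(L.T, z)
```

Repeating this for $p = 1{:}P$ with $P = 5{,}000$ means $5{,}000$ Cholesky factorizations per iteration $i$, which is the cost I'd like to avoid.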

  • 0
    No problem. I'll confess to a less-than-clear problem statement as well! (2011-05-06)

1 Answer

1

I assume you mean $A$ is symmetric positive semidefinite with rank $k$. If you can do an efficient low-rank factorization $A = U U^T$, then the Woodbury update is the way to go. Further, if you know the rank $k$ beforehand, you can actually get the low-rank factorization in $\mathcal{O}(k^2 n)$, where $n$ is the size of $A$.
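Under that assumption (a tall factor $U$ of size $n \times k$ with $k \ll n$, and $D_p = \operatorname{diag}(d)$ with positive entries), a sketch of the Woodbury inverse looks like this; the helper name is mine:

```python
import numpy as np

def woodbury_inverse(U, d):
    """Inverse of (U U^T + diag(d)) via the Woodbury identity:
        (U U^T + D)^{-1} = D^{-1} - D^{-1} U (I_k + U^T D^{-1} U)^{-1} U^T D^{-1}
    U: (n, k) low-rank factor, d: (n,) positive diagonal.
    Cost is O(n k^2) instead of the O(n^3) of a dense inverse.
    """
    n, k = U.shape
    Dinv_U = U / d[:, None]                # D^{-1} U, computed in O(nk)
    S = np.eye(k) + U.T @ Dinv_U           # k x k capacitance matrix
    return np.diag(1.0 / d) - Dinv_U @ np.linalg.solve(S, Dinv_U.T)
```

The payoff comes entirely from the capacitance matrix $S$ being $k \times k$: only a small system is solved per $p$, with the diagonal inverted elementwise.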

  • 0
    Sorry, I stated the conditions poorly: $A$ is $K\times K$ symmetric positive definite. So, as I mentioned, the Woodbury identity doesn't buy me much; taking $A = LL^T$ gives me $D_p^{-1}-D_p^{-1}L(I+L^T D_p^{-1}L)^{-1} L^TD_p^{-1}$, where the inner matrix $I+L^T D_p^{-1}L$ is still a full $K\times K$ solve. (2011-01-29)