
I am given a $J \times J$ symmetric matrix $\Sigma$. For a subset $v$ of $\{1, ..., J\}$, let $\Sigma_{v}$ denote the associated square submatrix. I need an efficient scheme for inverting $\Sigma_v$ for potentially any subset $v$. Essentially, I will have to invert many different submatrices $\Sigma_v$, but I won't know which ones I need to invert in advance of running some program; I would rather invest in a good matrix decomposition at the outset, if one exists (or otherwise precompute whatever information is necessary, if not a decomposition).

I've messed around with the eigendecomposition a little bit, but wasn't able to coerce the inverse of a submatrix out of it.

UPDATE: Apparently the term for the type of submatrices I want to invert is "principal submatrix". I'm wondering if I can't make some progress on this via the Cholesky decomposition. Suppose I'm content to calculate $\Sigma_{jj}^{-1}$ for any $j \in \{1, 2, ..., J\}$, where $\Sigma_{jj}$ denotes the submatrix obtained by deleting rows/columns greater than $j$. Write $\Sigma = LL^T$ and let $Q = L^{-1}$. Write $ L = \begin{pmatrix}L_1 & \mathbf 0 \\ B & L_2\end{pmatrix}, Q = \begin{pmatrix}Q_1 & \mathbf 0 \\ C & Q_2\end{pmatrix} $ where $L_1$ and $Q_1$ are $j \times j$. It follows that $\Sigma_{jj} = L_1 L_1^T$ and $Q_1 = L_1^{-1}$, so that $\Sigma_{jj}^{-1} = Q_1^T Q_1$. So, once I have the Cholesky decomposition I have the inverses of all the leading principal submatrices. This doesn't solve the problem as stated, since I may need to deal with other principal submatrices, but it should be a useful partial solution.
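A quick numerical check of the observation above (a NumPy sketch; the positive-definite matrix here is randomly generated, purely for illustration):

```python
import numpy as np

# Factor Sigma = L L^T once and set Q = L^{-1}; then for every leading
# principal submatrix Sigma_jj, the inverse is Q1^T Q1, where Q1 is the
# leading j x j block of Q.
rng = np.random.default_rng(0)
J = 6
X = rng.standard_normal((J + 2, J))
Sigma = X.T @ X                  # symmetric positive definite
L = np.linalg.cholesky(Sigma)    # lower triangular, Sigma = L L^T
Q = np.linalg.inv(L)             # also lower triangular

for j in range(1, J + 1):
    Q1 = Q[:j, :j]
    assert np.allclose(Q1.T @ Q1, np.linalg.inv(Sigma[:j, :j]))
```

Since $Q$ is itself lower triangular, its leading $j \times j$ block is exactly $L_1^{-1}$, which is why extracting a corner of one precomputed matrix suffices.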

  • @user66612: I would be very happy to see it. (2013-03-20)

3 Answers

8

I imagine the best you can do is build up or down one rank at a time to reach the desired submatrix. To reduce the dimension by one, the simple formula is

$\mathbf{A}^{-1} = \mathbf{E} - \frac{\mathbf{f}\mathbf{g}^T}{h}$

To see this, use the known inverse of the original, larger matrix: $\pmatrix{\mathbf A & \mathbf b\\ \mathbf c^T & d}^{-1} = \pmatrix{\mathbf E & \mathbf f\\ \mathbf g^T & h}$

to have $\pmatrix{\mathbf E & \mathbf f\\ \mathbf g^T & h}\pmatrix{\mathbf A & \mathbf b\\ \mathbf c^T & d} = \pmatrix{ \mathbf I & \mathbf 0 \\ \mathbf 0^T & 1}$

Now to find the quantity $\mathbf{A}^{-1}$, simply left-multiply the equation by $\pmatrix{\mathbf{I} & -\mathbf{f}\frac{1}{h} \\ \mathbf{0}^T & 1}$, giving $ \pmatrix{\mathbf{E}- \mathbf{f}\frac{1}{h}\mathbf{g}^T & \mathbf{0} \\ \mathbf{g}^T & h}\pmatrix{\mathbf{A} & \mathbf{b} \\ \mathbf{c}^T & d} = \pmatrix{\mathbf{I} & -\mathbf{f}\frac{1}{h} \\ \mathbf{0}^T & 1}$. The upper-left block of this matrix equation is $\left( \mathbf{E} - \mathbf{f}\frac{1}{h}\mathbf{g}^T\right)\mathbf{A} = \mathbf{I}$, which shows the formula.
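The rank-1 "downdate" above is easy to verify numerically (a sketch; the test matrix is random and symmetric positive definite, so every principal submatrix is invertible and $h > 0$):

```python
import numpy as np

# Given the inverse [[E, f], [g^T, h]] of the bordered matrix
# [[A, b], [c^T, d]], recover A^{-1} = E - f g^T / h without refactoring A.
rng = np.random.default_rng(1)
X = rng.standard_normal((7, 5))
M = X.T @ X + np.eye(5)          # symmetric positive definite
Minv = np.linalg.inv(M)

E = Minv[:-1, :-1]
f = Minv[:-1, -1:]               # column vector
g = Minv[-1:, :-1]               # row vector
h = Minv[-1, -1]

A_inv = E - (f @ g) / h
assert np.allclose(A_inv, np.linalg.inv(M[:-1, :-1]))
```

Deleting a non-trailing row/column works the same way after a symmetric permutation that moves it to the last position.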

To go in the other direction (adding a row and column), you can use what is called the bordering method, as described in this answer
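For concreteness, here is a sketch of the bordering step for a symmetric matrix: given $\mathbf A^{-1}$, the inverse of $\pmatrix{\mathbf A & \mathbf b \\ \mathbf b^T & d}$ follows from the scalar Schur complement $s = d - \mathbf b^T \mathbf A^{-1} \mathbf b$ in $O(n^2)$ once $\mathbf A^{-1}\mathbf b$ is formed (the matrix here is random, for illustration):

```python
import numpy as np

# Bordering: extend A^{-1} to the inverse of [[A, b], [b^T, d]] using
# h = 1/s, f = -A^{-1} b / s, E = A^{-1} + (A^{-1} b)(b^T A^{-1}) / s,
# where s = d - b^T A^{-1} b is the Schur complement.
rng = np.random.default_rng(3)
X = rng.standard_normal((8, 5))
M = X.T @ X + np.eye(5)          # SPD, so every leading block is invertible
A, b, d = M[:-1, :-1], M[:-1, -1:], M[-1, -1]
A_inv = np.linalg.inv(A)

Ab = A_inv @ b
s = d - (b.T @ Ab).item()        # Schur complement (a scalar here)
f = -Ab / s
E = A_inv + (Ab @ Ab.T) / s
M_inv = np.block([[E, f], [f.T, np.array([[1.0 / s]])]])
assert np.allclose(M_inv, np.linalg.inv(M))
```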

  • For searching purposes: the very useful formula featured here is the Sherman-Morrison-Woodbury formula. (2013-04-03)
0

If we write $\Sigma_j = \Sigma_{[1..j-1]\cup[j+1..J]}$ for the matrix $\Sigma$ with the $j$th row and column deleted, the formulas for computing the updated Cholesky factor (upper triangular, with $\Sigma = L^T L$) are: $\Sigma = \begin{bmatrix} A_{11}&A_{12}&A_{13}\\A_{12}^T&A_{22}&A_{23}\\A_{13}^T&A_{23}^T&A_{33} \end{bmatrix} $ $L = \begin{bmatrix} L_{11}&L_{12}&L_{13}\\0&L_{22}&L_{23}\\0&0&L_{33} \end{bmatrix} $

$\Sigma_j = \begin{bmatrix} A_{11}&A_{13}\\A_{13}^T&A_{33} \end{bmatrix} $

$L_j = \begin{bmatrix} L_{11}&L_{13}\\0&\mathsf{cholupdate}(L_{33},L_{23})\end{bmatrix} $ where $\mathsf{cholupdate}(A,B) = \mathsf{chol}(A^TA+B^TB)$. These computations run in $O(n^2)$ time, so $K$ deletions cost $O(Kn^2)$.
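A sketch of this deletion update in NumPy (the function names `chol_upper` and `chol_delete` are mine, and the `cholupdate` here is implemented naively via a fresh factorization, which costs $O(n^3)$; a Givens-rotation implementation would achieve the stated $O(n^2)$):

```python
import numpy as np

def chol_upper(M):
    # Upper-triangular factor R with M = R^T R.
    return np.linalg.cholesky(M).T

def chol_delete(R, j):
    # Given upper-triangular R with Sigma = R^T R, return the factor of
    # Sigma with row/column j deleted, using the block formula above:
    # only the trailing block needs a rank-1 cholupdate by the deleted row.
    R11 = R[:j, :j]
    R13 = R[:j, j + 1:]
    r23 = R[j, j + 1:][None, :]          # the deleted row's trailing part
    R33 = R[j + 1:, j + 1:]
    R33_new = chol_upper(R33.T @ R33 + r23.T @ r23)   # naive cholupdate
    n = R.shape[0] - 1
    out = np.zeros((n, n))
    out[:j, :j] = R11
    out[:j, j:] = R13
    out[j:, j:] = R33_new
    return out

# Check against a direct factorization of the reduced matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))
Sigma = X.T @ X + 5 * np.eye(5)          # symmetric positive definite
R = chol_upper(Sigma)
j = 2
Sigma_j = np.delete(np.delete(Sigma, j, 0), j, 1)
Rj = chol_delete(R, j)
assert np.allclose(Rj.T @ Rj, Sigma_j)
```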

0

The solution adam W proposed generalizes to rank-$n$ updates; it is not necessary to proceed by repeated rank-1 updates. In his derivation, treat $h$ as a matrix instead of a scalar, replace the bottom-right $1$ with an identity matrix, and replace $\frac{1}{h}$ with $h^{-1}$. It still works.
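A numerical check of the block version (a sketch; the SPD matrix is random, and the deleted indices are taken to be the trailing block, which any index set can be permuted into):

```python
import numpy as np

# Block downdate: if the inverse of [[A, B], [C, D]] is partitioned
# conformably as [[E, F], [G, H]], then A^{-1} = E - F H^{-1} G.
# For symmetric Sigma, C = B^T and G = F^T.
rng = np.random.default_rng(1)
J, k = 6, 2                      # delete the trailing k rows/columns at once
X = rng.standard_normal((J + 3, J))
Sigma = X.T @ X + np.eye(J)      # symmetric positive definite
Sinv = np.linalg.inv(Sigma)

E = Sinv[:J - k, :J - k]
F = Sinv[:J - k, J - k:]
G = Sinv[J - k:, :J - k]
H = Sinv[J - k:, J - k:]

A_inv = E - F @ np.linalg.solve(H, G)    # F H^{-1} G without forming H^{-1}
assert np.allclose(A_inv, np.linalg.inv(Sigma[:J - k, :J - k]))
```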