
I've read that you should avoid computing a matrix inverse, as you generally don't need to, but I don't know the best way to avoid it. I need to compute:

$x = \mathbf v \mathbf A^{-1}\mathbf v^\top$

where $x$ is a scalar, $\mathbf v$ is a row vector, $\mathbf A$ is a symmetric positive definite matrix (but perhaps with eigenvalues close to $0$) and ${}^\top$ means transpose.

I'm using numpy/scipy so feel free to express an answer using their functions.


EDIT:

Any pros/cons of the least squares approach versus doing an eigenvector decomposition?

  • @J.M., thanks, speed is not an issue for me, so I will go the safe route. – 2011-11-23

3 Answers

2

v*inv(A) is the same as v/A in MATLAB notation, which solves a linear (least-squares) system rather than calculating the inverse.

So I guess I would use:

numpy.dot(numpy.linalg.lstsq(A, v)[0], v)  # lstsq returns (solution, residuals, rank, singular values); take [0]

I'm not sure whether the solver cares about row versus column vectors, but since $A$ is symmetric you can freely transpose things without changing the result (if it rejects the orientation of the vector you pass in, apply numpy.transpose).

If you have many different v for a single A, then you can factor A once (e.g. a Cholesky factorization, since A is SPD) and reuse the factorization to get a better solver; a sketch follows below. I didn't see such a solver built into numpy, but tools like MATLAB (and hopefully Octave) automatically switch to a faster solver for the SPD case.
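For instance, a sketch of the factor-once, solve-many pattern using scipy.linalg.cho_factor and scipy.linalg.cho_solve (the matrix and vectors below are made up purely for illustration):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    # Made-up SPD matrix and a batch of row vectors, for illustration only.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T + 5.0 * np.eye(5)      # symmetric positive definite
    vs = rng.standard_normal((10, 5))  # many different v for the same A

    # Factor A once (Cholesky), then reuse the factor for every right-hand side.
    factor = cho_factor(A)
    xs = [v @ cho_solve(factor, v) for v in vs]  # each entry is v A^{-1} v^T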

  • I found [cholesky](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.cholesky.html#numpy.linalg.cholesky) in the parallel directory: [decompositions](http://docs.scipy.org/doc/numpy/reference/routines.linalg.html#decompositions). – 2011-11-23
2

> I've read that you should avoid computing a matrix inverse, as you generally don't need to.

This statement is incomplete. In general, you have no need for the inverse of a matrix as a separate entity. In most cases, what you actually need is something of the form $A^{-1}b$, where $b$ is a column vector, and computing the explicit inverse $A^{-1}$ costs at least as much as solving $Ax = b$ directly (both take $O(n^3)$ work, but forming the inverse has a larger constant and is less accurate). So, if you don't need the inverse by itself, it is not worth calculating it in isolation.

Now, when you compute $A^{-1}b$, you're actually computing the solution $x$ of $Ax = b$, and all numerical algorithms treat it as such. So you should use numpy.linalg.solve, as follows:

x = numpy.linalg.solve(A, v)
numpy.dot(v, x)

Or, in one line

numpy.dot(v, numpy.linalg.solve(A, v))

This differs from Jack's solution in that numpy.linalg.lstsq solves $Ax = b$ by minimizing $\|b - Ax\|_2^2$, while numpy.linalg.solve uses an LU decomposition and assumes that $A$ has full rank. (Being symmetric implies that it is square.)
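Since $A$ here is symmetric positive definite, you can also tell the solver so: scipy.linalg.solve accepts an assume_a argument that dispatches to a Cholesky-based LAPACK routine. A minimal sketch (the data is illustrative, not from the question):

    import numpy as np
    from scipy.linalg import solve

    # Illustrative SPD matrix and vector.
    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    v = np.array([1.0, 2.0])

    # assume_a='pos' lets SciPy use a Cholesky-based solver for SPD matrices.
    x = v @ solve(A, v, assume_a='pos')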

1

If the computational cost is not an issue, you can find the singular value decomposition $A = U\Sigma V^\top$, invert only $\Sigma$, and then do the multiplication as $x = (\mathbf v V)\,\Sigma^{-1}\,(\mathbf v U)^\top$.

  • The $\Sigma^{-1}$ can either be a true inverse or a pseudoinverse (reciprocate those diagonal entries larger than machine epsilon times the matrix norm), of course. Then again, since $A$ is symmetric positive definite, the SVD and the eigendecomposition are one and the same. – 2011-11-23
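Putting this answer and the comment together, a minimal sketch of the SVD route with a pseudoinverse-style cutoff (the data and the exact tolerance convention are illustrative choices, not prescribed above):

    import numpy as np

    # Made-up SPD matrix and vector, for illustration only.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M @ M.T + 4.0 * np.eye(4)  # symmetric positive definite
    v = rng.standard_normal(4)

    # For an SPD matrix the SVD coincides with the eigendecomposition.
    U, s, Vt = np.linalg.svd(A)

    # Reciprocate only the singular values above a cutoff, in the spirit of
    # "machine epsilon times the matrix norm" (s[0] is the 2-norm of A; the
    # max-dimension factor follows numpy.linalg.pinv's convention).
    tol = np.finfo(A.dtype).eps * max(A.shape) * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)

    x = (v @ Vt.T) * s_inv @ (U.T @ v)  # x = (v V) Sigma^{-1} (v U)^T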