
I have a sparse $m \times n$ matrix $X$, with $m > n$, and I am trying to solve the least squares problem $Xa=b$. I need to solve it multiple times for different $b$'s, so I compute a Cholesky decomposition of $Y=X^TX$ once and solve $Ya=X^Tb$ for each $b$.
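To make the setup concrete, here is a minimal dense sketch of what I mean (sizes and variable names are just for illustration; in my actual problem $X$ is sparse, so a sparse Cholesky such as CHOLMOD would stand in for scipy's dense routines):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
m, n = 50, 10                       # placeholder sizes, m > n
X = rng.standard_normal((m, n))     # dense stand-in for the sparse X

Y = X.T @ X                         # normal-equations matrix
factor = cho_factor(Y)              # Cholesky factorization, computed once

for _ in range(3):                  # reuse the factorization for several b's
    b = rng.standard_normal(m)
    a = cho_solve(factor, X.T @ b)  # solve Y a = X^T b
```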

The problem is that I need to add $k$ rows ($k < n$) to $X$, where each additional row contains only one nonzero element. Then $Y$ becomes $Y' = X^TX + W^TW$, where $W$ is the $k \times n$ matrix consisting of the additional rows. Since $W^TW$ is a diagonal matrix, another way to write this is $Y' = X^TX + \operatorname{diag}(v)$, where $v$ is an $n \times 1$ vector whose nonzero entries are the squares of the nonzero elements of the additional rows.

I do not want to recompute the decomposition every time rows are added. I looked at the Woodbury matrix identity for $Y' = X^TX + W^TW$, but it is not very useful when $k$ is close to $n$. Is there a special case for $Y' = X^TX + \operatorname{diag}(v)$ (maybe something like the Sherman–Morrison formula)?
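To be concrete about the Woodbury route I tried: the identity can be applied without forming any explicit inverse, reusing the existing factorization of $Y$. A minimal sketch along those lines (the function name and the dense handling of $W$ are illustrative only); the dense $k \times k$ capacitance solve is exactly what makes it unattractive when $k$ is close to $n$:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def woodbury_solve(factor, W, rhs):
    """Solve (Y + W^T W) a = rhs, reusing an existing cho_factor(Y).

    Woodbury: (Y + W^T W)^{-1} = Y^{-1} - Y^{-1} W^T S^{-1} W Y^{-1},
    with the k x k capacitance matrix S = I_k + W Y^{-1} W^T.
    Cost: k + 1 triangular solves plus one dense k x k solve,
    which only pays off while k << n.
    """
    z = cho_solve(factor, rhs)        # Y^{-1} rhs
    U = cho_solve(factor, W.T)        # Y^{-1} W^T, shape (n, k)
    S = np.eye(W.shape[0]) + W @ U    # capacitance matrix
    return z - U @ np.linalg.solve(S, W @ z)
```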

Other suggestions would also be appreciated (including pointers to related references, a different decomposition, or a change to the whole approach). Thank you.

  • Try the new http://scicomp.stackexchange.com/questions (2012-01-14)
  • For example, see http://scicomp.stackexchange.com/questions/687/up-downdating-methods-for-a-series-of-normal-equations (2012-01-14)
  • Thanks for your time and concern, Will Jagy. Really appreciated. (2012-02-08)

1 Answer


Your use of notation is quite confusing.

The one thing that would appear to work is $k$ applications of Sherman–Morrison, once for each added row. When adding row $i$, where $1 \leq i \leq k$, let the row vector called $v^T$ in the Wikipedia article be the new row, and let the column vector $u$ be the transpose of that row.
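For concreteness, a minimal numpy sketch of those $k$ applications, under the assumption that the inverse is held explicitly (variable names are illustrative):

```python
import numpy as np

def sherman_morrison(Y_inv, u, v):
    """Rank-1 update of an explicit inverse:
    (Y + u v^T)^{-1} = Y^{-1} - (Y^{-1} u)(v^T Y^{-1}) / (1 + v^T Y^{-1} u)."""
    Yu = Y_inv @ u
    vY = v @ Y_inv
    return Y_inv - np.outer(Yu, vY) / (1.0 + v @ Yu)

def add_rows(Y_inv, W):
    # Each new row w of W contributes the rank-1 term w w^T,
    # so u = v = w in the formula above.
    for w in W:                     # W is k x n, one row at a time
        Y_inv = sherman_morrison(Y_inv, w, w)
    return Y_inv
```

If a Cholesky factor is kept instead of the inverse, the analogous tool is a rank-one Cholesky update (cholupdate in LINPACK/MATLAB terminology), which modifies the factor in $O(n^2)$ per added row instead of refactoring from scratch.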

  • Thanks Will, and sorry about my notation; it is due to lack of familiarity, I guess. It will work, but the computational cost will be the same as with the Woodbury matrix approach. However, I do not store (or calculate) the inverse of the matrix; I just use the decomposition when I need to multiply something by the inverse. Using, say, Sherman–Morrison I calculate the solution directly, not the inverse. So applying Sherman–Morrison recursively is a little problematic for me. (2012-01-11)
  • @longquestion: "It will work, but computational cost will be same as Woodbury matrix approach." - but of course; $k$ successive applications of Sherman–Morrison is in fact equivalent to an application of Woodbury. Why are you using the normal equations anyway? Are you forming $\mathbf X^\top\mathbf X$, or are you in fact using the method of ["seminormal" equations](http://dx.doi.org/10.1016/0024-3795(87)90101-7)? (2012-01-12)
  • @J.M. For what it may be worth, the book "Kalman Filtering: Theory and Practice" by Mohinder S. Grewal and Angus P. Andrews, second edition 2001, gives the "normal equation" for least squares of $Hx - z$, on page 9, as $$ \hat{x} = (H^T H)^{-1} \; H^T z $$ as long as $H^T H$ is, in fact, invertible. Some of the OP's peculiar usages fit an engineer being thrown into filter programming without much mathematics training. I made photocopies of their 2007 book on inertial navigation, with Lawrence R. Weill. No actual use to me so far. (2012-01-12)
  • @J.M. I just use the normal equations because my current library does not support sparse QR decomposition (or any related thing) to solve $Xa=b$. I know the normal equations may become singular (although $X$ has independent columns) due to finite precision, but that is unlikely to happen in my problem. Also … (2012-01-12)