
Assume I have, on secondary memory such as an SSD, a matrix $A \in \mathbb{R}^{n\times n}$ that is too large to fit in main memory. I want to compute a (virtually) upper triangular matrix $U$ via the row-pivoted LU decomposition of $A$ in order to solve $Ax = b$, where $b$ is stored in main memory. My main-memory limitation only allows me to store $U$ (specifically, the nontrivial part of $U$), as well as $b$ and enough workspace for the computations.

I have been told I can use a one-dimensional array, say $s$, of maximum size $\frac{n(n+1)}{2}+2n$ to store all nonzero elements of $U$ in main memory (one row at a time). This way I only ever hold part of the original $A$ in main memory and can still solve $Ax=b$.

I'm thinking a (possibly tweaked) version of Gaussian elimination with partial row pivoting could be what I need: form the augmented matrix $(A \mid b)$ and reduce it as $L^{-1}(A \mid b) = (U, c)$, where $(U, c)$ is virtually upper triangular. For example: $$ (U, c) = \left[\begin{array}{rrrrr|r} & & 3 & 3 & 3 & 3 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ & & & & 5 & 5 \\ & 2 & 2 & 2 & 2 & 2 \\ & & & 4 & 4 & 4 \end{array}\right] $$ then: $s = \left(3 ~ 3 ~ 3 ~ 3 ~ 1 ~ 1 ~ 1 ~ 1 ~ 1 ~ 1 ~ 5 ~ 5 ~ 2 ~ 2 ~ 2 ~ 2 ~ 2 ~ 4 ~ 4 ~ 4\right)$
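To make the packed storage concrete, here is a small Python sketch (the function name `pack_rows` and the row representation are my own, not from any library) that flattens the nontrivial tail of each row of the example $(U, c)$ above, in row order, into the one-dimensional array $s$:

```python
# Pack the nontrivial part of each row of a virtually upper triangular
# augmented matrix (U, c) into a flat 1-D array s, one row at a time.
# Leading zeros before each row's pivot column are simply skipped.

def pack_rows(rows):
    """rows: list of (pivot_col, values), values = U[i, pivot_col:] plus c_i."""
    s = []
    for _, values in rows:
        s.extend(values)
    return s

# The example from the post; pivot columns are 1-based.
rows = [
    (3, [3, 3, 3, 3]),        # pivot in column 3: U[1, 3:5] and c_1
    (1, [1, 1, 1, 1, 1, 1]),  # pivot in column 1
    (5, [5, 5]),              # pivot in column 5
    (2, [2, 2, 2, 2, 2]),     # pivot in column 2
    (4, [4, 4, 4]),           # pivot in column 4
]

s = pack_rows(rows)
print(s)       # the flat array s from the post
print(len(s))  # 20, within the bound n(n+1)/2 + 2n = 25 for n = 5
```

Note the worst case (an already upper triangular $U$ plus $c$) needs exactly $\frac{n(n+1)}{2}+n$ entries, which is why the stated bound suffices.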

My question is: is there an existing method or algorithm that determines $U$ and stores its nonzero elements without ever loading the entire $A$ into main memory?

Thanks

edit: a possible and partial algorithm is shown here: https://i.stack.imgur.com/FTLRl.png

  • use `\times` instead of `x` in $\mathbb R^{n\times n}$ (2017-02-20)
  • Thanks for the reminder @zwim. I just edited my post to reflect the correct formatting. (2017-02-20)

1 Answer


I don't know if such an algorithm exists, but generally we prefer reducing the dimension of the sub-problems by working with block tridiagonal matrices, as below.

$$A = \begin{pmatrix} B_1 & C_1 & & \\ A_2 & B_2 & C_2 & \\ & \ddots & \ddots & \ddots \\ & & A_n & B_n \end{pmatrix}$$

with each $B_i$ a square matrix in $\mathbb R^{s_i\times s_i}$, where $S=\sum\limits_{i=1}^n s_i^2$ is as small as possible.

If $A_i=0$ and $C_i=0$ we have a block diagonal matrix, and solving $Ax=y$ immediately reduces to solving $n$ smaller systems $B_i\tilde x=\tilde y$.

If not, then $A_i$ and $C_i$ are chosen to be respectively upper and lower triangular, so as to still concentrate the work in solving $B_i\tilde x=\tilde y$; and since the $A_i, C_i$ are triangular, it is immediate to go back to the original $x, y$.

These kinds of matrices arise naturally when solving physical problems over meshed objects, since a node in the mesh has only a finite number of neighbours.

Similarly, if the matrix generated by your system is sparse, try to take advantage of the geometry of the system to collect the zeros in a determined area of the matrix.
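Going back to the block diagonal case above: a minimal pure-Python sketch (the helper `solve_dense` and all names are mine) showing that $Ax=y$ with $A=\operatorname{diag}(B_1,B_2)$ splits into independent small solves:

```python
def solve_dense(B, y):
    """Solve B x = y by Gaussian elimination with partial pivoting (small dense block)."""
    n = len(B)
    M = [row[:] + [y[i]] for i, row in enumerate(B)]  # augmented copy [B | y]
    for k in range(n):
        # partial pivoting: bring the largest entry in column k to the diagonal
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    # back substitution
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Block diagonal A = diag(B1, B2): solve each block against its slice of y.
B1 = [[2.0, 1.0], [1.0, 3.0]]
B2 = [[4.0]]
y = [3.0, 4.0, 8.0]
x = solve_dense(B1, y[:2]) + solve_dense(B2, y[2:])
print(x)  # [1.0, 1.0, 2.0]
```

Each block solve only needs its own $B_i$ and slice of $y$ in memory, which is the point of making $S=\sum s_i^2$ small.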


In case your system is not sparse and does not reduce easily to a block matrix, then I see the need for a line-by-line Gauss elimination.

Though I do not see the need for a full LU factorization in this case: if you still have to load $L$ and $U$ line by line, then it is probably preferable to solve $Ax=y$ directly via the modified Gauss elimination, because the loading time will largely exceed any computation time arising in the elimination itself.
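A sketch of what such a line-by-line elimination might look like (all names are my own; the generator stands in for reading rows of the augmented system $(A \mid b)$ from the SSD one at a time). It keeps only the nontrivial tail of each reduced row, mirroring the packed storage in the question, but it performs no stability-motivated row interchanges and assumes $A$ is nonsingular, so it is a structural sketch rather than a production solver:

```python
def stream_solve(row_iter, n):
    """Solve A x = b given rows of the augmented matrix [A | b] one at a time.

    pivots maps a pivot column j to the reduced row tail [U[j:], c_j], i.e.
    only the nontrivial part of each row is retained in memory.
    """
    pivots = {}
    for row in row_iter:                 # row = list of n+1 floats: A_i plus b_i
        r = list(row)
        # eliminate against already-committed pivot rows, in column order
        for j in sorted(pivots):
            if r[j] != 0.0:
                tail = pivots[j]
                f = r[j] / tail[0]
                for k in range(j, n + 1):
                    r[k] -= f * tail[k - j]
        # first surviving entry becomes a new pivot (crude tolerance; assumes
        # a nonsingular A, otherwise next() would find no nonzero entry)
        j = next(k for k in range(n) if abs(r[k]) > 1e-12)
        pivots[j] = r[j:]                # store only the nontrivial tail
    # back substitution over pivot columns in decreasing order
    x = [0.0] * n
    for j in sorted(pivots, reverse=True):
        tail = pivots[j]
        x[j] = (tail[-1] - sum(tail[k - j] * x[k] for k in range(j + 1, n))) / tail[0]
    return x

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [4.0, 10.0, 24.0]
x = stream_solve((A[i] + [b[i]] for i in range(3)), 3)
print(x)  # [1.0, 1.0, 1.0]
```

Note the rows end up "virtually" triangular in arrival order, exactly as in the $(U, c)$ example in the question; each incoming row costs at most $O(n)$ per stored pivot row, and peak memory stays within the $\frac{n(n+1)}{2}+2n$ bound.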

  • Thanks for your reply, @zwim. The system I will be using is not easily reduced into a block matrix, unfortunately, and that's why I was trying to find a way to modify Gaussian elimination to perhaps load a part of my system $A$ at a time, process and save, then load more… This may sound silly, but are there any specifics I need to consider before applying a line-by-line Gaussian elimination? (2017-02-20)
  • Oh, also I have the following partial algorithm that I was told should theoretically do what I want, except that it will store the entire $U$, not only its nontrivial part. I'm having difficulty understanding this algorithm and whether it is the correct one: (2017-02-20)
  • @dddx I have only practiced with block matrices, so I can only answer with generalities for your case. (2017-02-20)
  • Thanks @zwim, hope someone can find the time to have a look at my algorithm and see if we can make sense of it... (2017-02-20)