Almost all sparse direct solver packages split the factorization into two stages:
- symbolic factorization computes an ordering, often using nested dissection or an approximate minimum degree algorithm, chosen so that the factors will be as sparse as possible, and allocates space to hold the result. The values of the matrix entries are either not used at all or used only to make estimates about pivoting.
- numeric factorization computes the factors, given the ordering and sparsity pattern produced by the symbolic factorization.
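In PETSc the two stages are exposed directly through the low-level Mat interface, so one symbolic factorization can serve many numeric factorizations. Below is a minimal sketch; the variable names, the loop, and the choice of ordering are illustrative, and error checking and matrix assembly are omitted.

```c
Mat           A, F;           /* A assembled elsewhere; its nonzero pattern stays fixed */
Vec           b, x;           /* right-hand side and solution, created elsewhere */
IS            rowperm, colperm;
MatFactorInfo info;
PetscInt      nsteps = 10;    /* illustrative */

MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &F);
MatGetOrdering(A, MATORDERINGND, &rowperm, &colperm); /* nested dissection ordering */
MatFactorInfoInitialize(&info);
MatLUFactorSymbolic(F, A, rowperm, colperm, &info);   /* symbolic: ordering + allocation, done once */
for (PetscInt step = 0; step < nsteps; step++) {
  /* ... refill the entries of A without changing its nonzero pattern ... */
  MatLUFactorNumeric(F, A, &info);                    /* numeric: reuses the symbolic data */
  MatSolve(F, b, x);
}
```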
In most circumstances, symbolic factorization is much less expensive than numeric factorization, so SAME_NONZERO_PATTERN offers limited benefit. This changes when running in parallel with many processes, because the symbolic factorization does not scale as well (and some popular packages, including MUMPS, compute it in serial).
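For example, with the pre-3.5 KSPSetOperators() calling sequence (which took an explicit MatStructure flag, the same signature assumed at the end of this answer), the flag decides whether the symbolic stage is repeated. A minimal sketch, assuming the entries of A change between solves but its pattern does not:

```c
for (PetscInt step = 0; step < nsteps; step++) {
  /* ... update the entries of A, pattern unchanged ... */
  KSPSetOperators(ksp, A, A,
                  step == 0 ? DIFFERENT_NONZERO_PATTERN  /* full symbolic + numeric */
                            : SAME_NONZERO_PATTERN);     /* numeric factorization only */
  KSPSolve(ksp, b, x);
}
```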
It is very unlikely that your factor of 100 comes from the symbolic factorization stage. Perhaps you are actually reusing the factors from the previous iteration? For performance questions like this, it usually helps to run with -log_summary and compare times and load balance for the Mat*FactorSym and Mat*FactorNum events. Send the output to petsc-users@mcs.anl.gov or petsc-maint@mcs.anl.gov if you would like help interpreting the output.
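A typical invocation might look like the line below; the executable name is hypothetical, and the solver-package option assumes the PETSc versions that also provided -log_summary:

```
mpiexec -n 8 ./app -pc_type lu -pc_factor_mat_solver_package mumps -log_summary
```

In the resulting table, compare the times and max/min ratios (the load-balance indicator) of the Mat*FactorSym and Mat*FactorNum rows as you vary the number of processes.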
As for "updating" the factors after small changes in the matrix entries, attempts to do this have typically been unsuccessful. An alternative which I recommend is to use the old factorization as a preconditioner for the new matrix. For nonlinear problems using SNES, you can experiment using -snes_lag_preconditioner
or -snes_lag_jacobian
. If you use KSP directly, then pass SAME_PRECONDITIONER
to KSPSetOperators()
.
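Concretely, with the same pre-3.5 KSPSetOperators() signature as above, lagging the factorization looks like the sketch below; relag is an illustrative refresh interval, not a PETSc parameter:

```c
for (PetscInt step = 0; step < nsteps; step++) {
  /* ... update the entries of A ... */
  KSPSetOperators(ksp, A, A,
                  step % relag == 0 ? SAME_NONZERO_PATTERN   /* refactor occasionally */
                                    : SAME_PRECONDITIONER);  /* keep the old factors as the PC */
  KSPSolve(ksp, b, x);
}
```

With -pc_type lu and a true Krylov method (not preonly), this applies the stale factors as a preconditioner while the matrix-vector products use the current A, so a few extra Krylov iterations are the price paid for skipping the factorization.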