6

I want to solve a system of linear equations of size 300x300 (300 variables & 300 equations) exactly (not in floating point, as dgesl does, but with fractions in the answer). All coefficients in this system are integers (e.g. 32-bit), and the system has exactly one solution. There are non-zero constant terms in the right-hand column (b).

A*x = b
  • where A is the matrix of coefficients and b is the vector of constant terms.
  • the answer is the vector x, given in rational numbers (fractions of pairs of very long integers).

The matrix A is dense (the general case), but it can have up to 60% zero coefficients.

What is the best way (fastest algorithm for x86/x86_64) to solve such a system?

Thanks!

PS: Typical answers for such systems have integers in the fractions up to 50-80 digits long, so please don't suggest anything based only on floats/doubles. They don't have the needed precision.

  • 0
    The problem is that I need to solve it EXACTLY (to get not a floating-point number, but a fraction). There is LINPACK with dgesl/sgesl, but it solves only in floating point (inexactly); I don't know of any library for a fractional solution. 2011-02-02
  • 4
    I wonder what's special about your matrices that makes the results have denominators only 80 digits long. I tried random 300x300 60%-sparse matrices with integers between 1 and 10, and typical denominators are around 500 digits long. 2011-02-03
  • 0
    Yaroslav Bulatov, you are right. 80 digits was with an easy 100x100 test matrix with entries in -100..+100 and a high share of zeros (~80%). On a 100x100 example with random entries in the range ±2 000 000 000 I got pairs of numbers up to 1000 digits in length. 2011-02-03

7 Answers

0

The Gaussian elimination algorithm is simple to program.

Define a rational number object which can be a list of numerator factors and a list of denominator factors. Write functions for doing the four arithmetic operations using the rational number objects. In your Gaussian elimination program, substitute an array of the rational number objects for the array of integers, and use your functions for the calculations.

You may or may not want the number objects to automatically reduce to simplest terms after a computation.

I don't think it would be fast, but it's pretty straightforward, and I don't think the execution time would be long enough to be a bother, maybe a few seconds.
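This approach can be sketched with Python's built-in `fractions.Fraction` standing in for the factor-list number object described above (the function name and the zero-pivot handling are my own illustration, not a tuned implementation):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Exact Gaussian elimination over the rationals.

    Pivoting here only avoids zero pivots; unlike the floating-point
    case, no pivoting is needed for numerical stability."""
    n = len(A)
    # augmented matrix [A | b] with exact rational entries
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for k in range(n):
        piv = next(r for r in range(k, n) if M[r][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [M[i][j] - f * M[k][j] for j in range(n + 1)]
    # back-substitution, still exact
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

On a small system such as A = [[2, 1], [1, 3]], b = [1, 1] this yields the exact solution [2/5, 1/5]. The catch is the one noted above: the numerators and denominators of intermediate entries grow quickly, which is what makes the naive approach slow at 300x300.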

  • 0
    It is slow and inefficient (the nightmare is adding arbitrary-precision numbers represented as lists of their factors), but I want a rather fast algorithm. The answer from user osgx lists two or three efficient ones. 2011-02-16
5

This was meant to be a comment -- in Mathematica it takes me 0.5 seconds for a 300x300 matrix with around 60% zeros and 40% positive integers uniformly sampled from 1-10.

To integrate the Mathematica library with C, you need to use the MathLink interface. I'm not sure if you can link it statically; it may require launching Mathematica's kernel along with your program in order to use the interface.

The relevant function is LinearSolve. Here's the complete code I used to run the timing comparison:

mat1 = RandomVariate[BernoulliDistribution[.4], {300, 300}];
mat2 = RandomInteger[{1, 10}, {300, 300}];
mat3 = mat1*mat2;
answer = RandomInteger[{1, 10}, 300];
result = LinearSolve[mat3, answer]

I don't know which algorithm they use. They say version 8 is about 1000 times faster than the previous version, so it's probably something specialized and proprietary.

  • 0
    And it does the arbitrary precision with fractions? 2011-02-03
  • 2
    yes, here's the output I got from the code above -- http://pastebin.com/YY5pptcD 2011-02-03
  • 0
    Unbelievable! Can you measure the time for a matrix with larger integers, e.g. ±100000 (as was done in your link to wolfram.com) and ±2 000 000 000? 2011-02-03
  • 0
    Also, http://reference.wolfram.com/mathematica/ref/LinearSolve.html - can I ask you to change your example (I have no Mathematica)? Please add a Method option and try setting it to each of "CofactorExpansion", "DivisionFreeRowReduction", "OneStepRowReduction", "Cholesky", "Multifrontal" and "Krylov". The fastest one is the method I'm interested in. 2011-02-03
  • 3
    Out of those, "OneStepRowReduction" is the fastest. For a 300x300 matrix with entries in -2 billion..2 billion, Method->Automatic takes about 2 seconds; "OneStepRowReduction" I'm not sure, but more than a minute. It takes about 6 seconds on a 150x150 matrix with such entries, whereas Automatic takes 0.2 seconds. 2011-02-03
  • 1
    Yaroslav Bulatov, thanks. So Automatic selects some method which is not listed on reference.wolfram.com. :( 2011-02-03
  • 0
    That seems to be the case. They may be using a proprietary method which doesn't have a common name. "OneStepRowReduction" isn't that bad, however: about 20 seconds on my laptop for a 200x200 matrix with entries in the billions and 80% zeros. 2011-02-03
4

This kind of thing can be done by symbolic computation packages like Maple, Mathematica, Maxima, etc.

If you need a subroutine to do this that you'll call from a larger program, then the answer will depend a lot on the programming language that you're using and what kinds of licensing you're willing to work with.

  • 0
    I want to know a fast algorithm to implement myself or to integrate into a C/C++ program. 2011-02-03
  • 0
    Why do you suggest symbolic computation? 2011-02-03
  • 1
    You can integrate Mathematica functions into C programs using the MathLink interface. It's not free. Using Mathematica's library for your task I get answers with denominators 500 digits long, and it takes about half a second on my laptop. 2011-02-03
3

For the algorithm, straightforward Gaussian elimination should be fine. This part may actually be simpler than when dealing with floating-point numbers, since you don't have to worry about numerical stability.

Depending on your matrix, you might be able to do a bit better than Gaussian elimination; e.g. if your matrix is symmetric, you can use a Cholesky factorization. But 60% sparseness isn't that much in the scheme of things. If it were tridiagonal, banded, etc., then you could try some specialized methods.

You will need a good rational number library to handle the actual arithmetic operations. GNU MP should fit the bill; it claims to support rational numbers, though I've never used that part of it.

  • 0
    I think I need pivoting, because there can be a zero element on the diagonal (even in a non-singular matrix). 2011-02-02
  • 0
    @osgx - Sorry, you're correct. I edited my answer to reflect this. You still don't need some of the "extra" work that a floating-point implementation would have to worry about.2011-02-02
3

Dixon's algorithm is the one. It inverts the matrix in modular arithmetic, modulo some prime modulus P (this step is easy and exact). Then it iteratively solves a trivial system to get a p-adic representation of the solution, which can then be converted to a rational number. The method was described in "Dixon J. D. Exact solution of linear equations using P-adic expansions // Numer. Math., 40, 1982, P. 137-141.", but this article is offline.

There is an article about Dixon's algorithm and its modifications online: http://www2.isye.gatech.edu/~dsteffy/papers/OSLifting.pdf

The author also has all the described algorithms implemented and published at http://www2.isye.gatech.edu/~dsteffy/rational/
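A minimal sketch of the lifting idea in Python (the names and the fixed lifting-step count are my own illustration; a real implementation would derive the step count from Hadamard's bound and use optimized modular routines, as the paper above discusses):

```python
from fractions import Fraction
from math import isqrt

def mat_inv_mod(A, p):
    """Invert an integer matrix modulo prime p by Gauss-Jordan elimination."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(M[r][j] - f * M[c][j]) % p for j in range(2 * n)]
    return [row[n:] for row in M]

def rational_reconstruction(a, m):
    """Recover n/d with n/d ≡ a (mod m) via the extended Euclidean algorithm."""
    bound = isqrt(m // 2)
    r0, r1, t0, t1 = m, a % m, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    return Fraction(r1, t1)  # Fraction normalizes the sign and reduces

def dixon_solve(A, b, p=2**31 - 1, steps=30):
    """Solve A x = b exactly: invert A mod p once, lift the solution
    p-adically, then reconstruct rationals.  Assumes p is prime and does
    not divide det(A); 'steps' is a crude fixed lift count for this sketch
    (p^steps must comfortably exceed the Hadamard-type bounds on the answer)."""
    n = len(A)
    C = mat_inv_mod(A, p)
    x, r, pk = [0] * n, list(b), 1
    for _ in range(steps):
        xi = [sum(C[i][j] * r[j] for j in range(n)) % p for i in range(n)]
        x = [x[i] + pk * xi[i] for i in range(n)]
        # the residual is exactly divisible by p, so this stays an integer
        r = [(r[i] - sum(A[i][j] * xi[j] for j in range(n))) // p
             for i in range(n)]
        pk *= p
    return [rational_reconstruction(v, pk) for v in x]
```

For example, `dixon_solve([[2, 1], [1, 3]], [1, 1])` recovers [2/5, 1/5] exactly. The attraction is that all the heavy per-entry work happens on word-sized residues mod p; big numbers appear only in the final accumulation and reconstruction.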

Another interesting approach is Montante's method (http://en.wikipedia.org/wiki/Montante's_method), a Gauss variant for integer-coefficient matrices. This variant doesn't create rational numbers in the matrix while reducing it to triangular form. Take a look at the last picture at that link.
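The idea can be sketched in a few lines of Python (the Bareiss-style pivot formula below is the heart of Montante's method; division by the previous pivot is always exact over the integers, so no fractions appear until the very last step; the zero-pivot swap assumes a nonsingular matrix):

```python
from fractions import Fraction

def montante_solve(A, b):
    """Fraction-free (Montante/Bareiss) elimination on the augmented matrix.

    All intermediate entries remain integers; at the end every diagonal
    entry equals ±det(A) and the last column holds ±det(A) * x."""
    n = len(A)
    M = [list(map(int, A[i])) + [int(b[i])] for i in range(n)]
    prev = 1  # previous pivot; divisions by it are exact
    for k in range(n):
        if M[k][k] == 0:  # bring up a nonzero pivot
            piv = next(r for r in range(k + 1, n) if M[r][k] != 0)
            M[k], M[piv] = M[piv], M[k]
        for i in range(n):
            if i == k:
                continue
            for j in range(n + 1):
                if j != k:
                    M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return [Fraction(M[i][n], M[i][i]) for i in range(n)]
```

On the 3x3 system A = [[1, 2, 3], [2, 5, 3], [1, 0, 8]], b = [1, 1, 1] every final diagonal entry is -1 = det(A), and the solution [-15, 5, 2] comes out without any intermediate fractions.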

2

Don't know if it's the best, but Scipy's linalg.solve works. You could look at the source code...

EDIT: for the cases that I tried, scipy.linalg.solve(A,b) * scipy.linalg.det(A) gives an exact result, i.e. a vector corresponding to det(A) * x, so the actual fractional result is each element divided by that common denominator.

  • 1
    Is it for an exact solution (fractions in the answer)? 2011-02-02
  • 0
    Does it give the result as an fp number like 123.4235456 or as a fraction like 12142938902/119777? 2011-02-02
  • 1
    Does it work for large random matrices (with integer coefficients)? 2011-02-02
  • 1
    See this page: http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html. The determinant would give you the 1/25 and linalg.solve would give you the part with decimals. Multiplying the two gives you the [-232, 129, 19] part. It is not stored as a fraction explicitly, but that wouldn't make much sense given they are all over a common denominator. As for large random matrices, they may not all have solutions. 2011-02-02
  • 0
    Even the determinant in that document is computed numerically with floating point: `>>> A = mat('[1 3 5; 2 5 1; 2 3 8]') >>> linalg.det(A) -25.000000000000004`. So it is not fraction arithmetic. 2011-02-02
  • 1
    I'm suggesting a way to get around your problem, and as far as I can see it works. Did you try the multiplication det(A) * x? Did you notice that there is no decimal part? That very small decimal is not real; it's a residual from floating-point computations, and it disappears when you do the multiplication. 2011-02-02
  • 1
    Yes, this residual is small, BUT THE COMPUTATION IS INEXACT (scipy uses numerical computation). FP can store up to 19 meaningful decimal digits, and I need to solve systems with greater precision (both integers in the fractions from a real example have up to 50 digits). So I am asking about an algorithm capable of solving the system in fractions (in precise rationals), not in imprecise floats. 2011-02-02
  • 1
    @osgx: I don't mind that you disagree with me, and you don't need to pick my solution. I just don't find your argument effective in disproving my idea...2011-02-02
  • 0
    +1 since Python provides arbitrary-precision arithmetic.2011-02-02
  • 1
    wok, Python does, but which arithmetic is used by SciPy? At the moment I'm looking at the sources of linalg.solve. It uses `GESV` from LAPACK: `" gesv, = get_lapack_funcs(('gesv',), (a1,b1))"`. That is the sgesv or dgesv function from LAPACK, which uses float or double limited-precision math. Even the built-in tests in scipy use "assert_array_almost_equal(numpy.dot(a,x),b)" for testing "linalg.solve" ("scipy›linalg›tests›test_basic.py"). This scipy is easy to use, but it doesn't meet my major requirement: an EXACT solution of the system. 2011-02-03
  • 1
    @Benjamin, you answered a different question. I tried to highlight in my question that I don't need a usual system solver, but an EXACT one. Anyone with deep knowledge of LINPACK will get it from my phrase about `dgesv`. Your solution is dgesv-based. 2011-02-03
  • 0
    Is it an exact result? >>> a = [[13433352, 133373426],[193332345, 23333354]] >>> b = [-47343325, 53333245] >>> x = solve(a,b) >>> dot(a,x) array([-47343325.00000001, 53333244.99999999])2011-02-03
  • 0
    @Benjamin, "linalg.solve" uses floats to compute the matrix. But maybe there are some other methods of solving matrices in scipy which use fractional types for the computation? 2011-02-03
  • 0
    You could also check out SymPy or Sage for symbolic math similar to Maple/Mathematica. 2011-02-04
1

Use Gaussian elimination with only integers. Don't do the division; instead, to zero a particular cell in the matrix, use the Euclidean algorithm. You may need a bignum library.

This is almost equivalent to Gaussian elimination over rational numbers, but simpler.
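A sketch of this trick in Python (my own illustration: the extended gcd supplies a unimodular 2x2 row transform that zeroes the target cell without any division, and Python's built-in integers play the role of the bignum library; fractions appear only in the final back-substitution):

```python
from fractions import Fraction

def egcd(a, b):
    """Extended Euclid: returns (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = egcd(b, a % b)
    return (g, v, u - v * (a // b))

def integer_gauss_solve(A, b):
    """Integer-only forward elimination on the augmented matrix [A | b].

    Each subdiagonal entry is zeroed by a unimodular (determinant ±1)
    combination of two rows built from the extended gcd, so no division
    happens during elimination."""
    n = len(A)
    M = [list(map(int, A[i])) + [int(b[i])] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            if M[i][k] == 0:
                continue
            g, u, v = egcd(M[k][k], M[i][k])
            p, q = M[k][k] // g, M[i][k] // g
            # new row k gets gcd g in column k, new row i gets 0 there
            M[k], M[i] = (
                [u * M[k][j] + v * M[i][j] for j in range(n + 1)],
                [-q * M[k][j] + p * M[i][j] for j in range(n + 1)],
            )
    # exact back-substitution over the rationals
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = Fraction(M[i][n] - s, M[i][i])
    return x
```

Note the trade-off mentioned in the comment below: compared to Dixon's method, the entries of the triangularized matrix can still grow very long, even though every intermediate value stays an integer.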

  • 0
    How can I do Gaussian elimination in integers when I need to divide? Montante's method sounds simpler than using a gcd. Also, this method is slower than Dixon's, as it generates very long numbers in the process of elimination. 2011-02-16