
I just asked this question on stackoverflow.com and had it closed before I could get any reasonable help, with the suggestion to move it to a math site. I don't understand the math, and don't speak the math speak, and need an algorithm, not simply math jargon for the answer.

With a, b, and c as known values, % being the typical computer modulus operator, a and b relatively prime, and a*x % b = c, how do I compute a legitimate x? I know that x exists; that's how c came into being. I just need to solve for x.

Someone on stackoverflow pointed me to wikipedia on Linear Diophantine equations for a solution to ax + by = c where x and y are integers (which is almost a re-write of ax % b = c, I need ax - by = c), but I don't know what half this stuff means, or how to implement it in code.

"Let g be the greatest common divisor of a and b. Both terms in ax + by are divisible by g; therefore, c must also be divisible by g, or the equation has no solutions. By dividing both sides by c/g, the equation can be reduced to Bezout's identity sa + tb = g where s and t can be found by the extended Euclidean algorithm."

Ok, that's great in theory... but. How do I use such a thing to find s and t? And even further, how would I use s to find x?

Thanks.

  • Comment (2011-04-17): @Steven: Multiply a x + b y = 1 by c to get ac x + bc y = c, so (ac x % b) = c. See my answer for a specific example.
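To see the comment's point concretely, here is a tiny check in Python (the numbers a = 3, b = 7, c = 4 are my own illustration, not from the thread):

```python
# Hypothetical mini-example of the comment's scaling trick:
# with a = 3, b = 7 (coprime), x = 5 solves a*x % b == 1,
# and multiplying that solution by c (mod b) solves a*x % b == c.
a, b = 3, 7
x1 = 5                 # 3*5 = 15 = 2*7 + 1, so a*x1 % b == 1
assert a * x1 % b == 1

c = 4
x = (x1 * c) % b       # scale the solution by c, reduce mod b
assert a * x % b == c  # 3*6 % 7 == 18 % 7 == 4
```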

3 Answers


Here it is as C-style code.

You can see the Euclidean algorithm at work: p and q get reduced while we simultaneously keep track of both p and q as linear combinations of a and b. In the end this gives us exactly the numbers we need. The comments should allow you to understand it enough to debug it in whatever your favorite language is. The assert statements are basically asserting that c indeed comes from an a*x%b calculation with the given a and b.

I will assume a>=0 and b>0.

    if (a == b) {
      assert(c == 0);
      return 0; /* any value of x satisfies a*x % b == c in this case */
    }
    int pa = 1, pb = 0, p = a;  /* invariant: p == pa*a + pb*b */
    int qa = 0, qb = 1, q = b;  /* invariant: q == qa*a + qb*b */
    int m;
    while (p > 0 && q > 0) {
      if (q > p) { /* might be false on first pass, but then always true */
        m = q / p;  /* integer arithmetic, assuming positive p, q */
        q -= m * p;
        qa -= m * pa;
        qb -= m * pb;
      }
      if (q > 0) { /* q must now be less than p */
        m = p / q;
        p -= m * q;
        pa -= m * qa;
        pb -= m * qb;
      }
    }
    /* now one of p or q is zero, and the other is the gcd */
    assert(c % (p + q) == 0); /* c must be a multiple of the gcd, p + q */
    m = c / (p + q);
    if (p > 0) /* p == pa*a + pb*b and m*p == c, so c == m*pa*a + m*pb*b, hence x = m*pa */
      return pa > 0 ? m * pa % b : b - (m * -pa) % b;
    else
      return qa > 0 ? m * qa % b : b - (m * -qa) % b;
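For readers who would rather test this in a scripting language, here is a straightforward Python transcription of the routine above (a sketch under the same assumptions: a >= 0, b > 0, and c actually arose from some a*x % b; the function name is mine):

```python
def solve_axmodb(a, b, c):
    """Find x with a*x % b == c, assuming a >= 0, b > 0, and a solution exists."""
    if a == b:
        assert c == 0
        return 0  # any x works, since a*x % b == 0 here
    # Invariants: p == pa*a + pb*b and q == qa*a + qb*b throughout.
    pa, pb, p = 1, 0, a
    qa, qb, q = 0, 1, b
    while p > 0 and q > 0:
        if q > p:
            m = q // p
            q -= m * p; qa -= m * pa; qb -= m * pb
        if q > 0:
            m = p // q
            p -= m * q; pa -= m * qa; pb -= m * qb
    g = p + q             # one of p, q is zero; the other is gcd(a, b)
    assert c % g == 0     # otherwise no solution exists
    m = c // g
    xa = pa if p > 0 else qa
    # Python's % already yields a value in [0, b), so no sign branch is needed
    return m * xa % b

print(solve_axmodb(80, 62, 6))  # -> 21, and indeed 80*21 % 62 == 6
```

Note the one deviation from the C version: Python's `%` on a negative operand already returns a nonnegative remainder, so the `pa > 0 ? ... : ...` sign correction is unnecessary.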

Here, for example, is the extended Euclidean algorithm in Python. Apply that to your numbers $a$ and $b$ to get what they call $u_1$, $u_2$, $u_3$, corresponding to your $s$, $t$, $g$. Then $x=s \cdot (c/g)$, where $c/g$ must be an integer since you knew that there was a solution; reduce $x$ mod $b$ if you want the least nonnegative solution.
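The linked code isn't reproduced in this thread, but a minimal extended-Euclid sketch along those lines (returning $s$, $t$, $g$ with $sa + tb = g$; the function name is mine) looks like this:

```python
def ext_gcd(a, b):
    """Return (s, t, g) with s*a + t*b == g == gcd(a, b)."""
    s, t, g = 1, 0, a      # invariant: s*a  + t*b  == g
    s1, t1, g1 = 0, 1, b   # invariant: s1*a + t1*b == g1
    while g1 != 0:
        m = g // g1
        s, s1 = s1, s - m * s1
        t, t1 = t1, t - m * t1
        g, g1 = g1, g - m * g1
    return s, t, g

s, t, g = ext_gcd(80, 62)
print(s, t, g)             # -> 7 -9 2, i.e. 7*80 - 9*62 == 2
c = 6
x = s * (c // g) % 62      # x = s*(c/g), reduced mod b
print(x)                   # -> 21
```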


Below is an example of the extended Euclidean algorithm from one of my prior posts. It computes the Bezout representation for $\rm\:gcd(80,62) = 2\ $ viz. $\ 7\cdot 80 - 9\cdot 62\ = 2\:.\:$ Now, to solve $\rm\ 80\ x\ mod\ 62\ =\ c\ $ simply multiply the Bezout equation by $\rm\:c/2\:$ if $\rm\:c\:$ is even (if $\rm\:c\:$ is odd there is no solution). For example, for $\rm\:c = 6\:$ we scale Bezout by $\rm\:3\:$ yielding $\ 21\cdot 80 - 27\cdot 62\ =\ 6\ $ therefore $\rm\ 21\cdot 80\ mod\ 62\ =\ 6\:.$ This Bezout scaling method works generally.
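A quick numeric check of the scaling step described above:

```python
# Bezout representation: 7*80 - 9*62 == 2 == gcd(80, 62)
assert 7 * 80 - 9 * 62 == 2

# Scale by c/2 = 3 for c = 6: 21*80 - 27*62 == 6
assert 21 * 80 - 27 * 62 == 6
assert 21 * 80 % 62 == 6   # hence x = 21 solves 80*x mod 62 = 6
```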

For example, to solve mx + ny = gcd(m,n), one begins with two rows [m 1 0], [n 0 1], representing the two equations m = 1m + 0n, n = 0m + 1n. Then one executes the Euclidean algorithm on the numbers in the first column, doing the same operations in parallel on the other columns. Here is an example: d = x(80) + y(62) proceeds as:

                          in equation form   | in row form
                        ---------------------+------------
                        80 =   1(80) + 0(62) | 80   1   0
                        62 =   0(80) + 1(62) | 62   0   1
      row1 -   row2 ->  18 =   1(80) - 1(62) | 18   1  -1
      row2 - 3 row3 ->   8 =  -3(80) + 4(62) |  8  -3   4
      row3 - 2 row4 ->   2 =   7(80) - 9(62) |  2   7  -9
      row4 - 4 row5 ->   0 = -31(80) +40(62) |  0 -31  40

The row operations above are those resulting from applying the Euclidean algorithm to the numbers in the first column, namely 80, 62, 18, 8, 2 (rows 1 through 5), the Euclidean remainder sequence. For example, 62 - 3(18) = 8, the 2nd step in the Euclidean algorithm, becomes row2 - 3 row3 = row4 on the identity-augmented matrix. In effect we have row-reduced the first two rows to the last two. The matrix effecting the reduction is in the bottom right corner. It starts as the identity, and is multiplied by each elementary row operation matrix, hence it accumulates the product of all the row operations, namely:

        [  7 -9] [80  1  0]   [2   7  -9]
        [-31 40] [62  0  1] = [0 -31  40]

The 1st row is the particular solution: 2 = 7(80) - 9(62). The 2nd row is the homogeneous solution: 0 = -31(80) + 40(62), so the general solution is any linear combination of the two:

        n row1 + m row2  ->  2n = (7n - 31m) 80 + (40m - 9n) 62

The same row/column reduction techniques tackle arbitrary systems of linear Diophantine equations. Such techniques generalize easily to similar coefficient rings possessing a Euclidean algorithm, e.g. polynomial rings F[x] over a field, or the Gaussian integers Z[i]. There are many analogous interesting methods; search on keywords: Hermite / Smith normal form, invariant factors, lattice basis reduction, continued fractions, Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.
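The tableau method can be sketched in code by keeping each row as a triple (r, x, y) with r = x*m + y*n and repeatedly subtracting a multiple of the lower row from the upper one, all three columns in parallel (a hedged sketch of the method; the function name is mine):

```python
def bezout_rows(m, n):
    """Row-reduce [m 1 0], [n 0 1] as in the tableau; return the row
    [g, x, y] with x*m + y*n == g == gcd(m, n)."""
    row1 = [m, 1, 0]   # represents m = 1*m + 0*n
    row2 = [n, 0, 1]   # represents n = 0*m + 1*n
    while row2[0] != 0:
        q = row1[0] // row2[0]
        # rowk - q * row(k+1), applied to all three columns in parallel
        row1, row2 = row2, [row1[i] - q * row2[i] for i in range(3)]
    return row1

print(bezout_rows(80, 62))  # -> [2, 7, -9], i.e. 2 = 7(80) - 9(62)
```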