I understand how and why the algorithm works, but I am confused by a specific proof of it. I know I have asked a similar question before, but that question was about the proof of the algorithm itself; this question asks for an explanation of one specific proof.
If d divides both a and b, and d = ax + by for some integers x and y, then necessarily d = gcd(a,b)
Proof: By the first two conditions, d is a common divisor of a and b so that it cannot exceed the greatest common divisor; that is, d ≤ gcd(a,b). On the other hand, since gcd(a,b) is a common factor of a and b, it must also divide ax + by = d, which implies gcd(a,b) ≤ d. Putting these together, d = gcd(a,b).
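Written out, the second step of the quoted proof is just a divisibility chain (my own restatement, writing g for gcd(a,b) and assuming a, b, and d are positive):

$$ g \mid a \ \text{and}\ g \mid b \;\Longrightarrow\; g \mid (ax + by) = d \;\Longrightarrow\; g \le d. $$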
I understand the first part of the proof (d cannot exceed gcd(a,b)); however, I am confused by the second part. It states that d cannot be less than gcd(a,b), but to me it seems it can. Won't any common divisor (not just the greatest one) divide ax + by? For example, let a be 4, b be 12, x be -2, and y be 1.
Therefore you get: (4)(-2) + (12)(1) = (-8) + (12) = 4.
4 (the greatest common divisor) is not the only number that divides it. Other common divisors, such as 2, can also divide ax + by.
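To double-check this, here is a quick sketch in Python (my own illustration, not part of the quoted proof) that lists the common divisors of 4 and 12 and confirms that each of them divides ax + by = 4:

```python
# Example from above: a = 4, b = 12, x = -2, y = 1.
a, b, x, y = 4, 12, -2, 1
value = a * x + b * y  # (4)(-2) + (12)(1) = 4

# Every common divisor of a and b, not just the greatest one.
common_divisors = [d for d in range(1, min(a, b) + 1)
                   if a % d == 0 and b % d == 0]
print("common divisors:", common_divisors)  # [1, 2, 4]
for d in common_divisors:
    # Each common divisor does indeed divide ax + by.
    print(d, "divides ax + by:", value % d == 0)  # True for 1, 2, and 4
```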
Therefore the greatest common divisor is not the only value that fulfills the second part of the proof (so it seems d does not have to be at least gcd(a,b) in order to divide ax + by). What am I misunderstanding here?