Well, I'm clearly giving up anything about gcd(0,0), so it's for all nonzero integers a and b...

gcd(a,b) = gcd(a-b,b)
gcd(a,b) = gcd(a, b%a)  [Euclidean algorithm step] [% = remainder]
gcd(a,b) = u*a + v*b for some integers u and v
for all positive integers c, gcd(ac,bc) = c * gcd(a,b)
lcm(a,b) * gcd(a,b) = |a*b|
gcd(a,b) > 0  [so we can divide anything above by gcd(a,b), e.g., gcd(a/gcd(a,b), b/gcd(a,b)) = 1]

What do you get and give up by defining gcd(0,0) = 0 ?

I can see that for some algorithms, it's fine to define gcd(0,0) = 0 rather than leave it undefined, just as it's fine to define n/0 = n in some cases, but you might sometimes want to throw an exception when you evaluate it so you don't naively wander off into a computation where you end up with garbage.
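Purely as an illustration of the two options being weighed here (my sketch, not anything posted in the thread; the names gcd_total and gcd_strict are made up), this is how the choice looks in Python: one version quietly adopts the gcd(0,0) = 0 convention, the other throws an exception so a computation can't wander off into garbage.

```python
def gcd_total(a, b):
    """Euclidean algorithm with the 'total' convention gcd(0,0) = 0
    (the convention Common Lisp and Maxima use)."""
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a

def gcd_strict(a, b):
    """Same algorithm, but gcd(0,0) is left undefined:
    raise instead of quietly returning 0."""
    if a == 0 and b == 0:
        raise ArithmeticError("gcd(0,0) is undefined")
    return gcd_total(a, b)

if __name__ == "__main__":
    assert gcd_total(0, 0) == 0        # the 'defined' convention
    assert gcd_total(0, 12) == 12      # gcd(0,b) = |b| under either convention
    try:
        gcd_strict(0, 0)               # the 'undefined' convention
    except ArithmeticError as e:
        print(e)
```

The only real design difference is where the corner case surfaces: gcd_total makes the function total by answering 0, while gcd_strict reports the corner case immediately instead of letting it propagate into later computation.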
On 2014-02-10 12:32, Henry Baker wrote:
What are the rules/axioms/theorems that you want to preserve, and which ones do you want to give up ?
At 09:29 AM 2/10/2014, Mike Speciner wrote:
I propose to leave it undefined, like 0/0 or 0**0.
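As a concrete data point on those two analogies (my addition, not part of the thread): real systems don't treat them uniformly; Python, for instance, errors on 0/0 but defines 0**0 to be 1.

```python
# How one concrete system resolves the analogous edge cases (illustration only).
try:
    0 / 0                    # division by zero raises rather than returning a value
except ZeroDivisionError as e:
    print("0/0 ->", e)       # 0/0 -> division by zero

print("0**0 ->", 0 ** 0)     # Python defines 0**0 == 1
```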
On 2014-02-10 12:23, Henry Baker wrote:
Ok, what do you propose for gcd(0,0) ?
At 09:20 AM 2/10/2014, Mike Speciner wrote:
And, as I recall, many early computers said n/0 = n.
If gcd(0,0) = 0, then it is not universally true that gcd(a,b)=gcd(a-b,b).
On 2014-02-10 11:06, Henry Baker wrote:
At 07:08 AM 2/10/2014, Mike Speciner wrote:
gcd(0,0) = 0 ???
I always thought a positive integer was prime iff it had exactly two positive integer divisors.

Common Lisp & Maxima say gcd(0,0)=0.
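For comparison (my note, not something from the thread): Python's standard library makes the same choice as Common Lisp and Maxima here.

```python
import math

# math.gcd follows the same convention: gcd(0,0) = 0, and gcd(0,n) = |n|.
print(math.gcd(0, 0))   # 0
print(math.gcd(0, 12))  # 12
```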