Talk:Binary GCD algorithm

From Wikipedia, the free encyclopedia

The Implementation section was wrong

gcd(i32::MAX, i32::MIN + 1) gave the result 1. Since the two inputs are negatives of each other, the result should be their common magnitude, i32::MAX. gcd(i32::MIN, i32::MAX) even crashes the program with an integer overflow, because i32::MIN has no positive counterpart in i32. You cannot apply unsigned-integer algorithms to signed integers without knowing what's going on. The code didn't even compile without syntax warnings.
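For reference, a correct binary GCD on unsigned integers looks roughly like this (a minimal sketch in the spirit of the article's example, not its exact code; `binary_gcd` is an illustrative name):

```rust
// Binary GCD on unsigned 64-bit integers: strip common factors of 2,
// then repeatedly subtract the smaller odd value from the larger one.
fn binary_gcd(mut u: u64, mut v: u64) -> u64 {
    if u == 0 { return v; }
    if v == 0 { return u; }
    // gcd(u, v) shares exactly min(tz(u), tz(v)) factors of two,
    // which is the number of trailing zeros of (u | v).
    let shift = (u | v).trailing_zeros();
    u >>= u.trailing_zeros(); // u is now odd
    loop {
        v >>= v.trailing_zeros(); // v is now odd
        if u > v {
            std::mem::swap(&mut u, &mut v);
        }
        v -= u; // odd - odd = even, so the next shift makes progress
        if v == 0 {
            return u << shift;
        }
    }
}
```

On unsigned inputs there is no overflow hazard: subtraction of the smaller from the larger never wraps, which is exactly the property the signed version lost.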

Because people will use algorithms found on Wikipedia, I have removed that algorithm. I will attempt to rewrite it correctly. — Chai T. Rex (talk) 11:54, 12 July 2023 (UTC)

I have rewritten the algorithm to be correct and hidden the explanatory comment on how it avoids some branches, as it no longer applies. Improvements to the algorithm can be tested with the linked program. — Chai T. Rex (talk) 15:55, 12 July 2023 (UTC)
Yes, the implementation I originally wrote (then adapted for greater legibility) specifically took u64 (unsigned) integers as input. That was changed in rev. 1154862632, which introduced less legible and incorrect code, as you found out.
The revision's summary was “the previous code doesn't work, return 0 all the time”, which is incorrect (see the tests in the playground).
Given that, I'm going to revert the changes to the implementation section: correctly handling signed integers introduces complexity that, IMO, detracts from the didactic and illustrative purpose of an example implementation.
I'll add a note explaining this, and how to express signed-integer GCD in terms of unsigned GCD. nicoo (talk) 12:16, 4 February 2024 (UTC)
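The signed-to-unsigned reduction such a note describes could be sketched like this (illustrative names, not the article's code; the key point is that `i32::unsigned_abs` maps i32::MIN to 2_147_483_648 without overflowing, which is exactly the case `i32::abs` cannot handle):

```rust
// Binary GCD on unsigned 32-bit integers (illustrative helper).
fn gcd_u32(mut u: u32, mut v: u32) -> u32 {
    if u == 0 { return v; }
    if v == 0 { return u; }
    let shift = (u | v).trailing_zeros(); // common factors of 2
    u >>= u.trailing_zeros();
    loop {
        v >>= v.trailing_zeros();
        if u > v { std::mem::swap(&mut u, &mut v); }
        v -= u; // both are odd here, so v becomes even
        if v == 0 { return u << shift; }
    }
}

// Signed GCD expressed in terms of the unsigned one. The result is
// returned as u32 because gcd(i32::MIN, i32::MIN) = 2_147_483_648
// does not fit in an i32.
fn gcd_i32(a: i32, b: i32) -> u32 {
    // unsigned_abs never overflows, unlike i32::abs on i32::MIN.
    gcd_u32(a.unsigned_abs(), b.unsigned_abs())
}
```

This keeps the unsigned example simple while handling the edge cases reported above: gcd_i32(i32::MAX, i32::MIN + 1) yields i32::MAX, and gcd_i32(i32::MIN, i32::MAX) yields 1 without overflow.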
PS: Sorry for the huuuge delay in reacting: I apparently missed all the watched-pages notifications due to the UI changes. nicoo (talk) 12:19, 4 February 2024 (UTC)