Robert M. Corless
Thu Feb 29 16:42:01 PST 1996
Some time after I wrote ``Continued Fractions and Chaos'' I remembered that there is an extensive analysis of the connexion between continued fractions and Euclid's algorithm for computing GCDs in Knuth's The Art of Computer Programming, vol. 2 (Seminumerical Algorithms). Section 4.5.3 in particular details the connexion and uses it to analyse the complexity of the Euclidean algorithm.
Knuth's description of the connexion is far superior to mine, and I urge the reader to consult Knuth's book.
Here is a brief Maple version of the algorithm, which assumes with no loss of generality that its input is two positive integers a and b with a > b.
    k := 1;  r[0] := a;  r[1] := b;
    while r[k] > 0 do
        n[k] := floor( r[k-1]/r[k] );      # partial quotient
        r[k+1] := irem( r[k-1], r[k] );    # integer remainder (irem, not rem,
        k := k + 1;                        # which is for polynomials)
    od;
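For instance (this worked example is mine, not Knuth's), running the loop on a = 13 and b = 8 produces the quotients 1, 1, 1, 1, 2 and the GCD 1:

    a := 13:  b := 8:
    k := 1:  r[0] := a:  r[1] := b:
    while r[k] > 0 do
        n[k] := floor( r[k-1]/r[k] );
        r[k+1] := irem( r[k-1], r[k] );
        k := k + 1;
    od:
    seq( n[i], i = 1 .. k-1 );    # 1, 1, 1, 1, 2
    r[k-1];                       # 1 = gcd(13, 8)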
Using the relation

    r[k-1] = n[k]*r[k] + r[k+1],      0 <= r[k+1] < r[k],

or equivalently r[k-1]/r[k] = n[k] + 1/( r[k]/r[k+1] ), it's easy to see that the above do-loop simultaneously computes the entries n[k] in the simple continued fraction for a/b and performs the Euclidean algorithm to compute the GCD of a and b (which will be the last nonzero remainder, r[k-1] once r[k] = 0). See Section 4.5.3 of Knuth's book for more details.

Additionally, it turns out that my original statements in ``Continued Fractions and Chaos'' had it backward: it is the properties of continued fractions that are used to prove that, in the worst case, the Euclidean algorithm terminates after at most ceil( log[phi]( sqrt(5)*b ) ) steps, where phi is the golden ratio, phi = (1 + sqrt(5))/2. In ``Continued Fractions and Chaos'', I claimed that the running time of the Euclidean algorithm gave us this result for continued fractions (perhaps one could prove it this way, as I thought that I had). This is a result that Knuth attributes to Lamé from 1845, so my rather blasé `it is easy to see' sounds rather weak.
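As a sanity check on the correspondence (a sketch of mine; convert/confrac is Maple's built-in continued-fraction conversion), the quotients n[k] computed above agree with Maple's own expansion of a/b:

    convert( 13/8, confrac );    # [1, 1, 1, 1, 2], the same quotients as before

and unrolling the relation gives 13/8 = 1 + 1/(1 + 1/(1 + 1/(1 + 1/2))).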
However, I think at least the basic idea really is `easy to see'. The basic idea I had in mind when writing ``Continued Fractions and Chaos'', which is the same as that used by Knuth (I haven't seen Lamé's paper so I don't know if Knuth made improvements), is that the algorithm goes slowest when the quotients that occur are 1 every time: r[k-1] = n[k]*r[k] + r[k+1] with n[k] = 1, that is, r[k-1] = r[k] + r[k+1]. Read from the last remainder back to the first, this is the Fibonacci recurrence, so the worst-case inputs are consecutive Fibonacci numbers; this gives us the bound above. According to Knuth, and I believe him, Lamé's result is the first `applied' use of Fibonacci numbers in the literature.
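Here is a quick empirical check (my sketch, not code from Knuth's book), counting division steps on consecutive Fibonacci numbers and comparing with the bound quoted above:

    # Count Euclidean division steps, then compare with
    # the bound ceil( log[phi]( sqrt(5)*b ) ).
    steps := proc(u, v) local x, y, t, c;
        x := u;  y := v;  c := 0;
        while y > 0 do t := irem(x, y);  x := y;  y := t;  c := c + 1 od;
        c
    end:
    phi := (1 + sqrt(5))/2:
    with(combinat, fibonacci):
    for m from 3 to 12 do
        printf( "b = %3d: %2d steps, bound %2d\n", fibonacci(m),
                steps( fibonacci(m+1), fibonacci(m) ),
                ceil( evalf( log[phi]( sqrt(5)*fibonacci(m) ) ) ) );
    od: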
The average-case complexity of Euclid's algorithm, on the other hand, is formidably difficult (I had not known that), and was not fully resolved until Knuth himself put the final nail in, in 1976. Further, there are intriguing connexions with Gauss' work and ergodic theory, which I mention later on in ``Continued Fractions and Chaos''; on p. 355 of Knuth's book we see the Gauss measure being used. More on this later, though.
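As a small taste of that connexion (again a sketch of mine, not from Knuth's book): the Gauss measure predicts that a partial quotient equals 1 with probability log[2](4/3), about 0.415, and the quotients of random rationals show roughly that frequency:

    # Estimate how often a partial quotient equals 1 and compare
    # with the Gauss-measure prediction log[2](4/3).
    die := rand(1 .. 10^6):
    ones := 0:  total := 0:
    for i from 1 to 500 do
        L := convert( die()/die(), confrac );
        ones := ones + nops( select( x -> evalb(x = 1), L ) );
        total := total + nops( L );
    od:
    evalf( ones/total );     # roughly 0.41 in my runs
    evalf( log[2](4/3) );    # 0.4150...

Now back to the main paper.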