H.Havermann> In 1985. I, for one, was inspired by his effort. In June 1999, Mathematica spewed out 20 million terms for me on my personal computer, and in October 2000, 53 million terms. And if I could up that number (alas, my current machine is seven years old), I would. ;)

_______________________________________________

es> What pi algorithm did you use: arctan, AGM, Ramanujan's 1/pi, etc.? Or did you just use whatever was built into Mathematica?

-- Gene

---------------------------

The relative cost of computing pi itself is insignificant compared to that of the c.f. conversion algorithm. Straightforward algorithms are hopeless (sketch 1 below). I am impressed that Mathematica has solved this, presumably with a divide-and-conquer approach designed around FFT multiplication.

Has anyone listed the largest terms in Hans's data? The GM (geometric mean)? The "density equivalent radix"? E.g., the lg(1+r) (Gauss-Kuzmin) distribution predicts the information in a typical term to be about one digit in base 11 (sketch 2 below). More testimony to the irrelevance of decimal!

I actually checked my value by converting to the 8.5M/8.5M rational approximant (grouping terms pairwise, bottom-up--the opposite of divide-and-conquer; sketch 3 below). This was on a Lisp machine with "unlimited" precision rational arithmetic. What actually came out of the Ramanujan series was round(pi*2^58000000). (And before that, an enormous rational with a multimillion-factorial gcd.)

I've probably bored you all before with the tale of how desk-checking managed to uncover a bug that eluded my programmed error-checking, with a (naive) probability of 2^-63.

--rwg

If pi bore any simple relationship to e, its c.f. would exhibit regular growth. But maybe we just haven't looked far enough. ;-)
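[Sketch 1] Why the straightforward conversion is hopeless: each c.f. term comes from one Euclidean step on the full-size numerator and denominator, and the operands shrink by only ~3.4 bits per term on average, so tens of millions of terms mean tens of millions of passes over a multimillion-digit integer. A minimal Python sketch of that naive loop (illustrative only -- the function name and the Fraction input are assumptions, not the code anyone here actually ran):

    from fractions import Fraction

    def cf_terms(x, n):
        # One Euclidean step per term: a = floor(p/q), then (p, q) <- (q, p mod q).
        # n terms from an N-bit rational cost on the order of n*N bit operations:
        # roughly 10^16 for n ~ 5*10^7 terms of pi at N ~ 2*10^8 bits. Hopeless.
        p, q = x.numerator, x.denominator
        terms = []
        while q and len(terms) < n:
            a, r = divmod(p, q)
            terms.append(a)
            p, q = q, r
        return terms

    print(cf_terms(Fraction(355, 113), 5))  # -> [3, 7, 16]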
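[Sketch 2] The "one digit in base 11" figure can be checked numerically, reading "density equivalent radix" as 2 to the entropy of the term distribution: under Gauss-Kuzmin, P(term = k) = lg(1 + 1/(k(k+2))), whose entropy is about 3.43 bits per term, and 2^3.43 ~ 10.8, call it radix 11. A quick check (Python; the 10^6 cutoff is an arbitrary truncation of the tail):

    from math import log2

    # Gauss-Kuzmin: P(term = k) = lg(1 + 1/(k*(k+2)))
    H = 0.0
    for k in range(1, 10**6):
        p = log2(1 + 1/(k*(k + 2)))
        H -= p * log2(p)
    print(H, 2**H)  # ~3.43 bits/term, "density equivalent radix" ~10.8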
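[Sketch 3] The pairwise grouping: each term a contributes the matrix [[a,1],[1,0]], and the product of all of them carries the convergent's p and q in its first column. Combining neighbors in rounds keeps the two multiplicands the same size, which is exactly the shape where fast (FFT-style) bignum multiplication pays off. A sketch of the idea (Python, whose bignums use Karatsuba rather than FFT, so this shows the access pattern, not Lisp-machine performance):

    def cf_to_rational(terms):
        # Each term a becomes the 2x2 matrix [[a,1],[1,0]], stored as a 4-tuple;
        # multiply neighbors pairwise in rounds, bottom-up.
        mats = [(a, 1, 1, 0) for a in terms]
        while len(mats) > 1:
            nxt = []
            for i in range(0, len(mats) - 1, 2):
                (a, b, c, d), (e, f, g, h) = mats[i], mats[i + 1]
                nxt.append((a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h))
            if len(mats) % 2:        # odd one out rides along to the next round
                nxt.append(mats[-1])
            mats = nxt
        p, _, q, _ = mats[0]
        return p, q                  # convergent p/q

    print(cf_to_rational([3, 7, 16]))  # -> (355, 113), the classic pi approximant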