[math-fun] Algorithms for high precision computation of things
Hello all!
Many months ago, maybe even a couple years ago (I lost count), I wrote a high precision floating point arithmetic library in C. Here's a sample demo of it:
WINDOWS: http://dl.dropbox.com/u/734346/rpn.exe [open a command prompt, cd to the right directory, then type "rpn" to see the options]
LINUX (32-bit): http://dl.dropbox.com/u/734346/rpn [should work on 64-bit]
The LINUX executable might run on OS X. In fact, I'd like to know if it does, if anyone could test it!
I can assure you it's safe. It was written in ANSI C by me in a very portable fashion. No file writing, no system calls, etc. If you think it's dodgy, then feel free to disassemble it or something.
It's an RPN calculator. Basic usage is along the lines of
$ ./rpn -p1000 1 tta 4 x
which does 4*atan(1) (tta = trig tan arc; x = multiply) to 1000 digits.
Anyway, you don't need to run it, and it's not the point of this email. The algorithms I use are here:
http://www.symbo1ics.com/files/algs.pdf [Section 3]
Any corrections would be helpful.
Anyway, I'm looking:
1. to include more algorithms that are known to work well for computing special functions and whatever to high precision.
2. to include improvements to what I have at the moment, algorithmically. Is the way I compute sine slow? Are there faster ways?
3. for criticism on the way I represent big floats. I don't know if there is a better way to express arbitrary precision floats than what I have. I prefer *not* to go the "do everything with rationals then divide" route, because that's memory intensive and has a good amount of overhead. Of course some algorithms work best by using rationals, but I don't want to use rationals entirely across all algorithms.
Mostly I'd like to improve what I have, but also document things I don't yet have, like Gosper's theta function products, and whatever.
Thanks all!
Robert
On Thu, Apr 7, 2011 at 4:55 PM, Henry Baker <hbaker1@pipeline.com> wrote:
I haven't looked at your alg document, but there are some pretty good GNU open source multiple precision codes in C available that have already been tested to some very significant extent.
Yes, I am well aware (GMP, MPFR, ...). In fact, the name of my library is "Graham Numerical Package" or GNP, which is a play on GMP. :) My main goal was to make an easily compilable/embeddable little system. Mostly a hobby thing.
What might be more interesting would be a high-quality implementation of Gosper's arbitrary precision continued fraction algorithms which return -- not a number -- but a number 'pump'. You pump the handle & another set of digits comes out.
Actually, I began to write little bits of code to do this, albeit in Haskell. Since Haskell is an entirely lazy language, it can handle infinite lists and whatnot.

Cf> 20 `termsOf` cfSqrt 39853
[199,1,1,1,2,1,1,4,5,1,2,1,6,36,6,1,2,1,5,4]
Cf> 50 `termsOf` cfSqrt 39853
[199,1,1,1,2,1,1,4,5,1,2,1,6,36,6,1,2,1,5,4,1,1,2,1,1,1,398,1,1,1,2,1,1,4,5,1,2,1,6,36,6,1,2,1,5,4,1,1,2,1]
Cf> prettyShow $ 50 `termsOf` cfSqrt 39853
"199 + 1/(1 + 1/(1 + 1/(1 + 1/(2 + 1/(1 + 1/(1 + 1/(4 + 1/(5 + 1/(1 + 1/(2 + 1/(1 + 1/(6 + 1/(36 + 1/(6 + 1/(1 + 1/(2 + 1/(1 + 1/(5 + 1/(4 + 1/(1 + 1/(1 + 1/(2 + 1/(1 + 1/(1 + 1/(1 + 1/(398 + 1/(1 + 1/(1 + 1/(1 + 1/(2 + 1/(1 + 1/(1 + 1/(4 + 1/(5 + 1/(1 + 1/(2 + 1/(1 + 1/(6 + 1/(36 + 1/(6 + 1/(1 + 1/(2 + 1/(1 + 1/(5 + 1/(4 + 1/(1 + 1/(1 + 1/(2 + 1/1))))))))))))))))))))))))))))))))))))))))))))))))"
Cf> take 10 $ convergents $ 10 `termsOf` cfSqrt 39853
[(199,1),(200,1),(399,2),(599,3),(1597,8),(2196,11),(3793,19),(17368,87),(90633,454),(108001,541)]
Cf> map approx $ take 10 $ convergents $ 10 `termsOf` cfSqrt 39853
[199.0,200.0,199.5,199.666666666667,199.625,199.636363636364,199.631578947368,199.632183908046,199.632158590308,199.632162661738]

It only does cfSqrt, so it's not quite as interesting as having full Gosperfractionology. FWIW, here's the Haskell program:

<<<BEGIN
module Cf where

approx :: (Integer, Integer) -> Double
approx (x,y) = (fromInteger x)/(fromInteger y)

prettyShow [] = ""
prettyShow [n] = (show n)
prettyShow (x : xs@(_:[]))  = (show x) ++ " + 1/"  ++ (prettyShow xs)
prettyShow (x : xs@(_:_:_)) = (show x) ++ " + 1/(" ++ (prettyShow xs) ++ ")"

convergents [] = []
convergents [x] = [(x,1)]
convergents (a_0 : a_1 : as) = pqs
  where pqs = (p_0, q_0) : (p_1, q_1) : zipWith3 nextConvergent pqs (tail pqs) as
        p_0 = a_0
        q_0 = 1
        p_1 = a_1 * a_0 + 1
        q_1 = a_1
        nextConvergent (p_i, q_i) (p_j, q_j) a_k = (p_k, q_k)
          where p_k = a_k * p_j + p_i
                q_k = a_k * q_j + q_i

cfZero :: [Integer]
cfZero = repeat $ fromIntegral 0

isqrt = floor . sqrt . fromInteger

cfSqrt :: Integer -> [Integer]
cfSqrt n
  | (n == (isqrt n)^2) = [isqrt n] ++ cfZero
  | otherwise          = [a_i | (_, _, a_i) <- mdas]
  where mdas = iterate nextTriple (m_0, d_0, a_0)
        m_0 = 0
        d_0 = 1
        a_0 = truncate $ sqrt $ fromIntegral n
        nextTriple (m_i, d_i, a_i) = (m_j, d_j, a_j)
          where m_j = d_i*a_i - m_i
                d_j = (n - m_j*m_j) `div` d_i
                a_j = (a_0 + m_j) `div` d_j

-- nice idiom
termsOf = take
eachTermOf = map
<<<END
If this were integrated with Maxima, it would be pretty cool.
At 02:45 PM 4/7/2011, quad wrote:
[...]
* quad <quadricode@gmail.com> [Apr 08. 2011 08:20]:
Hello all!
Many months ago, maybe even a couple years ago (I lost count), I wrote a high precision floating point arithmetic library in C. Here's a sample demo of it:
[...]
Anyway, you don't need to run it, and it's not the point of this email. The algorithms I use are here:
http://www.symbo1ics.com/files/algs.pdf [Section 3]
Any corrections would be helpful.
The iterations for reciprocal and a-th roots on page 8 should be written in a form that keeps the term (1 - x*y^a) intact. For example, your iteration for the reciprocal

  y --> y*(2 - x*y)

should be

  y --> y + y*(1 - x*y)

See p.567ff of the fxtbook.
Anyway, I'm looking:
1. to include more algorithms that are known to work well for computing special functions and whatever to high precision.
2. to include improvements to what I have at the moment, algorithmically. Is the way I compute sine slow? Are there faster ways?
The "rectangular method" (p.658ff) might be something to check out: while its asymptotics aren't that fantastic, it works surprisingly well (fast!) in practice. Binary splitting (p.651ff) can be an option if you are able to work with integers.
[...]
Thanks all!
Robert
regards, jj
participants (3)
-
Henry Baker -
Joerg Arndt -
quad