Re: [math-fun] Beyond Floats: Next Gen Computer Arithmetic
Kahan on that approach: https://people.eecs.berkeley.edu/~wkahan/ (scroll down to the bottom)
Subject: [math-fun] Beyond Floats: Next Gen Computer Arithmetic
FYI --
https://www.youtube.com/watch?v=aP0Y1uAA-2Y
Stanford Seminar: Beyond Floating Point: Next Generation Computer Arithmetic
John L. Gustafson, Natl Univ of Singapore
https://web.stanford.edu/class/ee380/Abstracts/170201.html
A new data type called a "posit" is designed as a direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable-size operands, and they round if an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including simpler hardware implementation that scales from as few as two-bit operands to thousands of bits. For any bit width, they have a larger dynamic range, higher accuracy, better closure under arithmetic operations, and simpler exception handling. For example, posits never overflow to infinity or underflow to zero, and there is no "Not-a-Number" (NaN) value. Posits should take up less space to implement in silicon than an IEEE float of the same size. With fewer gate delays per operation as well as a lower silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators, in particular, could do more arithmetic per watt and per dollar yet deliver superior answer quality.
A series of comprehensive benchmarks compares how many decimals of accuracy can be produced for a set number of bits-per-value, using various number formats. Low-precision posits provide a better solution than "approximate computing" methods that try to tolerate decreases in answer quality. High-precision posits provide better answers (more correct decimals) than floats of the same size, suggesting that in some cases, a 32-bit posit may do a better job than a 64-bit float. In other words, posits beat floats at their own game.
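To make the format concrete, here is a minimal sketch of a posit decoder in Python, following the encoding described in the talk (sign bit, a run-length-encoded "regime" that scales by useed = 2^(2^es), optional exponent bits, then fraction bits with a hidden 1). The function name `decode_posit` and the default of 8 bits with es = 0 are illustrative choices, not anything from the talk; the standard later settled on other es values.

```python
def decode_posit(bits: int, nbits: int = 8, es: int = 0) -> float:
    """Decode an nbits-wide posit with 'es' exponent bits (sketch only)."""
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):
        return float("inf")  # the single exception value (unsigned infinity / NaR)
    sign = -1.0 if bits >> (nbits - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask  # posits negate by two's complement
    # Regime: run length of identical bits after the sign bit.
    x = (bits << 1) & mask  # drop the sign bit, keep the width
    first = x >> (nbits - 1)
    run = 0
    while run < nbits - 1 and (x >> (nbits - 1)) == first:
        run += 1
        x = (x << 1) & mask
    k = run - 1 if first else -run
    x = (x << 1) & mask  # skip the terminating regime bit, if present
    remaining = max(nbits - (1 + run + 1), 0)  # sign + regime + terminator
    # Exponent bits (unsigned, zero-padded on the right if truncated).
    e_bits = min(es, remaining)
    exp = (x >> (nbits - e_bits)) if e_bits else 0
    exp <<= es - e_bits
    x = (x << e_bits) & mask
    remaining -= e_bits
    # Fraction bits, with hidden bit 1.
    frac = 1.0 + (x >> (nbits - remaining)) / (1 << remaining) if remaining else 1.0
    useed = 2 ** (2 ** es)
    return sign * (useed ** k) * (2.0 ** exp) * frac
```

For example, with 8 bits and es = 0, `decode_posit(0b01000000)` gives 1.0, `decode_posit(0b01010000)` gives 1.5, and `decode_posit(0b01111111)` gives maxpos = 64.0; note there is no gradual-underflow machinery, since accuracy tapers through the regime length instead.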
On Sat, Mar 25, 2017 at 2:34 PM, Axel Vogt <mail@axelvogt.de> wrote:
Kahan on that approach: https://people.eecs.berkeley.edu/~wkahan/ (scroll down to the bottom)
The debate between Gustafson and Kahan was about unums, a different (variable-length) representation, and SORN arithmetic. Posits are a newer invention (2017) with different objectives. Leo
Kahan should tell us what he really feels . . . On Mar 25, 2017 2:35 PM, "Axel Vogt" <mail@axelvogt.de> wrote:
Kahan on that approach: https://people.eecs.berkeley.edu/~wkahan/ (scroll down to the bottom)
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
participants (3)
- Axel Vogt
- Leo Broukhis
- Tomas Rokicki