[math-fun] New? multiplication of GF2 polynomials
Before log tables were invented, there were a couple of other tricks that were useful for multiplication. One way was to use trig identities like

  cos X * cos Y = (cos(X+Y) + cos(X-Y))/2.

Another was to use a table of triangular numbers, T(x) = x(x+1)/2:

  x * y = T(x+y) - T(x) - T(y)

or Quarter-Squares, Q(N) = floor[N^2 / 4]:

  x * y = Q(x+y) - Q(x-y).

The key idea in each case is to use a table that's a function of one variable in a clever way to compute a two-variable result. (I'm skeptical about these actually being used much, since the required tables are still pretty big. But I've never actually looked over Mr. Fibonacci's shoulder.)

I'm looking for an appropriate generalization for GF2 polynomial multiplication. (Multiplication of polynomials where the coefficients are reduced mod 2.) Each of the methods above uses division by 2, so some minor variation is necessary. There are about six different ways to do GF2 polynomial products. I'm wondering if the scheme below is new.

Define F(X), a kind of T-surrogate, as follows. Represent X as a block of bits: X = t^3 + t^2 + 1 --> 1101. For each pair of 1 bits, accumulate the product (using xor) into a total. This total is F(X). In the example, the terms to be xored are 1000 * 1, 100 * 1, and 1000 * 100. The total is 101100. (Representing t^5 + t^3 + t^2.) Then it appears that

  X * Y = (X&Y)^2 + F(XvY) + F(X&Y) + F(X&~Y) + F(Y&~X).

The logic operators & v ~ are to be used on right-justified bit patterns. The squaring operation is fast in GF2: simply spread out the bits, interleaving with 0s. (Squaring is a linear operator in GF2.)

Has anyone seen anything like this?

Rich
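The identity can be checked mechanically. Below is a minimal Python sketch (the names clmul, F, and sq are mine, not from the post) that implements F(), schoolbook carryless multiplication as a reference, and GF2 squaring by interleaving zero bits, then verifies the claimed identity exhaustively for small operands:

```python
def clmul(x, y):
    # Reference: carryless (GF(2)[t]) schoolbook product of bit-packed polynomials.
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        y >>= 1
    return r

def F(x):
    # XOR of t^(i+j) over all pairs i < j of set-bit positions of x.
    bits = [i for i in range(x.bit_length()) if (x >> i) & 1]
    r = 0
    for a in range(len(bits)):
        for b in range(a + 1, len(bits)):
            r ^= 1 << (bits[a] + bits[b])
    return r

def sq(x):
    # GF2 squaring: spread the bits out, interleaving with 0s.
    r, i = 0, 0
    while x:
        if x & 1:
            r |= 1 << (2 * i)
        x >>= 1
        i += 1
    return r

# The claimed identity, with + read as XOR:
def mul_via_F(x, y):
    return sq(x & y) ^ F(x | y) ^ F(x & y) ^ F(x & ~y) ^ F(y & ~x)

assert F(0b1101) == 0b101100  # the worked example: t^5 + t^3 + t^2
assert all(mul_via_F(x, y) == clmul(x, y)
           for x in range(64) for y in range(64))
```

(In Python, x & ~y on nonnegative x behaves as the right-justified bit operation the post describes.)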
On Jun 26, 2006, at 5:41 PM, Schroeppel, Richard wrote:
or Quarter-Squares,
Q(N) = floor[N^2 / 4]
x * y = Q(x+y) - Q(x-y).
The key idea in each case is to use a table that's a function of one variable in a clever way to compute a two-variable result.
Essentially every analog computer ever built uses a quarter-square approach to computing products. It's pretty easy to build analog circuits that square, and quite difficult to multiply in other ways. I believe even the mechanical servomechanism computers used this approach.
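The quarter-square trick is easy to see in code. A minimal sketch (function names are mine), showing that one single-argument table Q suffices to multiply:

```python
def Q(n):
    # Quarter-square: floor(n^2 / 4). In a device this is a
    # lookup table or a squaring circuit, not a multiply.
    return n * n // 4

def mul_quarter_squares(x, y):
    # (x+y)^2 - (x-y)^2 = 4xy, and the floors cancel because
    # x+y and x-y always have the same parity.
    return Q(x + y) - Q(x - y)

assert mul_quarter_squares(7, 5) == 35
```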
* Schroeppel, Richard <rschroe@sandia.gov> [Jun 27. 2006 12:35]:
[...]
The key idea in each case is to use a table that's a function of one variable in a clever way to compute a two-variable result.
(I'm skeptical about these actually being used much, since the required tables are still pretty big. But I've never actually looked over Mr. Fibonacci's shoulder.)
I'm looking for an appropriate generalization for GF2 polynomial multiplication. (Multiplication of polynomials where the coefficients are reduced mod 2.) Each of the methods above uses division by 2, so some minor variation is necessary. There are about six different ways to do GF2 polynomial products. I'm wondering if the scheme below is new.
I thought "square to multiply" is impossible for GF2 polynomials... Could you give >=1 example of any such technique?
Define F(X), a kind of T-surrogate, as follows. Represent X as a block of bits: X = t^3 + t^2 + 1 --> 1101. For each pair of 1 bits, accumulate the product (using xor), into a total. This total is F(X). In the example, the terms to be xored are 1000 * 1, 100 * 1, and 1000 * 100. The total is 101100. (Representing t^5 + t^3 + t^2.) Then it appears that
X * Y = (X&Y)^2 + F(XvY) + F(X&Y) + F(X&~Y) + F(Y&~X).
The logic operators & v ~ are to be used on right-justified bit patterns. The squaring operation is fast in GF2, simply spread out the bits, interleaving with 0s. (Squaring is a linear operator in GF2.)
Has anyone seen anything like this?
No, and it looks surprising to me!
I created a misimpression in my note about multiplying GF2 polynomials. I juxtaposed the background of Triangles and Quarter-Squares with my comments about "six different ways to do GF2 polynomial products". I'm in agreement with Joerg: I don't know of a "square to multiply" method for GF2 polynomials, although I haven't concluded it's impossible. The F() function is the closest thing I've seen.

The "six different ways" I had in mind are a potpourri of fairly well known techniques: Karatsuba-Ofman & Strassen-Schonhage & FFT, subfields, a couple of variations on Chinese remainder, log/exp tables, and the ever popular "schoolbook" method. The last, which simply calculates all the partial bit products and accumulates them, has myriad variations which are all mathematically equivalent, but the programming varies in a lot of ways, with significant performance differences (and machine dependencies.) One of the more obscure methods is to spread out the GF2 bits in a larger word and use the regular integer or floating point multiplication, and mask off the non-GF2 "noise" in the answer.

Another way to represent polynomials is by giving the values at enough points. To multiply them, extrapolate/interpolate to get more values, then multiply the corresponding values.

One could write a book, or perhaps a thesis, on the subject. I would be completely unsurprised if the F() method appears in the Bell System Technical Journal in 1955.

Rich

-----Original Message-----
From: Joerg Arndt
Sent: Thu 6/29/2006 9:46 AM
To: math-fun
Subject: Re: [math-fun] New? multiplication of GF2 polynomials

rcs> [clip]
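The "spread out the GF2 bits and use regular integer multiplication" method can be sketched as follows (an illustrative sketch, names and the gap width k=8 are my choices; k must exceed log2 of the operand length so column sums never carry between fields):

```python
def spread(x, k):
    # Place bit i of x at position k*i, leaving k-1 zero bits between bits.
    out, i = 0, 0
    while x:
        if x & 1:
            out |= 1 << (k * i)
        x >>= 1
        i += 1
    return out

def gf2mul_via_int(x, y, k=8):
    # Ordinary integer multiply of the spread operands: field m of the
    # product holds the integer count of bit-pairs with i+j == m.
    p = spread(x, k) * spread(y, k)
    # Mask off the non-GF2 "noise": keep only the parity of each field.
    out, i = 0, 0
    while p:
        out |= (p & 1) << i
        p >>= k
        i += 1
    return out

assert gf2mul_via_int(0b11, 0b11) == 0b101  # (t+1)^2 = t^2 + 1
```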
I'm looking for an appropriate generalization for GF2 polynomial multiplication. (Multiplication of polynomials where the coefficients are reduced mod 2.) Each of the methods above uses division by 2, so some minor variation is necessary. There are about six different ways to do GF2 polynomial products. I'm wondering if the scheme below is new.
I thought "square to multiply" is impossible for GF2 polynomials... Could you give >=1 example of any such technique? [clip]
--- "Schroeppel, Richard" <rschroe@sandia.gov> wrote: ...
Another way to represent polynomials is by giving the values at enough points. ...
But for polynomials over GF2, the only possible values are 0 and 1.

Gene
To make this work for GF(2), you have to reinterpret them as polynomials over GF(2^k) for some sufficiently large k.

Franklin T. Adams-Watters

-----Original Message-----
From: Eugene Salamin <gene_salamin@yahoo.com>

--- "Schroeppel, Richard" <rschroe@sandia.gov> wrote: ...
Another way to represent polynomials is by giving the values at enough points. ...
But for polynomials over GF2, the only possible values are 0 and 1. Gene
* franktaw@netscape.net <franktaw@netscape.net> [Jun 30. 2006 11:15]:
To make this work for GF(2), you have to reinterpret them as polynomials over GF(2^k) for some sufficiently large k.
Franklin T. Adams-Watters
But then you have orders that are factors of 2^k-1, making it hard to have a fast transform. I wouldn't be surprised if float-FFT beats all other schemes for large enough degree. Exercise 8.30 on p.250 of "Modern Computer Algebra" by J. von zur Gathen and Juergen Gerhard (wonderful book!) suggests using the modulus x^(2n)+x^n+1 for n=3^k (ascribed to Schoenhage, 1977).
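The GF(2^k) reinterpretation can be made concrete. The sketch below (all names are mine; GF(8) built with the irreducible t^3+t+1 is just an illustrative choice of field) multiplies two small GF(2) polynomials by evaluating them at points of GF(8), multiplying pointwise, and Lagrange-interpolating back. It works whenever the product degree is below the field size:

```python
IRR = 0b1011  # t^3 + t + 1, irreducible over GF(2); defines GF(8)

def gf8_mul(a, b):
    # Carryless multiply of field elements (0..7), then reduce mod IRR.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for i in range(r.bit_length() - 1, 2, -1):
        if (r >> i) & 1:
            r ^= IRR << (i - 3)
    return r

def gf8_inv(a):
    # Brute-force inverse; fine for an 8-element field.
    return next(b for b in range(1, 8) if gf8_mul(a, b) == 1)

def poly_eval(coeffs, x):
    # Horner evaluation; coeffs[i] is the coefficient of X^i.
    r = 0
    for c in reversed(coeffs):
        r = gf8_mul(r, x) ^ c
    return r

def interpolate(pts):
    # Lagrange interpolation over GF(8); subtraction is XOR in char 2.
    n = len(pts)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(pts):
        num, denom = [1], 1           # numerator poly prod_{j!=i} (X + xj)
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            new = [0] * (len(num) + 1)
            for k, c in enumerate(num):
                new[k + 1] ^= c                # c * X
                new[k] ^= gf8_mul(c, xj)       # c * xj
            num = new
            denom = gf8_mul(denom, xi ^ xj)
        scale = gf8_mul(yi, gf8_inv(denom))
        for k, c in enumerate(num):
            coeffs[k] ^= gf8_mul(c, scale)
    return coeffs

def gf2poly_mul_by_eval(a_bits, b_bits):
    a = [(a_bits >> i) & 1 for i in range(a_bits.bit_length())]
    b = [(b_bits >> i) & 1 for i in range(b_bits.bit_length())]
    npts = len(a) + len(b) - 1        # product degree + 1; must be <= 8
    pts = [(x, gf8_mul(poly_eval(a, x), poly_eval(b, x)))
           for x in range(npts)]
    # The interpolated coefficients land back in GF(2), i.e. are 0 or 1.
    out = 0
    for i, ci in enumerate(interpolate(pts)):
        out |= ci << i
    return out

assert gf2poly_mul_by_eval(0b11, 0b11) == 0b101  # (t+1)^2 = t^2 + 1
```

Joerg's objection shows up here too: the multiplicative order of GF(8)* is 7, a factor of 2^3-1, which is what makes picking transform-friendly evaluation points awkward.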
participants (5)
- Eugene Salamin
- franktaw@netscape.net
- Joerg Arndt
- Schroeppel, Richard
- Tom Knight