These are "Binary Quadratic Forms". There's a whole theory. Gauss did a lot in Disquisitiones Arithmeticae (sp?), building on the previous work of Euler, Fermat, and others going back to at least the middle ages. The basic identity (a^2+b^2) (c^2+d^2) = (ac-bd)^2 + (ad+bc)^2 is ancient: I think it may go back to Diophantus. It's also the basis of multiplying complex numbers -- Euler's discovery of the 4-term version (credit to E?) was part of the Four Squares theorem, and then quaternions. There's an 8-term version (octonions) etc. Oversimplifying, the BQFs with the same discriminant form a commutative group with the operation of Composition. The form x^2 + K y^2 is the identity, and other forms L x^2 + M Y^2 with KL=M are part of the group. Forms that are interconvertible with a 2x2 matrix transform of determinant +1 are equivalent, and sometimes -1 is also allowed as a determinant. It gets more complicated when nonzero middle terms are included, especially with odd coefficients, like Fred's Eisenstein example. Composition sometimes just multiplies the coefficients of X, and fixes up the Y coefficient of the result to have the same determinant, but it can be more complicated, sometimes canceling common factors in the X coefficients. The identities (a^2 + LM b^2) (L c^2 + M d^2) = L (ac - M bd)^2 + M ( ad + L bc)^2 and (L a^2 + M b^2) (L c^2 + M d^2) = (L ac - M bd)^2 + LM (ad + bc)^2 lead to a pure three-term multiplication identity for L x^2 + M y^2 forms. There's probably a pretty identity of the shape [L MN] [M LN] [N LM] = [1 LMN]. There's lots more: The x^2+6y^2 and 2x^2+3y^2 forms make a group of order 2, as do the x^2+5y^2 and 2x^2+2xy+3y^2 forms. x^2+xy+6y^2 and 2x^2+xy+3y^2 and 2x^2-xy+3y^2 make a group of order 3. This leads into Class Numbers, Ideals, and (Non-)Unique Factorization. Forms of positive versus negative determinants have somewhat different flavors. Rich ----- Quoting Fred Lunnon <fred.lunnon@gmail.com>:
Re question (C) ---

Pedestrian number-crunching and naïve guesswork yields polynomial identity

    B(x_1, y_1) B(x_2, y_2) = B(x_3, y_3) ,

where define

    B(x, y) = x^2 + x y + y^2 ,                                (*)
    x_3 = x_1 x_2 + x_1 y_2 + y_1 y_2 ,
    y_3 = x_2 y_1 - x_1 y_2 ,

which permits construction of representations of composites from factors for the Eisenstein case.
Which generalises neatly to

    B(x, y) = x^2 + g x y + h y^2 ,                            (**)
    x_3 = x_1 x_2 + g x_1 y_2 + h y_1 y_2 ,
    y_3 = x_2 y_1 - x_1 y_2 ,

subsuming both Eisenstein and classical sum-of-two-squares cases.
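As a quick numerical sanity check of (**), here is a small Python sketch; the names B and compose are just for illustration, not part of the original:

    def B(x, y, g=1, h=1):
        # the form x^2 + g x y + h y^2  (g = h = 1 is the Eisenstein case)
        return x*x + g*x*y + h*y*y

    def compose(x1, y1, x2, y2, g=1, h=1):
        # rule (**):  B(x1,y1) * B(x2,y2) == B(x3,y3)
        x3 = x1*x2 + g*x1*y2 + h*y1*y2
        y3 = x2*y1 - x1*y2
        return x3, y3

    # 7 = B(2,1) and 13 = B(3,1), so their product 91 is represented too:
    x3, y3 = compose(2, 1, 3, 1)
    assert B(2, 1) * B(3, 1) == B(x3, y3) == 91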
However full generality, where define

    B(x, y) = f x^2 + g x y + h y^2 ,                          (***)

remains elusive.  Ring bells with anybody?
WFL
On 4/17/18, Fred Lunnon <fred.lunnon@gmail.com> wrote:
Re question (B) ---

Vectors (x, y) = (w, 1), (p, 0) both satisfy f(x, y) = 0 (mod p), and define a parallelogram with area = determinant = p; similarly for all equivalent bases of the lattice they generate. I expect that the minimal basis in the positive quadrant (when appropriate) must have vectors sufficiently short for their norms not to exceed p (and hence, being nonzero multiples of p, to equal p exactly); also that such a basis is unique.
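A quick numerical illustration of that lattice, for the Eisenstein form with p = 7 and w = 2 (my own check, not part of the argument above):

    # f(x,y) = x^2 + x*y + y^2,  p = 7,  w = 2 is a root of f(w,1) mod 7
    def f(x, y):
        return x*x + x*y + y*y

    p, w = 7, 2
    assert f(w, 1) % p == 0
    # every lattice point a*(w,1) + b*(p,0) satisfies f == 0 (mod p)
    for a in range(-5, 6):
        for b in range(-5, 6):
            assert f(a*w + b*p, a) % p == 0
    # and a reduced basis, e.g. {(2,1), (1,-3)}, attains the value p itself
    assert f(2, 1) == f(1, -3) == p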
WFL
On 4/16/18, Fred Lunnon <fred.lunnon@gmail.com> wrote:
I spent most of the weekend failing to adapt Will Jagy's algorithm, encouraged by a delusion that I understood better why that should work. Once that notion had been dispelled, Victor's method succeeded at once, and generalises to an arbitrary quadratic form as follows.
(A) To represent a prime number p by a quadratic form f(x, y) :
  (1) Solve the quadratic equation f(w, 1) = 0 over |F_p ;
  (2) Reduce the lattice basis { [w, 1], [p, 0] } to a minimal basis;
  (3) Now either reduced vector yields [s, t] such that f(s, t) = p .
The second quadratic root actually applies to the conjugate form, so that signs in step (3) may be incorrect. Efficient polynomial-time algorithms exist, e.g. Tonelli-Shanks for step (1) and an enhanced Euclidean algorithm (two-dimensional lattice reduction) for step (2). However, one's favourite CAS these days conceals such drudgery behind clean facades, such as Magma's Sqrt() and LLL(). Useful test info is available at
  http://oeis.org/wiki/User:Peter_Luschny/BinaryQuadraticForms
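For concreteness, here is a rough Python sketch of method (A) specialised to the Eisenstein form x^2 + x y + y^2, leaning on sympy's sqrt_mod for step (1) and a Lagrange-Gauss reduction with respect to the form itself for step (2); the function names and the restriction to this one form are illustrative assumptions, not part of the algorithm as stated:

    # Method (A) sketch for f(x,y) = x^2 + x*y + y^2 and prime p == 1 (mod 3).
    from sympy.ntheory import sqrt_mod

    def f(v):
        x, y = v
        return x*x + x*y + y*y

    def bil2(u, v):
        # twice the bilinear form attached to f, kept integral
        return 2*u[0]*v[0] + (u[0]*v[1] + u[1]*v[0]) + 2*u[1]*v[1]

    def represent(p):
        # step (1): w with w^2 + w + 1 == 0 (mod p), i.e. w = (-1 + sqrt(-3))/2
        r = sqrt_mod(p - 3, p)                 # a square root of -3 mod p
        w = (r - 1) * pow(2, -1, p) % p        # modular inverse of 2 (Python 3.8+)
        # step (2): Lagrange-Gauss reduce the basis {(w,1), (p,0)} w.r.t. f
        u, v = (w, 1), (p, 0)
        if f(v) < f(u):
            u, v = v, u
        while True:
            m = (bil2(u, v) + f(u)) // (2*f(u))    # nearest integer to B(u,v)/f(u)
            v = (v[0] - m*u[0], v[1] - m*u[1])
            if f(v) >= f(u):
                break
            u, v = v, u
        return u                               # step (3): f of this vector is p

    s, t = represent(103)                      # 103 == 1 (mod 3)
    assert s*s + s*t + t*t == 103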
There remain two supplementary questions.
  (B) Exactly why does method (A) work?
  (C) How do representations of prime factors combine into representations of a composite target?
All these seem bread-and-butter questions to my innocent mind, and it is mystifying that they are not part-and-parcel of standard treatments. Or are they?
Fred Lunnon
On 4/14/18, Victor Miller <victorsmiller@gmail.com> wrote:
Yes, see my answer to the question about the sum of two squares:
https://mathoverflow.net/questions/70208/system-of-diophantine-equations?rq=...
For your problem you need to find a primitive 6th root of unity (really just finding sqrt(-3)), then set up the lattice and apply LLL.
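A tiny illustration of that remark (my own check, assuming sympy's sqrt_mod): over F_p with p == 1 (mod 3), zeta = (1 + sqrt(-3))/2 is a primitive 6th root of unity, and zeta - 1 is then a root of x^2 + x + 1, which is what the lattice step needs for the Eisenstein form:

    from sympy.ntheory import sqrt_mod

    p = 103                                  # any prime with p % 3 == 1
    r = sqrt_mod(p - 3, p)                   # a square root of -3 mod p
    zeta = (1 + r) * pow(2, -1, p) % p       # primitive 6th root of unity mod p
    assert pow(zeta, 6, p) == 1 and pow(zeta, 3, p) != 1 and pow(zeta, 2, p) != 1
    w = (zeta - 1) % p                       # a root of x^2 + x + 1 mod p
    assert (w*w + w + 1) % p == 0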
Fred Lunnon <fred.lunnon@gmail.com> wrote:
There exists an algorithm for finding integers m,n such that x = m^2 + n^2 , in amortised time proportional to (log x)^3 , where x is prime with x mod 4 = 1 . See Richard E. Crandall, Carl Pomerance "Prime Numbers: A Computational Perspective" Springer (2001), sect 2.3.2 pp 93--96 .
Is there an analogous efficient algorithm for x = m^2 + m n + n^2 ? See https://en.wikipedia.org/wiki/Eisenstein_integer , https://oeis.org/A034017 , https://oeis.org/A000086
What about more general binary quadratic forms?
Fred Lunnon
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun