(restating the obvious) Since factoring is so much harder than radix conversion, the choice of representation doesn't matter much. With one curious exception: if the representation of a number ends in 0, the radix is a divisor. A composite number PQ will have two special bases, P and Q, in which the factorization is obvious from the trailing zero. Unfortunately, finding these bases is just as hard as the original factoring problem.

Rich

-----Original Message-----
From: math-fun-bounces+rschroe=sandia.gov@mailman.xmission.com on behalf of dasimov@earthlink.net
Sent: Mon 4/10/2006 5:22 PM
To: math-fun
Subject: [math-fun] "Reverse factoring" complexity of addition

I used to wonder why no one ever stated explicitly just *how* a number is represented when a factoring algorithm is applied to it, in terms of stating how fast the algorithm is. But it's pretty clear that the input could be in base 10, or 2, or what have you, and -- at least over a wide range of bases -- a factoring algorithm's complexity would vary very little.

----------------------------------------------------------------------------------------------------

Now I'm wondering about a sort of reverse problem: suppose you're given two numbers solely in terms of their prime factorizations, i.e., the prime-power exponents, in order, up to the last nonzero exponent:

    K = (d_1, d_2, ..., d_r)    (= 2^d_1 * 3^d_2 * ... * p_r^d_r)

and

    L = (e_1, e_2, ..., e_s)    (likewise).

Now: what's the fastest way to compute the prime factorization of K + L?

Obviously, one way is to represent K and L in some number base, add, then factor the result. Is there a faster shortcut? (Or could a judicious choice of number base make a big difference?)

An obvious first step would be to exploit distributivity by factoring out G = gcd(K, L) and working with K' = K/G and L' = L/G. So WLOG assume (K, L) = 1. (And maybe assume further that K and L are squarefree.)
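A minimal Python sketch of the baseline approach described above: rebuild K and L from their exponent vectors, pull out G = gcd(K, L), factor K' + L', then multiply the factorization of G back in. (Helper names are illustrative, and trial division is a naive stand-in for a real factoring algorithm.)

```python
from math import gcd, isqrt

def next_prime(p):
    """Smallest prime greater than p (naive search)."""
    q = p + 1
    while any(q % m == 0 for m in range(2, isqrt(q) + 1)):
        q += 1
    return q

def from_exponents(exps):
    """Rebuild 2^d_1 * 3^d_2 * ... * p_r^d_r from (d_1, ..., d_r)."""
    n, p = 1, 2
    for d in exps:
        n *= p ** d
        p = next_prime(p)
    return n

def factor(n):
    """Trial division; returns {prime: exponent}."""
    out, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            out[p] = out.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def factor_sum(k_exps, l_exps):
    """Factorization of K + L, given the exponent vectors of K and L."""
    K, L = from_exponents(k_exps), from_exponents(l_exps)
    G = gcd(K, L)
    result = factor(K // G + L // G)   # factor K' + L'
    for p, e in factor(G).items():     # multiply G back in
        result[p] = result.get(p, 0) + e
    return result
```

For example, K = (1, 1) = 6 and L = (0, 0, 1) = 5 give K + L = 11, already coprime; K = (2,) = 4 and L = (1, 1) = 6 have G = 2, and factoring K' + L' = 5 then restoring G yields 2 * 5 = 10. The interesting open question above is whether anything beats this convert-add-factor route.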
--Dan

_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun