From: Joerg Arndt <arndt@jjj.de>
You can generate ordinary integers in order, along with their prime factorizations, using a priority-queue-like scheme.
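For concreteness, here is a minimal Python sketch of such a priority-queue scheme (my own illustration, not Lenstra's actual algorithm; it is restricted to a fixed prime set {2,3,5} so the example stays self-contained, and the `seen` set plays the role of the collision handling):

```python
import heapq

def integers_with_factorizations(n, primes=(2, 3, 5)):
    # Generate the first n {2,3,5}-smooth integers in increasing order,
    # together with their prime factorizations, via a priority queue.
    heap = [(1, {})]          # entries are (value, factorization dict)
    seen = {1}                # avoids pushing the same value twice
    out = []
    while len(out) < n:
        v, fac = heapq.heappop(heap)
        out.append((v, fac))
        for p in primes:
            w = v * p
            if w not in seen:
                seen.add(w)
                nf = dict(fac)
                nf[p] = nf.get(p, 0) + 1
                heapq.heappush(heap, (w, nf))
    return out

print(integers_with_factorizations(10))
```

Because every popped value arrives with its factorization already attached, no post-hoc factoring is needed, at the cost of carrying the dictionaries around.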
"Lenstra's Algorithm". By default doesn't give you the factorisations, but that part is trivial post facto.
Dijkstra's algorithm (your follow-up). Clueless me: <cite http://en.wikipedia.org/wiki/Dijkstra's_algorithm> Dijkstra's algorithm, named after its discoverer, Dutch computer scientist Edsger Dijkstra, is an algorithm that solves the single-source shortest path problem for a directed graph with nonnegative edge weights. </cite> I do not see the connection; or are there other "Dijkstra's algorithm"s?
Given my brain presently, it's entirely possible that it was neither Dijkstra nor Lenstra! It's a fresher algorithmics exercise, normally done with primes {2,3,5}.

Seed the list with { 1 }, and point a 2-pointer, 3-pointer and 5-pointer at it.

    1
    ^^^
    235

Then compare the values of 2*[2-pointer], 3*[3-pointer] and 5*[5-pointer]. Select the smallest, append it to the list, and advance the appropriate pointer:

    1, 2
    ^^ ^
    35 2

Now compare 4,3,5; select the smallest, append, update:

    1, 2, 3
    ^  ^^
    5  23

Now compare 4,6,5; select the smallest, append, update:

    1, 2, 3, 4
    ^  ^  ^
    5  3  2

Now compare 6,6,5:

    1, 2, 3, 4, 5
       ^^ ^
       35 2

Now compare 6,6,10. Here we need to include some arbitration to handle the equality of 2*3 and 3*2. One can simply advance both pointers:

    1, 2, 3, 4, 5, 6
       ^  ^  ^
       5  3  2

Now compare 8,9,10.

As you can see, the list grows faster than the pointers advance, which means that this algorithm is impractical for really huge tasks.
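The exercise above can be sketched in a few lines of Python (my illustration of the three-pointer scheme; ties are arbitrated exactly as described, by advancing every pointer whose candidate matched the minimum):

```python
def hamming(n):
    # First n numbers whose only prime factors are 2, 3 and 5,
    # generated in order by the three-pointer scheme.
    h = [1]
    i2 = i3 = i5 = 0                      # the 2-, 3- and 5-pointers
    while len(h) < n:
        c = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(c)
        # advance every pointer that produced the minimum (handles 2*3 == 3*2)
        if c == 2 * h[i2]: i2 += 1
        if c == 3 * h[i3]: i3 += 1
        if c == 5 * h[i5]: i5 += 1
    return h

print(hamming(10))  # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```

Note that the list of length n is kept in full while the pointers lag far behind, which is exactly the memory growth complained about above.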
A similar method should work for X+i, using complex primes. Unclear if it's any faster than just sieving large blocks of X+i.
The set { x^2+1 } is so sparse that sieving just along that polynomial will be fairly fast. O(R^(1/2)).
Both methods are faster than factoring N (or X^2+1) one-at-a-time.
But using Euler's factoring shortcut (the original block-GCD method), factoring is exceptionally fast.
Can you give me a reference?
Yup - but first $mypreviouspost =~ s/Euler/Euclid/; :-| Riesel, PNaCMfF. Precompute the product of all the primes you are interested in, including conservative multiplicities. Then you just need to see if gcd(testnum,product)==testnum, or, quicker and simpler, whether product%testnum==0. However, since both of the above miss extreme prime powers, the slower test gcd(testnum,product)*primemax>testnum could be used instead.
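A sketch of that product test in Python (my own illustration; the bound 761 and the "each prime raised to the largest power not exceeding the bound" policy are arbitrary choices for the demo, mirroring the GP session further down rather than anything from Riesel):

```python
from math import gcd

def primes_upto(n):
    # plain sieve of Eratosthenes
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if s[i]]

PMAX = 761
# product of all primes up to PMAX with conservative multiplicities:
# each prime p contributes p^k for the largest k with p^k <= PMAX
product = 1
for p in primes_upto(PMAX):
    pp = p
    while pp * p <= PMAX:
        pp *= p
    product *= pp

def is_smooth(n):
    # n is PMAX-smooth (barring extreme prime powers) iff product % n == 0;
    # gcd(n, product) == n is the equivalent but slower form
    return product % n == 0

def is_smooth_loose(n):
    # the safety-net variant: tolerates higher powers than the product carries
    return gcd(n, product) * PMAX > n
```

As the text says, the fast test misses extreme powers (e.g. 2^16, whose factor 2 appears only as 2^9 in the product), while the gcd safety net catches that case.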
I definitely want to learn about the method. How long does it take for p<=761 and (10^7)^2? Here are my figures:

    mm=10^6;  \\ 10^6  == 13 sec  \\ 4620
    mm=10^7;  \\ 10^7  == 131 sec \\ 8169
    mm=10^10; \\ 10^10 ==         \\ 37h
Share that one-liner?
No problem. 'taint no rocket science.

Precomputation of product:

    ? pr=2^9;forprime(p=3,761,if(p%4==1,pp=p;while(pp*p<761,pp*=p);pr*=pp))

Naive GCD:

    ? c=0;for(x=1,10^6,x2p1=x^2+1;if(gcd(pr,x2p1)==x2p1,print(x" "x2p1);c++));c
    time = 5,490 ms.
    3919

Shortcut:

    ? for(x=1,10^6,x2p1=x^2+1;if(pr%x2p1,,print(x" "x2p1)))
    time = 3,610 ms.
    3919

Catching higher multiplicities:

    ? c=0;for(x=1,10^6,x2p1=x^2+1;if(gcd(pr,x2p1)*768>=x2p1,print(x" "x2p1);c++));c
    time = 5,580 ms.
    4568

Of course, one could simply crank up the powers in the prime product to catch some of the more extreme ones (every prime has one higher exponent here):

    ? pr=2^10;forprime(p=3,761,if(p%4==1,pp=p;while(pp<761,pp*=p);pr*=pp))
    ? c=0;for(x=1,10^6,x2p1=x^2+1;if(pr%x2p1,,print(x" "x2p1);c++));c
    time = 4,580 ms.
    4563

Combining both safety nets:

    ? c=0;for(x=1,10^6,x2p1=x^2+1;if(gcd(pr,x2p1)*768>=x2p1,print(x" "x2p1);c++));c
    time = 6,510 ms.
    4618

All times on a 2GHz Athlon.
Both methods adapt to ignoring large prime divisors, just keeping those up to some limit.
Lenstra's method is defined in those terms. Alas, it's something like O(f(R)*#p) for f(R)=o(R) by default - the larger your set, the more collisions you have, and the more time you spend hopping down lists trying to find the right place to insert values. So it tends to slow down for large sets of divisors, and it may take a while before the smoothness constraint causes the unspecified o(R) to become preferable to the square-plus-one constraint's O(R^(1/2)).
I have a 'Lenstra Engine' somewhere - if someone wants me to perform some actual hunts for smooth numbers with it, just let me know the details, and I'll get a spare machine onto it immediately.
Here!
I can mail, or stick on my website, or both, the source. It seems to go bang and run out of memory for the kinds of huge jobs you're looking at.
Another one I thought about: for all small primes P, compute the square roots of minus one mod P, mod P^2, mod P^3, etc. Do sieving as follows:
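A sketch of how such a sieve might look in Python (my illustration, not the poster's code: the roots are computed on the fly via Euler's criterion instead of precomputed per prime power, and higher powers are caught by repeated division at the mod-P root positions, which works because P^k | x^2+1 implies P | x^2+1):

```python
def sqrt_minus_one(p):
    # Square roots of -1 mod a prime p with p % 4 == 1, via Euler's
    # criterion: for a quadratic non-residue a, a^((p-1)/4) squares to -1.
    for a in range(2, p):
        if pow(a, (p - 1) // 2, p) == p - 1:   # a is a non-residue
            r = pow(a, (p - 1) // 4, p)
            return r, p - r
    raise ValueError("no quadratic non-residue found")

def sieve_xsq_plus_1(lo, hi, primes):
    # Reduce x^2+1 for lo <= x < hi by every usable prime in `primes`;
    # entries that end up at 1 are smooth over those primes.
    vals = [x * x + 1 for x in range(lo, hi)]
    for p in primes:
        if p == 2:
            roots = (1,)          # x^2+1 is even exactly when x is odd
        elif p % 4 == 1:
            roots = sqrt_minus_one(p)
        else:
            continue              # p % 4 == 3 never divides x^2+1
        for r in roots:
            start = lo + (r - lo) % p
            for x in range(start, hi, p):
                while vals[x - lo] % p == 0:
                    vals[x - lo] //= p     # divide out the full power of p
    return [lo + i for i, v in enumerate(vals) if v == 1]
```

Only two residues per prime are touched in each block, which is why sieving along the sparse polynomial is so cheap compared with sieving all integers.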
Multiplication is barely slower than addition nowadays, so you might as well just accumulate the full values, and not bother with logs. Sure, RAM is thrashed more, but that just means you need to optimise memory access rather than optimise maths. Whoever mentioned the divide and conquer method ('twas Richard, I'm sure) had the right solution for large jobs. DJB's sortedsums has the core code required.

Phil

    () ASCII ribbon campaign    () Hopeless ribbon campaign
    /\ against HTML mail        /\ against gratuitous bloodshed
            [stolen with permission from Daniel B. Cristofani]