[math-fun] Landau's function of 2^16
I got an email from Claus Michael Ringel at the University of Bielefeld, asking me if I knew a reasonable way to evaluate Landau's function (A000793) at 2^16 = 65536. This is the maximum least common multiple of the parts over all partitions of 65536; equivalently, it is the maximum order of any element of Sym(65536). I am sure there are lots of shortcuts that save one from enumerating all the partitions of 65536. Can anyone suggest one?

(All but one of the code snippets on the relevant page at A000793 are clearly just enumerating partitions and comparing LCMs. But Charles Greathouse's PARI code is not doing anything like that, and I don't have a clue what it *is* doing.)
See http://math.univ-lyon1.fr/~deleglis/PDF/dnz4.pdf and especially the paper [19] referred to there.

On 11/12/16 6:50 PM, Allan Wechsler wrote:
[...]

_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
Thank you, Jeffrey; I will forward your suggestion to Claus.

On Sat, Nov 12, 2016 at 7:14 PM, Jeffrey Shallit <shallit@uwaterloo.ca> wrote:
[...]
* Allan Wechsler <acwacw@gmail.com> [Nov 13. 2016 08:33]:
[...]
(Of all the code snippets in the relevant page at A000793, all of them except one are clearly just enumerating partitions and comparing LCM's. But Charles Greathouse's PARI code is not doing anything like that, and I don't have a clue what it *is* doing.)
Charles' code seems to use what I just entered as a formula in A000793: for n >= 2, A000793(n) = max { k : A008475(k) <= n }, where A008475(k) is the sum of the prime-power components p^e of k. Cf. https://oeis.org/draft/A000793

Is this correct?

The upper bound used for the downward search is truly enormous for n = 2^16.
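Joerg's formula can be sanity-checked numerically. A minimal sketch (mine, not Charles's PARI code; the search window of 200 is an ad-hoc bound that happens to suffice for these small n):

```python
# Check a(n) = max { k : A008475(k) <= n } against known small A000793 values,
# where A008475(k) is the sum of the prime-power components of k.
def a008475(k):
    s, p = 0, 2
    while p * p <= k:
        if k % p == 0:
            q = 1
            while k % p == 0:   # extract the full component p^e
                k //= p
                q *= p
            s += q
        p += 1
    if k > 1:                   # leftover prime factor
        s += k
    return s

known = {2: 2, 3: 3, 4: 4, 5: 6, 6: 6, 7: 12, 8: 15, 9: 20, 10: 30}
for n, an in known.items():
    best = max(k for k in range(1, 200) if a008475(k) <= n)
    assert best == an
print("formula matches A000793 for n = 2..10")
```

A number k is the order of some permutation of n points exactly when its prime-power components fit into n points as disjoint cycles, which is the content of the formula.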
I'm suffering some disconnection here: compare JA below

> The upper bound used for the downward search is truly enormous for n = 2^16.

with https://oeis.org/draft/A000793 comments

> Grantham mentions that he computed a(n) for n <= 500000.

--- citing a 1995 reference given later. Yet 2^16 = 65536 < 500000, and JG's hardware 20 years older!

WFL

On 11/13/16, Joerg Arndt <arndt@jjj.de> wrote:
[...]
This is a pretty common programming contest problem. You can make an additional restriction as follows:

Every factor (except perhaps the last) should be a power of a prime. (If a factor is composite, you can factor it into prime powers and get equivalent impact with fewer base elements.)

Then, you can solve this problem using dynamic programming. First you solve it for n=1..65536 using only powers of 2. Then based on that you solve it for n=1..65536 using powers of 2 and powers of 3. Then you extend that with powers of 5, and so on and so forth; pretty straightforward.

You do need to use a programming language or library that supports bigints, or else you can represent the product by its logarithm.

This is not considered a difficult problem for modern college programming contests.

On Sun, Nov 13, 2016 at 4:02 AM, Fred Lunnon <fred.lunnon@gmail.com> wrote:
[...]
--
http://cube20.org/
[ <http://golly.sf.net/>Golly link suppressed; ask me why]
--
Perhaps this code will be simple enough for most to follow.

Note that I use doubles and don't report the actual partition. This is trivial to do (just add an array storing the last prime power that improved a result, and then walk backwards from there).

This code easily handles 65536 and gives 2.41074e+388. Running it in Python (with its superior number stack) should give you an exact result pretty quickly; I can provide that if anyone is interested.

for (int i=0; i<N; i++)
   maxv[i] = 1 ;
for (int i=2; i<=N; i++) {
   if (!isprime(i))
      continue ;
   for (int j=N-1; j>=i; j--) {
      long double hi = maxv[j] ;
      for (long long pp=i; pp<=j; pp=pp*i)
         hi = max(hi, maxv[j-pp]*pp) ;
      maxv[j] = hi ;
   }
}
for (int i=0; i<N; i++)
   cout << i << " " << maxv[i] << endl ;

On Sun, Nov 13, 2016 at 9:09 AM, Tomas Rokicki <rokicki@gmail.com> wrote:
[...]
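For exact values, the same dynamic program is easy to sketch in Python with bigints. This is an editorial translation of the idea above, not the Python version Tomas offers; all names are illustrative:

```python
def landau(N):
    """Sketch of the DP above: g[n] = A000793(n) = max order in Sym(n)."""
    # Sieve of Eratosthenes for the primes up to N.
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, N + 1, p):
                is_prime[m] = False
    g = [1] * (N + 1)  # empty product; leftover points are fixed points
    for p in range(2, N + 1):
        if not is_prime[p]:
            continue
        # Downward over j, so g[j-pp] still excludes prime p entirely:
        # each prime contributes at most one power p^e to any g[j].
        for j in range(N, p - 1, -1):
            pp = p
            while pp <= j:
                g[j] = max(g[j], g[j - pp] * pp)
                pp *= p
    return g

g = landau(10)
print(g[1:])  # [1, 2, 3, 4, 6, 6, 12, 15, 20, 30], matching A000793
```

The downward j loop is the crucial detail: g[j-pp] has not yet been touched in the current prime's pass, so each prime contributes at most one power to any entry.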
It looks like this code depends on a lemma whose truth is not obvious to me.

"For any N, there exists a prime power p^k such that g(N) = g(N-p^k) * p^k."

It is conceivable to me that for any component p^k of g(N), g(N-p^k) might not be relatively prime to p^k. That is, it's conceivable that none of the cofactors of prime powers in g(N) are themselves optimal in the Landau sense. I admit that it doesn't seem _likely_, because you would have to have bad luck for every component p^k in g(N). But still conceivable. Am I missing an easy demonstration of this lemma?

On Sun, Nov 13, 2016 at 12:34 PM, Tomas Rokicki <rokicki@gmail.com> wrote:
[...]
Ooh, even worse: suppose N = p^k + M, and g(M) happens to be divisible by p^k? Then the product p^k*g(M) would look like a really good candidate for g(N), but would in fact have a spurious factor of p^k.

On Sun, Nov 13, 2016 at 5:15 PM, Allan Wechsler <acwacw@gmail.com> wrote:
[...]
Read the code carefully.

On Nov 13, 2016 2:17 PM, "Allan Wechsler" <acwacw@gmail.com> wrote:
[...]
Here is a Pari/GP version (omitting a(0)):
--------------------------
{
  \\ compute all terms for n = 1 .. N
  my( N = 2^16 );
  my( V = vector(N,j,1) );
  forprime (i=2, N,        \\ primes i
    forstep (j=N, i, -1,
      my( hi = V[j] );
      my( pp = i );        \\ powers of prime i
      while ( pp<=j,       \\ V[] is 1-based
        if ( j-pp == 0,
          hi = max(hi, 1 * pp) ;
        , /* else */
          hi = max(hi, V[j-pp]*pp) );
        pp *= i;
      );
      V[j] = hi;
    );
  );
  \\ print( V );   \\ to print all
  print( V[#V] );  \\ to print just a(N)
}
\\ translated from code given by Tomas Rokicki, _Joerg Arndt_, Nov 14 2016
--------------------------

Result a(2^16) =
24107436976929288463522986071862342451920273240564393170920449418199550175195545786706048248734708656618724145241901778583426581063609762611912979337523197154614443953724052978571267712375617172897001141282081020394958233219985347683711985830591320971431658126730141107602836360947686955805860563071876389913551303435518154291492315669358080207105703700010214254223139787608931762989456000

time = 3min, 42,412 ms.

The largest prime factor is 919, and

primorial(919) / a(2^16) = 2^-6 * 3^-3 * 5^-2 * 7^-1 * 11^-1 * 13^-1 * 17^-1 * 907^+1

Is it OK that the result "skips" the prime 907?

Best regards, jj

* Tomas Rokicki <rokicki@gmail.com> [Nov 14. 2016 06:21]:
[...]
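Joerg's question about the skipped prime can be checked with budget arithmetic, using the characterization quoted earlier (a(n) is the largest k whose prime-power components sum to at most n). The factorization below is my reading of the primorial quotient above:

```python
# Per Joerg's quotient, a(2^16) has prime-power components
#   2^7, 3^4, 5^3, 7^2, 11^2, 13^2, 17^2,
# and p^1 for every prime 19 <= p <= 919 EXCEPT 907.
# A number k is achievable iff its components sum to at most n = 65536.
N = 65536

# Sieve the primes up to 919.
sieve = [True] * 920
sieve[0:2] = [False, False]
for p in range(2, 31):          # 31 > sqrt(919)
    if sieve[p]:
        for m in range(p * p, 920, p):
            sieve[m] = False
primes = [p for p in range(2, 920) if sieve[p]]

boosted = {2: 7, 3: 4, 5: 3, 7: 2, 11: 2, 13: 2, 17: 2}
cost = sum(p ** boosted.get(p, 1) for p in primes if p != 907)
print(cost, cost <= N, cost + 907 > N)  # 65531 True True
```

The components sum to 65531, consistent with Tomas's later observation that the values for 65531 through 65536 coincide. Re-admitting 907 would overshoot the budget, while the 907 points freed by dropping it pay for the higher powers of 2..17, which multiply to far more than 907, so skipping 907 is indeed optimal here.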
Computation of the values for n <= 10^6 took 15 hours. The resulting file is 1,061.3 MiB, and 182.7 MiB after compression using xz -9.

Should I put it online? Or even better: does anybody have/know a web server where this file could be offered?

Best regards, jj

* Joerg Arndt <arndt@jjj.de> [Nov 14. 2016 12:16]:
[...]
I've got a faster version in Java that does 1e6 in 9m30s. I think it can go much further, but the file gets too big. (It uses doubles and sums of logs to approximate the value, and only checks the bigints if the doubles seem to say the new value is close to or greater than the old value.)

A simple table of the last prime power factor would enable anyone to recalculate the result just by walking the table backwards---assuming the previous value is not later improved. But this is easily checked.

About 66% of the values (the full bigint values) are repeats of the previous value, so omitting the duplicates can improve the size of the output. For instance, the values for 65531 through 65536 are all the same.

Probably a table of last prime power factors coupled with a bit of JavaScript in a web page to do the bigint arithmetic would be most useful . . .

On Tue, Nov 15, 2016 at 4:23 AM, Joerg Arndt <arndt@jjj.de> wrote:
[...]
Nope, just using the "last factor" doesn't work. But I bet we can come up with something that does work . . .

On Tue, Nov 15, 2016 at 7:53 PM, Tomas Rokicki <rokicki@gmail.com> wrote:
[...]
About "file gets too big": Is it true that every entry V[n] for n>=2 is of the form V[n] = V[n-pp]*pp with pp a prime power? If so, we could just store pp, making the file much smaller.

This could also give an interesting graph (tree).

Best regards, jj

* Tomas Rokicki <rokicki@gmail.com> [Nov 16. 2016 10:17]:
[...]
* Joerg Arndt <arndt@jjj.de> [Nov 16. 2016 16:02]:
About "file gets too big": Is it true that every entry V[n] for n>=2 is of the form V[n] = V[n-pp]*pp with pp a prime power? If so, we could just store pp, making the file much smaller.
Nope, at least assuming that pp must be a divisor of n (wrong for n = 29 and many larger n).

It seems possible to replace every entry with a reference to some earlier entry together with the ratio of the two (the ratios do seem to always be prime). This way the file produced is significantly smaller.

The (slight) drawback is that to recover a value one must follow the chain of references, multiplying the ratios along the way.
This could also give an interesting graph (tree).
Best regards, jj
[...]
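Joerg's back-reference scheme can be sketched as follows (an editorial illustration with invented encoding details, not his actual file format): each entry stores a pair (ref, ratio) with g[n] = g[ref] * ratio, and decoding walks the reference chain, multiplying ratios.

```python
def encode(g):
    """g = [g(0), g(1), ...]. Entry n > 0 becomes (ref, ratio) where ref is
    the nearest earlier index whose value divides g[n]; since g[0] = 1
    divides everything, such an index always exists."""
    table = [(-1, g[0])]          # base entry: no back-reference
    for n in range(1, len(g)):
        m = n - 1
        while g[n] % g[m] != 0:
            m -= 1
        table.append((m, g[n] // g[m]))
    return table

def decode(table, n):
    """Recover g[n] by following references and multiplying ratios."""
    value = 1
    while n >= 0:
        n, ratio = table[n]
        value *= ratio
    return value

# first terms of A000793, g(0)..g(25)
g = [1, 1, 2, 3, 4, 6, 6, 12, 15, 20, 30, 30, 60, 60, 84,
     105, 140, 210, 210, 420, 420, 420, 420, 840, 840, 1260]
table = encode(g)
assert all(decode(table, n) == g[n] for n in range(len(g)))
```

The stored ratios are tiny compared with the bignum values themselves, which is where the size savings come from.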
If the only concern is file compression (ignoring the interesting math question), then a single bit per row, meaning "this row is compressed using the PrimePowerConjecture" or "this row is not compressed", would collect most of the savings.

Rich

----------

Quoting Joerg Arndt <arndt@jjj.de>:
[...]
Representing each result by a row of the form

7 4 3 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 0 1

(for instance) meaning 2^7 3^4 5^3 7^2 11^2 ..., and squeezing out duplicate rows, yields something which gzip eats for lunch. The entire result set through 1M compresses down to 3.0MB. Bzip2 takes that down to 1.89MB.

Giving just the ratio to the primorial of the highest prime (expressed in this way for the positive factors, and as negative values for the negative factors) will probably be even more effective. So the above row might be written

19 6 3 2 1 1 1 1 1 -18

which means the 19th primorial, times 2^6 * 3^3 ..., divided by the 18th prime.

On Wed, Nov 16, 2016 at 3:25 PM, <rcs@xmission.com> wrote:
[...]
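Tom's exponent-row format can be illustrated like this (my reconstruction: I take the k-th entry of a row to be the exponent of the k-th prime, so 2^7 * 3^4 * 5^3 becomes `7 4 3`; the function names are invented):

```python
def to_row(n, primes):
    """Factor n over the given consecutive primes into an exponent row."""
    row = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        row.append(e)
        if n == 1:
            break
    return row

def from_row(row, primes):
    """Inverse: multiply the primes back in with their exponents."""
    n = 1
    for p, e in zip(primes, row):
        n *= p ** e
    return n

primes = [2, 3, 5, 7, 11, 13, 17, 19]
print(to_row(2**7 * 3**4 * 5**3, primes))   # [7, 4, 3]
```

Rows for successive g(n) differ in only a few positions, which is why generic compressors such as gzip do so well on this representation.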
Tom's musing about how much a table of Landau's function could be compressed inspired me to scribble out the first quotients, that is, the ratios between successive Landau values. It seems like there aren't that many. Up through g(25), I see 1 (9 times), 2 (5 times), 3/2 (5 times), 4/3 (3 times), 5/4 (2 times), and 7/5 (1 time). That's only six values.

How many ratios actually occur up through 1M? If there are fewer than 65536, then we could encode each ratio in two bytes; that would take us down to the neighborhood of 2 MB right there. And the ratios have very different rates of occurrence: using a Huffman encoding would probably push us below Tom's 1.89MB. In my hand-worked example up through g(25), 1, 2, and 3/2 would be coded with 2 bits, 4/3 with 3, and 5/4 and 7/5 with 4 bits. The body of the table, therefore, would be 59 bits long, a bit over 2 bits per entry on average. (Of course I haven't included the cost of the conversion table, the one that says 00 -> 1, 01 -> 2, 10 -> 3/2, 110 -> 4/3, and so on. But for larger n this would be a tiny part of the content.)

Tom, how hard would it be to repeat the arithmetic I just did on the million-entry table?

On Wed, Nov 16, 2016 at 6:35 PM, Tomas Rokicki <rokicki@gmail.com> wrote:
[...]
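Allan's hand tally can be checked mechanically; a sketch using the known prefix of A000793 and exact fractions:

```python
from collections import Counter
from fractions import Fraction

# g(0)..g(25) of A000793
g = [1, 1, 2, 3, 4, 6, 6, 12, 15, 20, 30, 30, 60, 60, 84,
     105, 140, 210, 210, 420, 420, 420, 420, 840, 840, 1260]

# tally the 25 successive ratios g(n)/g(n-1) exactly
ratios = Counter(Fraction(g[n], g[n - 1]) for n in range(1, len(g)))
for r, count in ratios.most_common():
    print(r, count)
# six distinct ratios: 1 (9x), 2 (5x), 3/2 (5x), 4/3 (3x), 5/4 (2x), 7/5 (1x)
# Allan's Huffman code costs 9*2 + 5*2 + 5*2 + 3*3 + 2*4 + 1*4 = 59 bits
```

The same loop over the million-entry table is what Tom runs below, where the ratio dictionary turns out to be much larger (26,044 entries).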
Awesome idea! Let me rip through this. I'm currently working with a factored representation so this is pretty easy to answer, I believe. On Wed, Nov 16, 2016 at 5:07 PM, Allan Wechsler <acwacw@gmail.com> wrote:
[...]
So you probably won't be shocked to know that the most common ratio is 1:1; this happens 584,970 times. But you might be surprised to know the *second* most common ratio! It is 3301/3299, and it occurs *2476* times. I see a total of 26,044 distinct ratios. Huffman encoding them will probably work fairly well, but the dictionary itself will be somewhat large.

The best I've seen so far is to simply print -n if the last prime is p_n and the number for g(n) is just g(n-p_n)*p_n; this happens 408,967 times out of the 415,031 unique values for g(n). For the rest, for now, I just print the entire factorization. This compresses down (with gzip) to 1.155M.

But I think we can do even better; I notice that what's actually going on, for the most part, is patterns of 0's and 1's shifting at the end of numbers. These correspond to ratios such as 3301/3299 (essentially, this means take the rightmost prime and move it "up" one). I also see: take the leftmost 0 (unused prime) and move it "down" one (so p_i^1 p_{i+1}^0 internally becomes p_i^0 p_{i+1}^1).

I definitely see repeating patterns both on the left (prefixes of values > 1) and on the right (zeros and ones). I bet I can get this down to less than a half byte per value . . .

On Wed, Nov 16, 2016 at 5:13 PM, Tomas Rokicki <rokicki@gmail.com> wrote:
[...]
It feels like that is about the best you are going to be able to do without a seriously deep insight into the nature of the function.

Though we start running into Kolmogorov/philosophical issues of what "compression" actually means. In a sense, the program you used to produce the million values to start with is a "compression" of the table, with the Java compiler and JVM being the decompressor.

The eponymous Marcus Hutter of the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) was clever about this: you have to submit a Linux or Windows executable that can run without additional input, which writes the target text on stdout (or equivalent). It's the size of the executable that they want to minimize. (There is some other lawyering in the rules as well.)

On Wed, Nov 16, 2016 at 8:40 PM, Tomas Rokicki <rokicki@gmail.com> wrote:
[...]
* Tomas Rokicki <rokicki@gmail.com> [Nov 17. 2016 16:47]:
[...]
My approach (back-ref and multiplier) gives, for 1M entries, a compressed file of size 142 KiB. I used xz for compression, which seems to be the best amongst gzip, bzip2, and xz.

I suspect this method will have the file grow only slightly worse than linearly with the number of entries.

Best regards, jj
participants (6)
- Allan Wechsler
- Fred Lunnon
- Jeffrey Shallit
- Joerg Arndt
- rcs@xmission.com
- Tomas Rokicki