Mike Stay wrote:
The question, of course, depends on the programming language.
Fix one, and then it'll make sense to play code golf. Otherwise, I can just define a language with a built-in operation that takes an arbitrary pair and outputs the primes, so (0, op) would be the smallest.

I disagree. First, that's not a question of programming language, but a question of mathematical operations. Second, the idea is to have a "universal" function which is not specific to the encoded sequence (here: the primes): everything specific to that sequence should be encoded in the initial value. The very same function should work as well (or at least reasonably well) for other strictly increasing positive integer sequences, of course with a different initial value, which must be computable (in principle, to any desired precision) for that function and the given sequence.

Of course, it can be expected that the success of one function vs. another will vary with the growth rate and perhaps other properties of the encoded integer sequence. But a function that always produces the primes independently of the initial value would be disqualified here.

Maybe that main idea is already enough to eliminate complicated, artificially hand-crafted functions. But just in case, we can also easily fix clearly defined classes of allowed functions:
- through the allowed operations (in all examples, the iterated function is constructed using only +, -, *, / and floor(.)),
- and by fixing a maximum overall complexity of the function, obtained by adding up a complexity value assigned to each allowed operation: e.g., complexity 1 for each floor(.); complexity 2 for each +, -, * or /; and perhaps larger values for other possibly allowed functions (exp, log, Gamma, ...).

It would be interesting to know whether more complex functions allow a more efficient encoding of (arbitrary, or a given type of) integer sequences. If so, we can consider "records" depending on the maximum complexity of the function.

Finally, it's also possible that various functions have their best performance in different ranges: e.g., one might be "ahead" (requiring less precision in the initial value) for producing the first 100 terms, another might be "better" for producing terms 200 to 500, and yet another beyond that. But even in that case, the question is interesting.

I think we can measure the "performance" of a function by looking at (minus the log of) the difference between the largest and smallest initial values that produce (a(1), ..., a(n)), even though that might not always be a monotonic function of n. That might yield a curve of roughly the same shape independently of the functions and/or encoded sequences (at least as long as these are roughly similar); we could check this already for the four examples at hand. If so, then we can compare performance by simply comparing the respective slopes. (That should be consistent with, if not equivalent to, the values from Leo, i.e., the number of correct terms for float64, i.e., epsilon = 2^-53.)

- Maximilian

On Wed, Dec 2, 2020 at 4:30 PM Leo Broukhis wrote:
Apropos https://www.youtube.com/watch?v=_gCKX6VMvmU
There are many ways to produce the sequence of primes by iterating a function. E.g., starting with f_1 = ContinuedFraction(2, 3, 5, 7, 11, 13, ...) = 2.3130367364335829... (http://oeis.org/A064442) and
f_n+1 = 1/(f_n - Floor(f_n))
the sequence of Floor(f_n) will be the sequence of primes.
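A quick way to sanity-check this iteration (my sketch, not from the original message) is to rebuild the continued fraction from a handful of primes using exact rationals, so that the finite truncation is the only source of error:

```python
from fractions import Fraction
from math import floor

# Truncated continued fraction [2; 3, 5, 7, ...] built back-to-front
# as an exact rational, standing in for the A064442 constant.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
x = Fraction(primes[-1])
for p in reversed(primes[:-1]):
    x = p + 1 / x

# Iterate f -> 1/(f - floor(f)); the integer parts reproduce the primes
# until the truncated tail of the continued fraction is reached.
f, out = x, []
for _ in range(8):
    out.append(floor(f))
    f = 1 / (f - floor(f))
print(out)  # [2, 3, 5, 7, 11, 13, 17, 19]
```

With floats instead of Fraction, rounding error grows at each reciprocal step, which is what limits the float64 version discussed below.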
Or, as in the clip, starting with the average of http://oeis.org/A053669 = 2.920050977316134712..., and f_n+1 = Floor(f_n) * (f_n - Floor(f_n) + 1) will do the same.
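For illustration (again my sketch, not part of the original message), running that recurrence in plain float64 from the digits above reproduces the first several primes before rounding error takes over:

```python
from math import floor

# Leading digits of the 2.92... constant from the clip, rounded to float64.
f = 2.920050977316134712

terms = []
for _ in range(10):
    p = floor(f)
    terms.append(p)
    f = p * (f - p + 1)  # f_n+1 = Floor(f_n) * (f_n - Floor(f_n) + 1)
print(terms)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Each step multiplies the accumulated error by roughly floor(f_n), which is why a float64 seed only survives a limited number of terms.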
Or the Blazys continued fraction, see http://oeis.org/A233588
The question is, which constant/function pair encodes primes in the most compact way. More precisely, given a denominator for a rational approximation, which function will correctly produce the most primes starting from the approximation of the corresponding constant?
For example, with the IEEE double (64-bit floating point) approximations of the constants, the continued fraction starts diverging at 29, the Blazys continued fraction at 47, and the 2.92... constant at 59.
I was able to come up with the function f_n+1 = Floor(f_n) + 1/(f_n - Floor(f_n)), which is the delta-coded continued fraction of the sequence of primes. It gives rise to f_1 = 2.7101020234300968374..., and given its IEEE double approximation, starts diverging at 67.
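As a sketch (assuming the 20 digits quoted above round to the same float64 value as the true constant), the iteration can be run directly:

```python
from math import floor

# Leading digits of the delta-coded continued-fraction constant above.
f = 2.7101020234300968374

floors = []
for _ in range(10):
    p = floor(f)
    floors.append(p)
    f = p + 1 / (f - p)  # f_n+1 = Floor(f_n) + 1/(f_n - Floor(f_n))
print(floors)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Here the reciprocal of the fractional part carries the gap to the next prime, which is added back onto the current one.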
Improvements are welcome.
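To make the interval-width performance measure proposed above concrete (my sketch, for the 2.92... recurrence only): once the floors are fixed to the primes, each step of f_n+1 = Floor(f_n) * (f_n - Floor(f_n) + 1) is affine and invertible, so the exact interval of seeds producing p_1, ..., p_n can be obtained by back-propagating the admissible final interval [p_n, p_n + 1) with rationals; minus its log2 is the number of bits the seed must carry.

```python
from fractions import Fraction
from math import log2

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

# Forward step: f_n+1 = p_n * (f_n - p_n + 1) with p_n = floor(f_n).
# Inverse step: f_n = f_n+1 / p_n + p_n - 1.  Back-propagating the
# interval [p_n, p_n + 1) of admissible final values yields the exact
# interval of seeds that produce p_1, ..., p_n.
lo, hi = Fraction(primes[-1]), Fraction(primes[-1] + 1)
for p in reversed(primes[:-1]):
    lo = lo / p + p - 1
    hi = hi / p + p - 1

width = hi - lo               # equals 1 / (p_1 * ... * p_{n-1})
bits = -log2(float(width))    # ~27.7 bits pin down the first 10 primes
print(bits)
```

Comparing how `bits` grows with n across the four functions in this thread would give the slopes Maximilian suggests comparing.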