This suggests a game where I am given a sequence of short vectors (which I know ahead of time) and I get to choose whether to add or subtract each successive vector, with the goal of keeping the running total in some bounded region.

Oh, and now I remember where I’ve seen this before! It’s Joel Spencer’s pusher-versus-chooser game: https://books.google.com/books?id=1fPRQS_zqukC&pg=PA32&lpg=PA32&dq=pusher+ch...

But the theorem given in the book is for the L-2 norm, whereas what I’m asking about is for the L-infinity norm. I’ll ask Joel.

Jim Propp

On Thu, Oct 3, 2019 at 11:45 AM Andy Latto <andy.latto@pobox.com> wrote:
Consider the following constructive algorithm for determining the nth bit, given the choices already made for the first n-1 bits:
Compute the prediction accuracy of each of the million predictors on the first n-1 bits, and see which one has an accuracy furthest from .5 (fix an ordering of the predictors in advance, and break ties by taking the earliest predictor in that order). Choose the nth bit to make the accuracy of this predictor (very slightly) closer to .5.
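A rough sketch of this greedy rule in Python (the predictors here are hypothetical stand-ins, modeled as functions from the prefix chosen so far to a predicted next bit; recomputing accuracies from scratch each step is wasteful but keeps the sketch short):

```python
def greedy_next_bit(prefix, predictors):
    """Pick the next bit so that the predictor whose accuracy on the
    prefix is furthest from .5 moves slightly back toward .5."""
    n = len(prefix)

    def accuracy(p):
        # Accuracy of predictor p on the prefix chosen so far.
        if n == 0:
            return 0.5
        return sum(p(prefix[:i]) == prefix[i] for i in range(n)) / n

    # Predictor furthest from .5; earliest index wins ties.
    worst = max(range(len(predictors)),
                key=lambda j: (abs(accuracy(predictors[j]) - 0.5), -j))
    acc = accuracy(predictors[worst])
    guess = predictors[worst](prefix)
    # Running hot (accuracy >= .5): make it wrong.  Running cold: make it right.
    return 1 - guess if acc >= 0.5 else guess
```

Against, say, the two constant predictors (always 0, always 1), this rule just produces an alternating sequence, so both accuracies sit at .5.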
My intuition says that this greedy algorithm works for all epsilon, but I haven't been able to prove it (except for a single predictor, where the proof is trivial).
In the case of two predictors, it seems like the worst case is that at some point one predictor is at (say) .550001 and the other is at .45, and from then on, any choice that makes the high predictor's accuracy lower also makes the low predictor's accuracy higher, so that you can't move one toward .5 without moving the other away from .5. But that just means that from that point on, both predictors are always making the same predictions. Then the greedy algorithm will alternate making the predictors wrong and right, and both will converge to .5, despite the bad things that happened at the beginning.
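A quick numerical sanity check of that endgame (with made-up starting numbers): suppose that after 100,000 steps one predictor's accuracy is .55 and the other's is .45, and from then on they always issue the same prediction, which the sequence-builder alternately makes right and wrong:

```python
# Hypothetical starting point: after N steps the two predictors are at
# .55 and .45, and from here on they always predict the same bit.
N = 100_000
hits_hi, hits_lo, n = int(0.55 * N), int(0.45 * N), N

# Alternately make the shared prediction right and wrong.
for step in range(1_000_000):
    right = (step % 2 == 0)
    hits_hi += right
    hits_lo += right
    n += 1

print(hits_hi / n, hits_lo / n)  # both within .005 of .5
```

The early imbalance gets diluted: each extra 2k alternating steps adds k correct guesses to both counts, so both accuracies are squeezed toward .5.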
I haven't yet been able to make this intuition into a formal proof, but it convinces me, at least for two predictors, and probably for N.
Andy
On Thu, Oct 3, 2019 at 10:18 AM James Propp <jamespropp@gmail.com> wrote:
Good point. That strategy works not just against a million adversaries but against all of them at once!
Can anyone come up with something a bit more constructive?
Maybe instead of trying to trounce a million adversaries, we could think about trying to defeat just two.
Jim Propp
On Thu, Oct 3, 2019 at 9:40 AM Mike Stay <metaweta@gmail.com> wrote:
I want a (nonrandom) construction, not an existence proof.
I'm interpreting "construction" as metamathematical, not limiting the answer to constructive mathematics, and I'm interpreting this as "don't choose a bit string at random", not "don't use algorithmic randomness".
Chaitin's Omega number, like any algorithmically random real, has the property that the shortest program that outputs the first n bits of Omega has length greater than or equal to n - c, where c depends only on the choice of prefix-free universal Turing machine. In the limit of large n, none of the bit predictors can do better than chance.
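In symbols (writing K for prefix Kolmogorov complexity and Omega_1...Omega_n for the first n bits), the property Mike cites is:

```latex
K(\Omega_1 \Omega_2 \cdots \Omega_n) \;\ge\; n - c \qquad \text{for all } n,
```

where c depends only on the chosen prefix-free universal machine.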
On Thu, Oct 3, 2019 at 5:44 AM James Propp <jamespropp@gmail.com> wrote:
Define a bit-guessing algorithm to be any algorithm that, given a finite string of bits as input, predicts what the next bit will be; define the bit-guessing game as an adversarial game in which Player A chooses a bit-guessing algorithm, Player B chooses an infinite sequence of bits, and the score is the asymptotic density (maybe I should say upper density or lower density?) of the set of n’s for which A’s algorithm correctly predicts B’s nth bit.
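To make the scoring concrete, here is a sketch of the score restricted to a finite prefix (the true score is the limiting density as n grows; `algo` is a hypothetical bit-guessing algorithm, a function from a prefix to a predicted next bit):

```python
def prefix_score(bits, algo):
    """Fraction of positions n at which algo, fed bits[:n],
    correctly predicts bits[n]."""
    hits = sum(algo(bits[:n]) == bits[n] for n in range(len(bits)))
    return hits / len(bits)
```

For example, a guesser that predicts bit n to be n mod 2 scores 1.0 on the sequence 0,1,0,1,..., while an always-zero guesser scores .5 on it.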
Question: If Player B is given complete knowledge of the innards of a million bit-guessing algorithms, can B construct an infinite sequence of bits whose score lies between .49 and .51 for *all* of those algorithms? Of course by “a million” I mean “n”, and by .49 and .51 I mean one-half minus epsilon and one-half plus epsilon, with suitable quantifiers.
I want a (nonrandom) construction, not an existence proof.
We’ll allow B to consult an oracle that will answer questions about those million bit-guessing algorithms (e.g., the oracle will reveal the truth-status of the Riemann Hypothesis if that’s germane).
This is probably an old problem, but I’m not sure how to search for it.
Jim Propp

_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
--
Mike Stay - metaweta@gmail.com
http://math.ucr.edu/~mike
https://reperiendi.wordpress.com
-- Andy.Latto@pobox.com