On Fri, Oct 4, 2019 at 1:39 PM James Propp <jamespropp@gmail.com> wrote:
I don’t see why this will work. If there’s a tie for success-rate-furthest-from-0.5, moving one of them toward .5 might move the other farther away.
Yes. And if the first one moves A closer and B further away, then B will be further, and the next guess will move B closer. Suppose it moves A further away, which feels like the worst case. Then we're back in a tie like the one we started with. Repeat this for two million more guesses, and each of A and B has a million more right and a million more wrong, and both of their averages are now much closer to .5 than they used to be. On the other hand, if sometimes we make a guess that moves both A and B closer to .5, that seems even better, and like it will make both ratios converge to .5 faster. This isn't a formal proof, but do you see the intuition here?

In fact, the pusher-chooser argument is exactly the solution to this problem. With N predictors, after k bits, a point in R^N (actually in Z^N) gives the excess of right over wrong guesses for each of the N predictors. The direction of the pusher always has all its coordinates +1 or -1, because predicting a 1 next will increase the excess for some predictors and decrease it for others, and predicting a -1 will have the exact opposite effect; so you have a choice of adding either a certain vector of length sqrt(N), or its negative, to the vector that tracks the excesses. Using the greedy algorithm of choosing the direction that keeps things closest to the origin, as shown in the reference you gave, means that after k steps the distance to the origin is sqrt(kN) or less, so every coordinate is sqrt(kN) or less in absolute value. If a given coordinate has value x after k steps, then that predictor has gotten (k + x)/2 guesses right, so its accuracy ratio is (k/2 + x/2) / k = 1/2 + x/(2k). Since x = O(sqrt k) (N being constant), this approaches 1/2 as k approaches infinity. QED.

My suggested algorithm was to choose the direction at each step by minimizing the sup norm rather than the L2 norm. I think this also works, though the convergence is slower.

Andy
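For concreteness, the greedy L2 strategy described above can be sketched in Python. The representation of the predictors as functions from the bit history to a guess in {+1, -1} is an assumption of this sketch, not something fixed by the argument itself:

```python
def choose_bits(predictors, k):
    """Emit k bits (as +1/-1), greedily keeping the vector of
    right-minus-wrong excesses as close to the origin (L2 norm)
    as possible."""
    n = len(predictors)
    excess = [0] * n        # excess[i] = (# right) - (# wrong) for predictor i
    history = []
    for _ in range(k):
        preds = [p(history) for p in predictors]

        # Emitting bit b changes excess[i] by +1 if predictor i guessed b,
        # and by -1 otherwise; compare the squared L2 norms of the two options.
        def sq_norm(b):
            return sum((e + (1 if p == b else -1)) ** 2
                       for e, p in zip(excess, preds))

        b = 1 if sq_norm(1) <= sq_norm(-1) else -1
        excess = [e + (1 if p == b else -1) for e, p in zip(excess, preds)]
        history.append(b)
    return history, excess
```

With the two constant predictors (always +1, always -1), the greedy choice alternates and every excess stays within 1 of zero, so both success rates converge to 1/2, matching the sqrt(kN) bound in the argument.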
Jim Propp
On Thu, Oct 3, 2019 at 1:24 PM Brent Meeker via math-fun < math-fun@mailman.xmission.com> wrote:
Having produced a finite sequence of bits, to choose the next bit Player B runs all n algorithms and chooses the next bit value that will move the algorithm whose success rate is furthest from 0.5 toward 0.5.
Brent
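Brent's rule could be sketched roughly as follows. Representing each algorithm as a function from the bit history to a predicted bit in {0, 1}, and maintaining the count of correct guesses externally, are assumptions of this sketch:

```python
def next_bit(algorithms, history, right_counts):
    """Pick the next bit so that the algorithm whose success rate is
    currently furthest from 0.5 is moved toward 0.5: made to guess
    wrong if it is above 0.5, right if it is below."""
    k = len(history)
    # Success rate so far for each algorithm (0.5 before any history).
    rates = [r / k if k else 0.5 for r in right_counts]
    # Index of the algorithm whose rate is furthest from 0.5.
    worst = max(range(len(algorithms)), key=lambda i: abs(rates[i] - 0.5))
    guess = algorithms[worst](history)
    # If that algorithm is running above 0.5, contradict its guess.
    return (1 - guess) if rates[worst] > 0.5 else guess
```

As Jim's objection above notes, a tie for furthest-from-0.5 means this rule can help one algorithm while hurting another, which is what the pusher-chooser analysis addresses.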
On 10/3/2019 4:43 AM, James Propp wrote:
Define a bit-guessing algorithm to be any algorithm that, given a finite string of bits as input, predicts what the next bit will be; define the bit-guessing game as an adversarial game in which Player A chooses a bit-guessing algorithm, Player B chooses an infinite sequence of bits, and the score is the asymptotic density (maybe I should say upper density or lower density?) of the set of n's for which A's algorithm correctly predicts B's nth bit.
Question: If Player B is given complete knowledge of the innards of a million bit-guessing algorithms, can B construct an infinite sequence of bits whose score lies between .49 and .51 for *all* of those algorithms? Of course by "a million" I mean "n", and by .49 and .51 I mean one-half minus epsilon and one-half plus epsilon, with suitable quantifiers.
I want a (nonrandom) construction, not an existence proof.
We’ll allow B to consult an oracle that will answer questions about those million bit-guessing algorithms (e.g., the oracle will reveal the truth-status of the Riemann Hypothesis if that’s germane).
This is probably an old problem, but I’m not sure how to search for it.
Jim Propp

_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
-- Andy.Latto@pobox.com