Omer Angel, Ander Holroyd, James Martin and I have proved a result that I'm 90% sure isn't new, but we haven't been able to find a reference.

Suppose we have discrete probability distributions pi_1, pi_2, ... on some (not necessarily finite) set S. Then it is possible to choose a sequence s_1, s_2, ... in S such that for every k >= 1 and every s in S, the count #{i : 1 <= i <= k and s_i = s} lies within 1 of pi_1(s) + pi_2(s) + ... + pi_k(s).

Note that pi_1(s) + pi_2(s) + ... + pi_k(s) is exactly the expected value of #{i : 1 <= i <= k and s_i = s} if we were to choose each s_i randomly in accordance with pi_i for all i <= k. So the theorem says we can make a deterministic choice that keeps the counts #{i : 1 <= i <= k and s_i = s} within 1 of their expected values under random choice, and achieves this simultaneously for all k >= 1 and all s in S.

If you like, you can take pi_1 = pi_2 = ... = some fixed distribution pi; this special case is no easier than the general result (which is a little tricky, but not in the end very deep).

Has anyone seen this before? It seems like the sort of thing operations researchers would have discovered decades ago.

Thanks,
Jim
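
P.S. In case it helps to make the statement concrete, here is a small Python sketch of the natural greedy rule: at each step k, pick the element whose count most lags its accumulated probability pi_1(s) + ... + pi_k(s), and track the worst deviation |count - accumulated probability| seen so far. I'm not claiming this greedy rule is the construction in our proof, or that it achieves the bound of 1 in general; it's just meant to illustrate the quantity the theorem controls. The function names and the random test data are mine.

    from collections import defaultdict
    import random

    def greedy_sequence(dists):
        """dists: an iterable of dicts mapping elements of S to probabilities.

        Greedily picks, at each step, the element whose count most lags its
        accumulated probability, and returns the chosen sequence together with
        the worst deviation |count - accumulated probability| seen over all
        steps k and all elements s.
        """
        cum = defaultdict(float)   # pi_1(s) + ... + pi_k(s)
        count = defaultdict(int)   # #{i <= k : s_i = s}
        seq, worst = [], 0.0
        for pi_k in dists:
            for s, p in pi_k.items():
                cum[s] += p
            # pick the element with the largest deficit cum[s] - count[s]
            s_k = max(cum, key=lambda s: cum[s] - count[s])
            count[s_k] += 1
            seq.append(s_k)
            worst = max(worst, max(abs(count[s] - cum[s]) for s in cum))
        return seq, worst

    if __name__ == "__main__":
        random.seed(0)
        S = ["a", "b", "c", "d"]

        def random_dist():
            # a random probability distribution on S, just for testing
            w = [random.random() for _ in S]
            t = sum(w)
            return {s: x / t for s, x in zip(S, w)}

        seq, worst = greedy_sequence(random_dist() for _ in range(1000))
        print("first 20 picks:", seq[:20])
        print("worst deviation over all k and s:", worst)

One small observation about why the greedy step always has a sensible choice: just before the k-th pick, the deficits cum[s] - count[s] sum over s to k - (k-1) = 1, so the maximum deficit is always positive.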