Wed, 09 Apr 2003 13:20:40 -0400 Thomas Colthurst <tcolthur@bbn.com>

So I assume you know the old problem of "convert a sequence of biased coin flips (with unknown bias) into a (not necessarily same length) sequence of unbiased coin flips". I recently learned two interesting things about this problem: that it was apparently first posed (& of course solved) by Johnny von Neumann, and that it is used in certain Intel chips as part of a hardware random number generator. (See http://www.cryptography.com/resources/whitepapers/IntelRNG.pdf for details.)

...

I'm a little bit curious as to why Intel didn't implement a more efficient code. One guess is that they didn't know about them. Another is that the 2-bit code was good enough that wasting silicon on a more complicated code wasn't worth it. Still another possible reason is that the bias of the input coin flipper isn't exactly constant, but has (say) a small linear trend over time which is small enough to ignore over 2 bits but maybe not over longer lengths.

I recall hearing a talk by Dave Gifford on using this process to produce unbiased random bits from a hardware source. I think he wrote a paper on it, circa 1980. I think it was published as an MIT LCS Tech Report. It may have an answer to why 2 bits are preferable to n > 2 (if, in fact, 2 *is* better).
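
For readers who haven't seen the trick, here is a minimal sketch of von Neumann's 2-bit procedure in Python (the function name and the simulated bias of 0.7 are just illustrative, not anything from Intel's design):

```python
import random

def von_neumann_extract(biased_bits):
    """Von Neumann's 2-bit procedure: look at non-overlapping pairs of
    biased flips, output 1 for the pair (1, 0), output 0 for (0, 1),
    and discard (0, 0) and (1, 1).  If the flips are independent with
    a fixed (unknown) bias p, the two kept pairs each occur with
    probability p*(1-p), so every output bit is unbiased."""
    unbiased = []
    for i in range(0, len(biased_bits) - 1, 2):
        a, b = biased_bits[i], biased_bits[i + 1]
        if a != b:
            unbiased.append(a)  # (1,0) -> 1, (0,1) -> 0
    return unbiased

if __name__ == "__main__":
    # Simulate a biased coin (p = 0.7 chosen arbitrarily for illustration).
    flips = [1 if random.random() < 0.7 else 0 for _ in range(100000)]
    out = von_neumann_extract(flips)
    print(len(out), sum(out) / len(out))  # mean should come out near 0.5
```

Note that this scheme only yields about p*(1-p) output bits per input pair (at most 1/4 of the input length), which is what the "more efficient codes" mentioned above improve on by recycling the discarded pairs.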