From: Warren Smith <warren.wds@gmail.com>
To: math-fun@mailman.xmission.com
Sent: Sunday, November 13, 2011 3:21 PM
Subject: [math-fun] How to generate an infinite number of true-random bits in hardware
How to generate true random bits in hardware
Warren D. Smith, Nov 2011
METHOD [I invented this at least 10 years ago]:
0. Let x be an analog electrical signal with -1 < x < 1 volts, stored in "sample & hold" circuit #1.
1. Compute the analog signal y = 2*x*x-1 and feed it into sample & hold #2.
2. Compute the analog signal x = 2*y*y-1 and feed it into sample & hold #1.
3. return to step 1 (continue forever).
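The loop above can be sketched in software. This is only a simulation, not the circuit: the electrical noise is modeled as a tiny random perturbation injected each step (the noise amplitude, seed, and clamping are illustrative assumptions, not part of the original design).

```python
import random

def rng_bits(n_bits, noise=1e-12, seed=1):
    """Simulate the iteration x -> f(x) with f(x) = 2*x*x - 1,
    emitting one sign bit per f-iteration.
    A tiny random perturbation models the electrical noise that
    the chaotic map amplifies up to the sign bit."""
    rnd = random.Random(seed)
    x = rnd.uniform(-1.0, 1.0)              # initial sample-and-hold voltage
    bits = []
    for _ in range(n_bits):
        x = 2.0 * x * x - 1.0               # one f-iteration
        x += noise * rnd.uniform(-1.0, 1.0) # modeled electrical noise
        x = max(-1.0, min(1.0, x))          # keep x in range, as a real circuit must
        bits.append(1 if x >= 0 else 0)     # the output bit is sign(x)
    return bits

print(sum(rng_bits(10000)) / 10000)  # fraction of ones, near 0.5
```

The invariant density of f on [-1,1] is symmetric about 0, so the sign bits come out balanced in the long run.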
Instead of the function f(x)=2*x*x-1, it is also possible to employ the function g(x)=2|x|-1; however, the corner would then need to be sharp, which probably is not realistic for a fast electrical circuit. g(x) is computable exactly by certain circuits made of ideal op-amps and diodes. f(x) can be computed from essentially ANY nonlinear electrical element with the aid of op-amps.
The random bits are: sign(x) and sign(y) after each update of x and y. With perfect computation of f(x) you will get true-random uncorrelated 50-50 coin toss bits, 1 bit per f- or g-iteration.
With somewhat imperfect computation of f(x) some of that randomness is lost, so it is then better to output only sign(y) as the random bits and discard the x's.
WHY THIS WORKS: With g(x), the iteration is a self-map of [-1,1] with |slope|=2 everywhere, hence tiny electrical noise will get multiplied by 2^N after N iterations of g. This is, essentially, the left-shift map on the binary representation of a real number, hence we get 1 new bit of "randomness" shifted up from the low-significance bits per iteration.
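The factor-of-2 amplification under g can be checked directly. This sketch (starting value and perturbation size chosen for illustration) tracks two "voltages" separated by tiny noise; the gap doubles per iteration:

```python
def g(x):
    """Tent-like map with |slope| = 2 everywhere on [-1, 1]."""
    return 2.0 * abs(x) - 1.0

x, y = 0.3, 0.3 + 1e-10      # two voltages differing by tiny "noise"
for _ in range(10):
    x, y = g(x), g(y)        # each step multiplies the gap by 2

print(abs(x - y))            # about 2**10 * 1e-10 = 1.024e-07
```

After roughly 50-60 doublings a perturbation at the thermal-noise scale reaches the full signal range, which is exactly what makes the sign bit unpredictable.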
With f(x), the variable change x=cos(t) means the iteration f(x) is just doubling t (mod 2pi). This is the shift map on t/pi, exactly this time (and not merely "essentially" the shift map).
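The substitution x = cos(t) works because of the double-angle identity cos(2t) = 2*cos(t)^2 - 1, so N iterations of f send cos(t) to cos(2^N * t). A quick numeric check of the identity:

```python
import math

# f(cos t) = 2*cos(t)**2 - 1 should equal cos(2t) for every t
for t in [0.1, 0.7, 1.9, 2.5, 3.0]:
    x = math.cos(t)
    assert abs((2 * x * x - 1) - math.cos(2 * t)) < 1e-12
print("f(cos t) == cos 2t verified")
```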
With approximate f(x) the doubling factor might sometimes be only 1.9 (say), and then we get slightly less than 1 bit of true randomness per iteration, but we are still safely getting more than 1 bit per 2 iterations. I would prefer the bits be post-processed by digital algorithms to eliminate statistical problems caused by such imperfect manufacturing.
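The original does not specify which digital post-processing to use; von Neumann's extractor is one simple, classical choice (assumed here for illustration). It removes any constant bias from independent bits, at the cost of discarding part of the stream:

```python
def von_neumann(bits):
    """Von Neumann extractor: map bit pairs 01 -> 0, 10 -> 1,
    discard 00 and 11. Removes constant bias if input bits
    are independent."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann([0, 1, 1, 1, 1, 0, 0, 0]))  # -> [0, 1]
```

Note that this extractor assumes independent input bits; bits correlated by imperfect f would need a stronger conditioner (e.g. a cryptographic hash over blocks).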
This is a fancy way to amplify noise. The problems with analog multipliers (inaccuracy and small bandwidth) can be avoided by just amplifying the noise directly, with AC coupling so as not to rail the amplifier. Also, placing the sample-and-hold within the feedback loop makes the system sensitive to S/H offsets. In addition, what prevents the voltage from exceeding the +-1 volt range, and thus locking the circuit into a fixed state?

Yet another hardware random bit generator, one that directly uses quantum randomness, consists of an LED and two photodetectors capable of detecting single photons. One detector generates the zeros, the other the ones. Such detectors are not cheap, and it is helpful to use the shortest-wavelength LED you can get.

None of these proposed methods is totally free of bias in the random output bits, and I don't believe any physical process can generate bits that are precisely 50/50.

-- Gene