Re: [math-fun] true random generators
Salamin's idea with 2 photodetectors requires adjustment to make them balanced, otherwise you'll get, say, 70% "1" bits and 30% "0" bits. E.g. an adjustable iris. Also, it is annoying to have to use funny components like light bulbs, which seem non-miniaturizable.

More generally, I do not like any design idea involving explicit enormous amplification factors, such as 1 photon amplified all the way up to a logic signal, or tiny noise amplified all the way up to a logic signal.

The Intel idea -- use a bistable static RAM bit-cell; turn it on, and it randomly assumes either a 0 or a 1 state -- also requires careful balancing at manufacturing time, otherwise you may get cells which turn on to "1" 99.99% of the time. It also might behave differently at different temperatures.

My idea -- iterating F(x), where F is an appropriate function computed by an analog circuit -- seems immune to temperature issues. Maybe in theory it is immune to balancing issues, but in practice it will not be; even without balancing, though, it still should be reasonably balanced. Although ultimately all its randomness comes from thermal and quantum noise, it involves only low amplification gains, like a factor of 2, which should enable higher-speed operation.

The Intel idea is the simplest and the fastest. Perhaps it can be "actively tuned," i.e. if it produces too many 1s, some controller circuit turns a knob to cure that. That, however, removes a goodly amount of its simplicity advantage.
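For intuition about the iterated-F(x) idea, here is a toy numerical model (an illustration only, not the actual proposed circuit): the doubling map x -> 2x mod 1 with a little additive noise injected each step. The per-step gain is only 2, but the injected noise is itself doubled on every subsequent step, so after a few dozen iterations the comparator bit depends entirely on the accumulated noise history. The rand()-based noise is just a stand-in for physical thermal noise.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    double x = 0.3;                            /* arbitrary initial state          */
    for (int i = 0; i < 64; i++) {
        /* stand-in for thermal/quantum noise, roughly one part per million */
        double noise = ((double)rand() / RAND_MAX - 0.5) * 1e-6;
        double y = 2.0 * x + noise;            /* the analog "gain of 2" stage     */
        int bit = (y >= 1.0);                  /* comparator produces one bit      */
        x = y - floor(y);                      /* wrap the state back into [0,1)   */
        putchar('0' + bit);
    }
    putchar('\n');
    return 0;
}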
You can just use the de-biasing trick where you take your binary stream (assumed to be i.i.d. but possibly biased):

0011111010110101001001101011011010...

and split it into pairs of bits:

00 11 11 10 10 11 01 01 00 10 01 10 10 11 01 10 10 ...

and apply the mapping:

'00' --> ''
'11' --> ''
'01' --> 'A'
'10' --> 'B'

to obtain the unbiased i.i.d. bitstream:

B B A A B A B B A B B ...

Best wishes,

Adam P. Goucher
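This pairing trick is the classical von Neumann extractor. For concreteness, a minimal C sketch; the raw hardware source is simulated here by a deliberately biased call to rand() (about 70% ones), standing in for a real device.

#include <stdio.h>
#include <stdlib.h>

/* Simulated raw source: i.i.d. but biased, roughly 70% ones.
   In a real design this would read the hardware bit stream. */
static int biased_bit(void) {
    return (rand() % 100) < 70;
}

int main(void) {
    int produced = 0;
    while (produced < 32) {
        int a = biased_bit();
        int b = biased_bit();
        if (a != b) {                     /* keep '01' and '10'; discard '00' and '11' */
            putchar(a ? 'B' : 'A');       /* same labels as the mapping above          */
            produced++;
        }
    }
    putchar('\n');
    return 0;
}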
Yes, but if it's extremely biased, you need a lot of input bits to get a few output bits.
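To put numbers on that (a quick back-of-the-envelope computation, not from the original thread): each input pair survives the pairing trick with probability 2p(1-p), so the expected cost is 1/(p(1-p)) input bits per unbiased output bit.

#include <stdio.h>

int main(void) {
    const double p[] = { 0.5, 0.6, 0.75, 0.99, 0.9999 };
    for (int i = 0; i < 5; i++) {
        /* expected input bits consumed per unbiased output bit */
        double cost = 1.0 / (p[i] * (1.0 - p[i]));
        printf("p = %.4f : about %.1f input bits per output bit\n", p[i], cost);
    }
    return 0;
}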
Actually nowadays, optoelectronics (light emitters and detectors) can be integrated onto a chip. On the other hand, if it's DIY, or at least all the circuitry is visible, you can be sure there's no hidden backdoor.

I don't think accurate balancing is needed. The entropy per bit is S = -p log p - (1-p) log(1-p). So S(50,50) = 0.693 and S(60,40) = 0.673, a fractional loss of 3%. And S(75,25) = 0.562, a loss of 19%. That can be compensated by correspondingly increasing the key length.

Beware of high-bandwidth analog amplifiers. With a teeny bit of capacitance between input and output, you have an oscillator.

Supposedly modern Intel CPUs already have a built-in RNG, partly physical, partly pseudo. Has anyone used it yet, or know how to access it?

-- Gene
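On that last question: the on-chip generator mentioned here is, as far as I know, exposed through the RDRAND instruction (and RDSEED on later parts). A minimal sketch, assuming GCC or Clang on an x86-64 CPU with RDRAND support, compiled with -mrdrnd; this is not the only way to reach it, just the most direct one.

#include <stdio.h>
#include <immintrin.h>     /* _rdrand64_step(); requires compiling with -mrdrnd */

int main(void) {
    unsigned long long r;
    for (int i = 0; i < 4; i++) {
        if (_rdrand64_step(&r))                  /* returns 1 when a value is ready */
            printf("%016llx\n", r);
        else
            fprintf(stderr, "RDRAND not ready; retry\n");
    }
    return 0;
}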
Gene, what does the entropy per bit tell us about the TRNG? (If the answer is contained in what you wrote, I'm too ignorant about RNGs to get it.) —Dan
If the entropy per bit is s, the total entropy of a sequence of N0 bits is S = N0 s. The number of distinct sequences, or the number of keys that need to be tried in a brute-force attack, is approximately N = exp(S). Knowing that provides no hint about how to choose the trial keys; it's just saying what a very clever cryptanalyst could do in principle.

Similarly, if S is the thermodynamic entropy of a physical system, the number of distinct quantum states that the system can be in, consistent with its thermodynamic state, is N = exp(S/k). When heat energy dQ is added to a system at temperature T, the entropy increase is dS = dQ/T, so that the units of entropy are energy/temperature. Boltzmann's constant is k = 1.38e-23 J/K.

Looking into tables of thermodynamic data, at 25 C and 1 atm pressure, the standard molar entropy of diamond is 2.377 J/(K mol). A 1 mole (12 g) diamond crystal, in equilibrium with its external environment, shuffles among 10^(7.5×10²²) quantum states. The standard molar entropy of uranium hexafluoride gas is 378 J/(K mol). A mole (352 g) of UF6 shuffles among 10^(1.2×10²⁵) quantum states.

-- Gene
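A quick check of those last two figures, using only the numbers quoted above: N = exp(S/k), i.e. 10^(S/(k ln 10)).

#include <stdio.h>
#include <math.h>

int main(void) {
    const double k = 1.38e-23;                      /* Boltzmann constant, J/K   */
    const double S_diamond = 2.377;                 /* standard molar entropy, J/(K mol) */
    const double S_uf6     = 378.0;                 /* J/(K mol)                 */
    printf("diamond: N = 10^(%.2e)\n", S_diamond / (k * log(10.0)));   /* ~7.5e22 */
    printf("UF6 gas: N = 10^(%.2e)\n", S_uf6     / (k * log(10.0)));   /* ~1.2e25 */
    return 0;
}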
It's worth noting that there is an interesting crypto problem buried in this discussion that is largely unappreciated. (A low-power random number generator would be a good start to a solution.)

One vision of the future has sensors sprinkled around the environment sending their measurements back to various decision-making and actuating nodes. Since batteries are not getting better (~1 eV/atom at best), these sensors have to be small and highly efficient. How do you do crypto, authentication, validation, etc. on a 4-, 8-, or maybe 16-bit processor with a severely limited energy budget and a short word length?

Already engineers have demonstrated that they can access the main network in a car by spoofing the unencrypted tire pressure sensors. This is just the start. There is room for some interesting algorithmic innovation here.

--Rich
Here's a company that is marketing true random number generators based on quantum effects, apparently fluctuations in the intensity of light.

http://www.whitewoodencryption.com/

-- Gene
I have to confess that it niggles me to hear hardware RNGs dubbed "true", given that any perceived advantage they might yield is based entirely on theoretical models of how reality behaves --- models which in the past have invariably been superseded as observations improve, and which in the case of quantum mechanics remain based upon questionable logical foundations.

But that's incidental to my purpose, which is to suggest another situation in which software generators seem unavoidable: low-discrepancy "QRNG"s for Monte Carlo integration, etc. --- see https://en.wikipedia.org/wiki/Low-discrepancy_sequence

Fred Lunnon
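Low-discrepancy sequences are easy to generate in software. A minimal sketch of the simplest one, the base-2 van der Corput sequence (reflect the binary digits of the index about the radix point), which is the one-dimensional building block for Halton sequences; purely illustrative, not tied to any particular library.

#include <stdio.h>

/* Base-2 van der Corput sequence: reflect the binary digits of n
   about the radix point, e.g. 6 = 110b -> 0.011b = 0.375. */
static double van_der_corput(unsigned n) {
    double x = 0.0, denom = 1.0;
    while (n) {
        denom *= 2.0;
        x += (double)(n & 1u) / denom;
        n >>= 1;
    }
    return x;
}

int main(void) {
    for (unsigned i = 1; i <= 8; i++)
        printf("%u -> %.4f\n", i, van_der_corput(i));
    return 0;
}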
participants (7)
- Adam P. Goucher
- Dan Asimov
- Eugene Salamin
- Fred Lunnon
- Mike Speciner
- Richard Howard
- Warren D Smith