You are correct. After additional Googling, I found SABL ("Sense Amplifier Based Logic"), by UCLA's Tiri, et al. (2002). In addition to constant static power consumption, SABL provides "equal" dynamic power consumption, regardless of bit patterns. "Equal" here doesn't mean constant, hence it is better than ECL; however, there is a mean constant-sized current spike every clock cycle, during which essentially every gate and wire changes state. SABL costs ~2x in size (wires, gates), perhaps ~2x in speed, and at least ~4x in power consumption over "standard" CMOS.

I'm coming around to a "new" (?) idea: in addition to an extensive on-chip clock distribution network, future secure chips may require an equally extensive on-chip *randomness distribution network* to guarantee a source of randomness for every gate and every wire. Such a distribution network may require 1-2 additional wiring layers, but I doubt it will double the number of layers.

Simply randomizing the clock itself doesn't do much good if someone can access the clock to trigger logging. (Most digital systems already use some form of "spread spectrum" PRNG clocking in order not to cause RFI and run afoul of the FCC.)

Asynchronous systems may not have synchronized clocks, but they do tend to do roughly the same thing when faced with the same computation, so merely asynchronizing circuitry won't help a lot unless one also actively adds additional randomness in some way. Also, asynchronous circuits try to always run at top speed, which maximizes dynamic power consumption and electromagnetic emissions, so free-running asynchronous logic is probably not what is wanted for these applications.

At 08:15 PM 5/19/2018, Tomas Rokicki wrote:
Just having inverse signals doesn't solve the problem. CMOS power consumption comes primarily from *switching* from 0 to 1 or 1 to 0 (except for leakage, of course).
So to make this truly work, you'd need logic such that for every 0-to-1 transition on one gate, there's a lack of transition on a nearby gate.
-tom
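The switching-power point above is usually captured with a toy model -- a standard assumption in the side-channel literature, not something claimed in this thread -- in which dynamic power per cycle is roughly proportional to the Hamming distance between successive bus states:

```python
# Toy CMOS dynamic-power model (a common side-channel assumption):
# energy per cycle is proportional to the number of wires that toggle,
# since only 0->1 and 1->0 transitions charge or discharge the wire
# capacitance; a wire holding its value costs ~nothing dynamically.

def transitions(prev: int, curr: int) -> int:
    """Hamming distance between two bus states = number of toggled wires."""
    return bin(prev ^ curr).count("1")
```

A bus that holds its value draws essentially no dynamic power, while one that flips every bit draws the maximum -- which is exactly why bit *changes*, not bit *values*, are what leak here.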
On Sat, May 19, 2018 at 5:59 PM, Henry Baker <hbaker1@pipeline.com> wrote:
First step, combinational logic:
"Differential signaling": for every signal wire A, there's an equal and opposite signal wire (not A).
For every gate, there's a paired dual gate: AND <-> OR; NOT <-> NOT.
This blows up every wire into a pair of wires, and every gate into a pair of gates.
In a perfect (quasi static) world, the total power would remain the same regardless of the parity of any of the input signals.
This mirrored logic is a kind of "super CMOS".
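A minimal sketch of this mirrored (dual-rail) logic, assuming each signal travels as the pair (A, not-A) and the gate duals follow De Morgan -- the complement rail of an AND is an OR of the complement rails, and NOT is just a rail swap:

```python
# Dual-rail ("differential") logic sketch: every signal is carried as the
# pair (A, not A), so exactly one wire of each pair is high at all times.

def encode(bit: int) -> tuple:
    """Encode a single bit as the dual-rail pair (A, not A)."""
    return (bit, bit ^ 1)

def dr_and(x: tuple, y: tuple) -> tuple:
    """AND on the true rails; by De Morgan, OR on the complement rails."""
    a, na = x
    b, nb = y
    return (a & b, na | nb)

def dr_not(x: tuple) -> tuple:
    """NOT is free: just swap the two rails."""
    a, na = x
    return (na, a)

def weight(x: tuple) -> int:
    """Number of 1s on a wire pair -- always 1 for a valid encoding."""
    return x[0] + x[1]
```

Since every valid pair carries exactly one "1", the static Hamming weight of the whole circuit is independent of the data -- which is the quasi-static constant-power property claimed above.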
Yes, this circuitry is twice as expensive as regular circuitry, but it has additional benefits: power side-channel resistance and *noise resistance* -- including *radiation hardening*.
Remaining problems: bit *changes* are still glaringly obvious; deep levels of combinational logic could get out of *sync*, unless locally synchronized or clocked.
So far, this additional expense is easily doable, so I don't know why this mirroring couldn't be routine.
At 02:39 PM 5/16/2018, Henry Baker wrote:
Constant power might not be too cool, but may be unavoidable for certain calculations -- e.g., AES encryption/decryption. It would be an interesting theorem if constant power were the *only* solution.
Some sort of "tri-stating with memory" might allow lower power consumption for the vast majority of wires that won't be changing on every clock cycle. Having a wire stay at either "0" or "1" for long periods of time doesn't cost anything, but changing the state of a wire does cost the charging/discharging of the capacitance of the wire.
In this type of logic, the problem won't be "1's" or "0's", but transitions: "+1", "-1". So the game may be to have a double-width bus where half the bus gets XOR'd with <secret> and the other half gets XOR'd with not-<secret>, so that exactly length-<secret> bits change.
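One reading of the double-width-bus idea can be sketched as follows (the helper names are mine, and this is only an illustration): one half of the bus carries data XOR secret, the other half carries data XOR NOT-secret, so the two halves are bitwise complements and the combined pattern holds exactly N ones regardless of either the data or the secret:

```python
# Hypothetical sketch of the double-width masked bus. The two halves are
# bitwise complements of each other, so the combined Hamming weight is
# pinned at N for every (data, secret) combination.

N = 8
MASK = (1 << N) - 1

def double_bus(data: int, secret: int) -> tuple:
    """Return (data XOR secret, data XOR NOT-secret), each N bits wide."""
    return (data ^ secret) & MASK, (data ^ secret ^ MASK) & MASK

def ones(x: int) -> int:
    """Hamming weight of an integer."""
    return bin(x).count("1")
```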
I think it may be possible to build systems having both "secret" and "non-secret" bits; the only constraint is that the instantaneous power is independent of the secret bits -- i.e., the power computation function is a *tautology* of the secret bits.
This leads us to consider a circuit design system which outputs circuits both for the value to be computed, as well as another circuit which computes the power/energy consumed by the computation. It is up to the circuit design system to make sure that the two sets of circuits are in sync.
While we're at it, we can also output a circuit that computes the *timing* for the value-computing circuit. For many situations, this circuit may be trivial, but for others -- e.g., carry-propagation -- this circuit may not be trivial.
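The design-system idea above can be caricatured in a few lines: evaluate a toy gate-level netlist and, in the same pass, emit a "power" figure (the number of nodes that toggled since the previous cycle). The netlist format and all names here are invented purely for illustration:

```python
# Toy "two circuits in sync" sketch: one evaluation pass yields both the
# computed values and a crude power estimate (toggle count vs. the
# previous cycle's node values).

# netlist: node name -> (gate function, input node names);
# 'a' and 'b' are primary inputs; insertion order is topological.
NETLIST = {
    "n1": (lambda a, b: a & b, ("a", "b")),
    "n2": (lambda a, b: a ^ b, ("a", "b")),
    "out": (lambda x, y: x | y, ("n1", "n2")),
}

def step(inputs: dict, prev: dict) -> tuple:
    """Evaluate one cycle; return (node values, number of toggled nodes)."""
    state = dict(inputs)
    for name, (fn, args) in NETLIST.items():
        state[name] = fn(*(state[a] for a in args))
    toggles = sum(state[k] != prev.get(k, 0) for k in state)
    return state, toggles
```

A real version of the idea would emit the toggle-counting circuit as hardware alongside the value circuit, rather than simulate it -- but the point is the same: the power estimate is computed *from the same description*, so the two stay in sync by construction.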
At 01:47 PM 5/16/2018, Tom Knight wrote:
There are circuit topologies (as opposed to semiconductor technologies) that compute both the true and complement of every signal. To a very good first approximation, the power dissipated is constant in these circuits independent of the data flowing through them. The downside is that power dissipation is constant, but high. ECL logic is an early bipolar version, but similar topologies can be built with NMOS or CMOS technologies. It might be appropriate to use these topologies in security-sensitive applications.
On May 16, 2018, at 1:46 PM, Henry Baker <hbaker1@pipeline.com> wrote:
I've been thinking about the *power* side-channel: the ability to watch instantaneous power consumption to guess what a computer is computing.
Closely related: the chip temperature side-channel: the ability to watch instantaneous temperature distributions across a chip to guess what a computer is computing.
Note that simple power supply filtering doesn't work well enough: by watching enough of the computation, one may still be able to discern some amount of information.
Since many computers would like to keep confidential what they are computing, the question is raised:
**Are there computer arithmetic circuits which draw the same sequence of instantaneous power draws *regardless* of the numbers being computed or moved?**
For example, some computer circuit may draw slightly more power when a "1" appears on a bus instead of a "0". Under these conditions, it might make sense to drive the bus with both the number and its binary complement, in order to keep the power draw the same, no matter what bit pattern is being operated on.
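Under the usual weight/distance power models, the complement trick can be sketched to show both what it buys and what it doesn't (the encoding below is illustrative): the static Hamming weight of the 2N-wire bus is pinned at N, but the transition count between cycles is twice the Hamming distance between old and new values -- still data-dependent:

```python
# Illustrative: a 2N-wire bus carrying a value and its bitwise complement.
# Weight is constant (N ones on 2N wires), but when the value changes,
# 2 * HD(old, new) wires toggle -- so transitions can still leak.

N = 8
MASK = (1 << N) - 1

def paired(v: int) -> int:
    """Value in the low N bits, its complement in the high N bits."""
    return ((v ^ MASK) << N) | (v & MASK)

def ones(x: int) -> int:
    """Hamming weight of an integer."""
    return bin(x).count("1")
```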
Are there particular number representations and arithmetic circuits (or even *boolean circuits*) whose power consumption is indifferent ("oblivious") to the input bit patterns?
Note that CMOS typically utilizes both p-channel and n-channel MOSFETs in a complementary fashion. However, due to semiconductor physics, these transistors are not 100% complementary -- especially at high clock rates -- and therefore they don't provide as much obliviousness as one would like, so assume for this conversation that we might still have to mirror even CMOS gates.

--
http://cube20.org/
http://golly.sf.net/