Re: [math-fun] Could change of base (binary -> ternary) speed up computation?
Dan Asimov <dasimov@earthlink.net> wrote:
Since useful quanputers might be a long ways off, maybe some other stunt could speed up current computation as we know it.
If trits could be stored, retrieved, and copied as easily as bits, there would be less *space* needed by a factor of log_c(2)/log_c(3) = .6309+, where c = zeta(3).
It's hard to tell with text whether someone is joking. I assume you know that log(a)/log(b) always has the same value regardless of the base of the log. For instance, I was recently debating with myself whether to express a number as log2(10), 1/log10(2), or as log(10)/log(2). In the last case, I don't have to worry about the base. I'd think you'd also know that changing the radix of a computer may change its speed by some constant, but wouldn't change the big-O category of any algorithm. Replacing it with a quantum computer would, at least for a few special algorithms, such as Shor's. I've read that the Russians once had a computer that was based on balanced ternary. As for space, that's true if a trit takes the same space to store as a bit, but why should that be?
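The base-independence point is easy to check numerically. A quick illustrative Python sketch (the variable names are mine, not from the thread):

```python
import math

# log(a)/log(b) is the same in any base: it always equals log_b(a).
r_natural = math.log(2) / math.log(3)     # natural log
r_base10 = math.log10(2) / math.log10(3)  # common log
r_base2 = math.log2(2) / math.log2(3)     # binary log
print(round(r_natural, 4))  # 0.6309

# The three spellings of the same constant from the paragraph above:
a = math.log2(10)
b = 1 / math.log10(2)
c = math.log(10) / math.log(2)
print(round(a, 6), round(b, 6), round(c, 6))  # all the same number
```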
Perhaps someone knowledgeable can estimate how much trits could actually speed up computation. Surely this has been much studied by now.
I can't see why it would speed it up at all, or increase the precision, so long as the word size in bit-equivalents remained the same. Computation is generally done one word at a time. A decimal computer would be slightly faster if you want the results in decimal. (I assume few people want results in ternary.) And it also avoids some rounding errors that binary doesn't, since it can exactly represent more fractions. But to maximize that, you'd want base 30, base 210, or some other primorial. Of course a binary (or ternary, etc.) computer can be programmed to emulate any other radix, at the cost of some speed and some space. Or to do exact rational calculations by storing everything as integer numerators and denominators.
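The exact-rational idea is straightforward to demonstrate with Python's `fractions` module; the `terminates` helper below is my own illustrative sketch of the terminating-expansion criterion, not a standard-library function:

```python
import math
from fractions import Fraction

# Binary floating point cannot represent 1/10 exactly:
print(0.1 + 0.2 == 0.3)  # False

# Storing everything as integer numerator/denominator makes it exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

def terminates(frac, base):
    """True iff frac has a finite expansion in the given base, i.e. every
    prime factor of the reduced denominator also divides the base."""
    d = frac.denominator
    g = math.gcd(d, base)
    while g > 1:
        while d % g == 0:
            d //= g
        g = math.gcd(d, base)
    return d == 1

# Base 30 = 2*3*5 (a primorial) terminates for more fractions than base 2:
print(terminates(Fraction(1, 10), 2))   # False
print(terminates(Fraction(1, 10), 30))  # True
print(terminates(Fraction(1, 6), 30))   # True
```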
But the practical question is: is there a practical way to build trit-based processors?
Certainly. But fewer states are always easier, and two is the fewest you can have, since one state isn't good for anything. Similarly, life anywhere in the universe ought to evolve to have two sexes, not more, despite that novel your uncle wrote where they had three.
Would it be easier if they could be as large as a room, or larger?
Easier for what? Easier for hand assembly of individual transistors? Sure, for that you'd want components that are a comfortable size to handle. And it would result in a computer with the power of an iPhone filling, not just a room, but a large building. And the speed-of-light delay means that it would run very slowly by today's standards. Tom Knight <tk@mit.edu> wrote:
Binary is optimal for information storage and transmission. This is because for a fixed amount of noise (from either environment or fabrication issues) binary requires only a single distance between states, whereas higher radix storage requires n-1.
That doesn't sound right to me. Only the most primitive modems used on-off keying or two-tone frequency shift keying. Information theory says that the most efficient way to use bandwidth involves having a vast number of subtly different states. Of course each state can encode multiple bits, which can be unpacked at the receiving end.
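Keith's point here is the standard capacity argument. A small Python sketch (the phone-line bandwidth and SNR figures are illustrative round numbers):

```python
import math

# Each symbol of an M-point constellation carries log2(M) bits:
for m in (2, 4, 16, 64, 1024):
    print(m, "states ->", math.log2(m), "bits/symbol")

def capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz phone line at ~30 dB SNR (S/N = 1000) supports roughly 30 kbit/s,
# which is why late modems packed many bits into each subtle signal state:
print(round(capacity(3000, 1000)))
```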
Energy dissipation is now the main issue for most computation. The way forward is not higher radices, but reversible low-dissipation computing, which for reasons I don't understand, no one is taking seriously.
Maybe because we're still many orders of magnitude away from the kT log(2) limit. Reversible computation is only useful for breaking that limit. Also it generally involves computing very slowly. By the time we've approached that limit, it might be practical to do most of our computation far from the sun, where T is much lower. (Running a computer in an air conditioned room doesn't really gain you anything.) Dyson claims that we, or rather our cybernetic descendants or uploads, ought to be able to live forever in an expanding universe by running slower and slower, colder and colder, doing infinite computation in infinite time using finite total energy. Critics claim that that won't be possible since clocks can't be made arbitrarily slow due to quantum limits, or because the background temperature of space isn't asymptotic to absolute zero due to Unruh radiation or other effects.
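For scale, the kT log(2) limit (the Landauer bound) is easy to compute directly. An illustrative Python sketch (the choice of temperatures is mine):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temperature_k):
    """Minimum energy to erase one bit: k*T*ln(2)."""
    return K_B * temperature_k * math.log(2)

# Room temperature vs. deep space near the cosmic microwave background (~2.7 K):
room = landauer_limit(300.0)   # roughly 3e-21 J per bit erased
space = landauer_limit(2.7)
print(room / space)  # the gain is just the temperature ratio, ~111x
```

This is why air conditioning buys nothing: the limit scales linearly with T, so shaving a few kelvin off room temperature is negligible next to moving to a 2.7 K environment.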
Dan Asimov <dasimov@earthlink.net> wrote: Tom Knight <tk@mit.edu> wrote:
Binary is optimal for information storage and transmission. This is because for a fixed amount of noise (from either environment or fabrication issues) binary requires only a single distance between states, whereas higher radix storage requires n-1.
That doesn't sound right to me. Only the most primitive modems used on-off keying or two-tone frequency shift keying. Information theory says that the most efficient way to use bandwidth involves having a vast number of subtly different states. Of course each state can encode multiple bits, which can be unpacked at the receiving end.
There is a huge difference between signaling over a telephone line with a modem and signaling between two computer components. The phone line is a high signal-to-noise environment with very limited bandwidth. The computer chip is a low signal-to-noise environment with very high bandwidth. You could employ modem-like techniques, but the cost in latency and complexity would be very high, and the benefit quite low. It's far easier to improve the channel than to optimize coded bit translation across it. And with those constraints, binary is still best.
Energy dissipation is now the main issue for most computation. The way forward is not higher radices, but reversible low-dissipation computing, which for reasons I don't understand, no one is taking seriously.
Maybe because we're still many orders of magnitude away from the kT log(2) limit. Reversible computation is only useful for breaking that limit. Also it generally involves computing very slowly.
This has little to do with the kT log(2) limit. Energy can be recovered from macroscopic signalling with inductive power recovery. Watts of it. And the speed limits are only applicable to capacitive-only circuits, not to inductive/capacitive ones.
By the time we've approached that limit, it might be practical to do most of our computation far from the sun, where T is much lower. (Running a computer in an air conditioned room doesn't really gain you anything.) Dyson claims that we, or rather our cybernetic descendants or uploads, ought to be able to live forever in an expanding universe by running slower and slower, colder and colder, doing infinite computation in infinite time using finite total energy. Critics claim that that won't be possible since clocks can't be made arbitrarily slow due to quantum limits, or because the background temperature of space isn't asymptotic to absolute zero due to Unruh radiation or other effects.
I’ll let others worry about arbitrarily slow clocks. I like ones running at multiple GHz.
Some thoughts from the physics side of the house.

1) If the multiple levels are represented by physical quantities like charge, voltage, etc., the energy tends to go as the square, so stacking bits increases energy faster than it increases information. (The way to beat this is to use multiple independent degrees of freedom, but if you thought ternary was complicated, try to build a computer with multiple independent degrees of freedom.)

2) Low temperature sounds great, but energy tends to go down with temperature much more slowly than heat transfer does. Low-temperature computers are really hard to cool.

3) Practical implementation of adiabatic computation is limited by the fact that there are no perfect switches at finite temperature. To work, adiabatic schemes have to slosh energy from one reservoir to another with some kind of switch. At finite temperature, these switches have "soft thresholds" that depend on T -- think of a diode where I = Io*exp(qV/kT). The result is that the switches have an effective finite voltage drop when they are supposed to be in the "on" state. Practically, this is 0.3 to 0.6 V at room temperature -- tough to win big when CMOS can operate at voltages just above this with far less complexity.

--R

On Sun, Jan 6, 2019 at 6:56 PM Tom Knight <tk@mit.edu> wrote:
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
On Jan 7, 2019, at 12:32 PM, Richard Howard <rich@richardehoward.com> wrote:
Some thoughts from the physics side of the house.
1) If the multiple levels are represented by physical quantities like charge, voltage, etc., the energy tends to go as the square, so stacking bits increases energy faster than it increases information. (the way to beat this is to use multiple independent degrees of freedom, but if you thought ternary was complicated, try to build a computer with multiple independent degrees of freedom)
Agree. This is the point I tried (but obviously failed) to make.
2) Low temperature sounds great, but energy tends to go down with temperature much slower than heat transfer. Low temperature computers are really hard to cool.
Agreed.
3) Practical implementation of adiabatic computation is limited by the fact that there are no perfect switches at finite temperature. To work, adiabatic schemes have to slosh energy from one reservoir to another with some kind of switch. At finite temperature, these switches have "soft thresholds" that depend on T -- think of a diode where I = Io*exp(qV/kT). The result is that the switches have an effective finite voltage drop when they are supposed to be in the "on" state. Practically, this is 0.3 to 0.6 V at room temperature -- tough to win big when CMOS can operate at voltages just above this with far less complexity.
I don’t agree with this. CMOS gates do not exhibit a source/drain voltage drop when on. They do require a gate voltage of the magnitude you suggest, but this is provided by the previous state of the computer, and the energy required to produce it is recoverable by reversing the computation.
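For concreteness, the "soft threshold" in Richard's point 3 is set by the thermal voltage kT/q. A quick numerical sketch of the ideal diode law he quotes (the saturation current I0 is an illustrative value of mine):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
Q = 1.602176634e-19  # elementary charge, C

def diode_current(v, i0, temperature_k=300.0):
    """Ideal diode law I = I0 * exp(qV/kT); the -1 term is negligible for V >> kT/q."""
    return i0 * math.exp(Q * v / (K_B * temperature_k))

# The thermal voltage kT/q sets the softness of the threshold:
vt = K_B * 300.0 / Q
print(round(vt * 1000, 1), "mV")  # about 26 mV at 300 K

# Current rises roughly 10x for every ~60 mV of forward bias:
ratio = diode_current(0.36, 1e-12) / diode_current(0.30, 1e-12)
print(round(ratio, 1))
```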
There was an adiabatic computing effort with some really smart folks in the early 1990s. They even made chips. Nobody could find a way to move the charge without the "softness" of diodes getting in the way. Ultimately it was dropped, and the direction became just voltage reduction on standard circuits. Anybody who has a way to move charge around without loss can revive the field.

This is sort of like the Maxwell's Demon arguments -- got to count the metabolic cost of the imp... Google "John Denker adiabatic computing" for a review in "Proceedings of 1994 IEEE Symposium on Low Power Electronics".

On Mon, Jan 7, 2019 at 1:09 PM Tom Knight <tk@mit.edu> wrote:
participants (3)
- Keith F. Lynch
- Richard Howard
- Tom Knight