[math-fun] The Case Against Quantum Computing
In the IEEE Spectrum last week, Mikhail Dyakonov presented his overview of the field: https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-comput...
Dyakonov is taking a terrible risk with his assertion that quantum computers are *impossible*; history isn't kind to those who make such bold claims and are proven incorrect shortly thereafter. His article is interesting, but his question is deep, and the article is far too short to even mention the basic issues.

Before we start discussing *quantum* computing, we need to take a short tour of *digital* computing, which works extremely well, as the computer I'm typing on adequately demonstrates. Digital computing was also historically presented as living in a *continuous*, *classical* universe, with its normal complement of uncountably infinite states (thus partially disposing of one of Dyakonov's arguments against quantum computing); we'll ignore for the moment the *ultraviolet catastrophe* (the necessary consequence of a classical continuous universe), which was one of the seeds of the quantum revolution.

The whole point of digital computing is to define/declare certain "regions" of the (continuous) phase space of a system as "0"'s and other regions as "1"'s, with the remainder of phase space declared "undefined"/"should never happen". A digital computing element is then the inductive step: its inputs are "0"'s and "1"'s as defined above, and it is guaranteed to produce outputs which are close to "pristine" versions of "0"'s and "1"'s. Thus, a simple digital identity function will map the *entire* "0" input domain into a *small subset* of the "0" output domain; similarly, it will map the *entire* "1" input domain into a *small subset* of the "1" output domain; it will *never* map a "0" to a "1", nor a "1" to a "0". If the computing element is given "undefined"/"should never happen" values as inputs, it is free to produce "0", "1", "undefined"/"should never happen", or even *nothing at all* -- i.e., the computing element dies or goes into some meta-stable state.

By induction, then, if the digital system starts in a proper state consisting entirely of "0"'s and "1"'s, and if each digital computing element performs as advertised, then the digital system will transition from {"0","1"} states into {"0","1"} states, and will never enter an "undefined"/"should not happen" state nor fail completely. Thus, a crucial function of *every* digital logic element is to "canonicalize" (aka "square up") its less-than-pristine inputs into very pristine outputs, so that the *margins* of the signals are preserved, and none of the intermediate signals ever approaches "closely" to the outer boundaries of "0"'s or "1"'s.

***It is this "squaring up" process that enables digital computing to engage in 10^12 - 10^15 serialized steps with overall probability >1/2 of NO ERRORS.*** The ability to conduct such a huge amount of computation with essentially no errors is an astounding feat, not fully grasped by those born after 1950. [We note that *error-correcting codes* can further improve the error-free performance of digital systems, but the main source of the error-resistance of digital systems lies in their "canonicalizing"/"squaring up" abilities.]

The main squaring-up ("amplifying") element in early digital circuits wasn't the identity gate, but the NOT gate, which converts an input "0" to an output "1" and vice versa.
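To make the "squaring up" claim a bit more concrete, here is a tiny toy simulation -- purely an illustrative sketch, not anything from Dyakonov's article; the gain of 20 and the noise level are arbitrary choices, and for simplicity it uses a non-inverting "squaring up" buffer rather than a NOT gate (a pair of NOT gates would behave the same way). It also computes the per-step error probability a computation must meet if 10^12 - 10^15 serialized steps are to succeed with probability > 1/2.

# Toy sketch: why "squaring up" lets digital logic survive astronomically
# long chains of operations, while a linear (analog-style) chain cannot.

import math
import random

def canonicalize(x):
    """Idealized non-inverting buffer: a steep saturating transfer function
    that pushes anything below 0.5 toward 0 and anything above 0.5 toward 1."""
    return 1.0 / (1.0 + math.exp(-20.0 * (x - 0.5)))

def noisy_chain(stages, noise_sigma, squaring_up):
    """Send a '1' through `stages` gates, each adding Gaussian noise.
    With squaring_up=False the stages are plain unity buffers, so the noise
    simply accumulates as a random walk."""
    x = 1.0
    for _ in range(stages):
        if squaring_up:
            x = canonicalize(x)
        x += random.gauss(0.0, noise_sigma)
    return x

random.seed(1)
print("digital-style chain (10^6 stages):", noisy_chain(10**6, 0.05, True))
print("analog-style chain  (10^6 stages):", noisy_chain(10**6, 0.05, False))

# Per-step error budget: if each of N serialized steps independently fails
# with probability p, the whole run is error-free with probability
# (1 - p)**N, which exceeds 1/2 when p < ln(2)/N (approximately).
for N in (10**12, 10**15):
    print(f"N = {N:.0e}: per-step error rate must be below {math.log(2)/N:.1e}")

The point is only qualitative: the saturating transfer function discards the accumulated noise at every stage, so the signal stays within its margin, while the analog-style chain lets the noise random-walk without bound. The required per-step error rate (roughly 7 x 10^-16 for 10^15 steps) looks absurd, yet squaring up delivers it routinely.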
The fact that the NOT gate *inverts* its input is confusing, but also inessential, so early digital designers developed techniques to *relabel* "0"'s as "1"'s and vice versa so they could ignore this messy detail.

Let us now consider digital circuits from a *thermodynamic* point of view. Since *thermodynamics* is the flip side of *information theory* (the difference being a negative constant), it is essential to gain some understanding of digital circuits as a form of *heat engine*. To a first approximation, a digital amplifier/inverter is a *refrigerator*, which keeps something cold while rejecting *waste heat* to the environment. What does this inverter keep cold? What is "cold"? One possible definition of "cold" is *closeness to pristine*; i.e., the *information embedded in the error* (the vector distance from a "pristine" signal) is at a minimum. Thus, an inverter *squeezes out the error bits of information* from the output signal and rejects those error bits to the environment as *heat*. The output of the inverter has fewer error bits than its input. So our refrigerator inverter performs a continual process of extracting heat/error from the digital signal and thus keeping it cold/pristine.

Most refrigerators have two parts: an active, heat-engine part, and a passive *insulating blanket* which keeps previously rejected heat from re-entering the cold interior of the refrigerator. A vacuum bottle (Thermos) can be an excellent choice for an insulating boundary, but even within this vacuum, measures must be taken to prevent heat from *radiating* into the inner, colder container by means of photons. To make a long story short, digital inverters make the *best refrigerators in the known universe*, as the effective temperature (from a statistical-thermodynamic point of view) of the output signals (degrees of freedom) is close enough to zero to enable those 10^12 - 10^15 computation steps without a single error.

Bottom line takeaway: error-free digital computation requires incredibly good refrigerators, but they don't have to measure "temperature" in the same way that your home thermostat measures temperature.

---

So let's move on to quantum computation. The first problem we have in building a "quantum refrigerator" is the *Aharonov-Bohm* effect. To make a long story short, it basically says that there is no such thing as a "quantum insulator": a quantum device can "feel" the effect of other nearby quantum devices, and thus these nearby quantum devices can *introduce errors* -- i.e., *heat* -- into our quantum-refrigerator wannabe. Thus, *all* of the efforts at building quantum computers to date have been attempts to get as close to absolute zero as possible. Not only is this supercold refrigeration incredibly expensive, the temperatures achieved are still *nowhere near as cold as the effective temperature of a classical digital computing device*.

Well, if we can't build an effective insulator, perhaps we can build an extremely good refrigeration engine that can still keep up with the amount of inflowing heat? Here I will make an analogy with a fiber laser. What is a laser? What is a fiber laser? A laser is traditionally a one-dimensional waveguide with *mirrors* at both ends which perform 2 tasks:

* reflect the energy back into the interval between the 2 mirrors; and
* define the wavelength of the electromagnetic wave we're trying to construct.
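Before continuing with the laser analogy, a quick numerical aside on the "effective temperature" claim above. This is my own back-of-the-envelope toy model -- treating a bit error as thermal activation over an energy barrier E, with probability roughly exp(-E/kT) per operation -- and not anything from Dyakonov's article; the 1 eV noise margin used below is purely illustrative.

# Back-of-envelope sketch: how big an energy barrier must protect a bit
# if bit errors are modeled as thermal activation, p_error ~ exp(-E/(k*T)),
# and we want to meet the ~7e-16 per-step budget computed earlier.

import math

k_eV = 8.617e-5                    # Boltzmann constant, eV per kelvin
T = 300.0                          # room temperature, K
p_budget = math.log(2) / 1e15      # per-step error budget for 10^15 steps

# Barrier needed to meet the budget at room temperature: E = kT * ln(1/p).
E_needed = k_eV * T * math.log(1.0 / p_budget)
print(f"barrier needed at 300 K: {E_needed:.2f} eV  (= {math.log(1/p_budget):.0f} kT)")

# Error rate delivered by an illustrative 1 eV noise margin at 300 K.
E_margin = 1.0
p_thermal = math.exp(-E_margin / (k_eV * T))
print(f"thermal flip probability with a 1 eV margin: {p_thermal:.1e}")

In this toy model, ordinary room-temperature logic with roughly 1 eV of noise margin already sits inside the 10^15-step error budget: relative to the energy scale protecting the bit, kT is tiny, which is the sense in which the encoded signal behaves as if it were far colder than its physical surroundings.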
There is a close analogy between a laser and a 2-stage digital ring-oscillator: the digital ring-oscillator consists of 2 cross-coupled amplifiers in a positive feedback loop; the amplifiers reflect the signal back into the system, while the *delay* of the amplifiers sets the *period* of the oscillator. Both the laser and the digital ring-oscillator are also *refrigerators*, because they reject all signals except for the ones that respect the reflectors and the period/wavelength. This is why the output of a laser is so "pure". A fiber laser is a laser constructed from a fiber-optic waveguide and 2 mirrors, but this time the mirrors are *Bragg reflectors*, which utilize *diffraction* to reflect the frequencies/wavelengths of interest. BTW, building such Bragg mirrors is not difficult -- simply etch the diffraction pattern into the fiber!

So here's my leap of faith for a quantum computer: we may be able to utilize the ideas behind *quantum error-correcting codes* to construct Bragg-like reflectors which should have the capability of wringing the errors/heat out of a qubit. If such "reflectors" are efficient enough, our qubits might be able to operate at room temperature.

---

Other issues not touched upon in Dyakonov's article:

* Unlike classical digital computing elements, quantum computing elements are all *reversible*, so any computation must take a very specific form investigated by Feynman, Fredkin, Bennett, Toffoli, etc. In reversible computation, the heat-producing operation is *erasure*, since every bit which is erased must exit the system. Although enormous sums have been spent on quantum computational theory, too little has been spent on better understanding *reversible* computation, and in particular, the efficient conversion of irreversible computations into reversible ones. (A small sketch of reversible logic appears after this list.)

* No-cloning theorem: the most basic operation in a classical digital computer is *copying* a value from one register to another. Due to the no-cloning theorem, this operation is impossible in a quantum computer. In computer software, we have developed logics and type theories for values that can't be copied; these logics are called "linear logics", and "monads" allow the expression of linear objects -- e.g., I/O channels -- in functional programming languages such as Haskell. Once again, too little has been spent on better understanding these "linear logics" and their relationship with quantum computation. (A numerical sketch of no-cloning also appears after this list.)

* "Collapse of the wave function": It isn't 100% clear in most discussions of quantum computation what's "in" and what's "out" when the boundary is drawn around the quantum computer. There are several reasons for this. One is the Aharonov-Bohm effect mentioned above, which basically makes it difficult -- if not impossible -- to make a precise separation between the quantum computer and the rest of the world. The second is related to the first: are our human sense organs and brains quantum-mechanically entangled with the quantum computation itself? In a quantum -- i.e., reversible -- world, there can be no "collapse of the wave function", because that would entail the destruction of information, which is not possible. However, it should be possible to *sort* information into separate channels (but somehow still remember where it came from, so that reversibility is preserved).

* The history of quantum computation is remarkably similar to the history of classical *analog* computation -- particularly *linear analog* computation.
For approximately a century, engineers attempted to build linear analog devices with greater and greater precision, but were never able to achieve more than a handful of digits of accuracy and reproducibility. No matter how careful the design and implementation, linear analog circuits could never achieve any meaningful depth, as each level introduced additional errors and "distortion" which could not be economically eliminated. It was the frustration of this century of effort that led to the incredibly quick adoption of *digital circuits*, even though a single analog element might have to be replaced by 10/100/1000 digital computing elements for the "same" computation. However, since 1000 digital computing elements could be utilized *without an explosion of error*, and since each of these digital computing elements was *much simpler* and *much cheaper* and *much more reliable* than their analog analogues (I've been wanting to use that phrase for a very long time!), the digital signal processing revolution was born, and the past 70 years have seen the almost complete elimination of analog circuitry from every device and every curriculum.

In short, the QC folks are intent on repeating *all* of the mistakes of the analog computer designers.
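Here is the promised sketch for the reversible-computation bullet above -- my own illustration, not taken from Feynman, Fredkin, Bennett, or Toffoli. The Toffoli (controlled-controlled-NOT) gate is its own inverse, and with its target initialized to 0 it computes the AND of the two controls; the price of reversibility is that the inputs are carried along as "garbage" until they can be uncomputed, so nothing is ever erased.

# Sketch: the Toffoli gate as a reversible AND.
# Toffoli maps (a, b, c) -> (a, b, c XOR (a AND b)); applying it twice
# restores the original triple, so no bit is erased and no Landauer heat
# has to be paid for the logic itself.

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        a1, b1, and_ab = toffoli(a, b, 0)            # target starts at 0 => c' = a AND b
        assert (a1, b1) == (a, b)                    # inputs carried along, not destroyed
        assert toffoli(a1, b1, and_ab) == (a, b, 0)  # self-inverse: uncomputes cleanly
        print(f"AND({a},{b}) = {and_ab}")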
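And the promised no-cloning sketch, again my own illustration. If a single linear (unitary) map could copy arbitrary qubit states into a blank register, then fixing its action on |0> and |1> -- where copying is just classical fan-out -- already determines, by linearity, what it does to a superposition; and what it produces is an entangled state, not two independent copies.

# Sketch: linearity alone forbids a universal quantum copier.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# One linear map that copies the basis states into a blank |0> register is
# CNOT (control = first qubit): |0>|0> -> |0>|0>, |1>|0> -> |1>|1>.
# Linearity then fixes its action on superpositions.
# Basis order for the 4-vectors: |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

blank = ket0                                   # fresh register to copy into
attempted_copy = CNOT @ np.kron(plus, blank)   # what linearity forces
true_copy = np.kron(plus, plus)                # what genuine cloning would require

print("linear 'cloner' output:", np.round(attempted_copy, 3))  # Bell state (|00>+|11>)/sqrt(2)
print("two real copies      :", np.round(true_copy, 3))
print("same state?          :", np.allclose(attempted_copy, true_copy))  # False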
At 12:56 PM 11/21/2018, Hans Havermann wrote:
In the IEEE Spectrum last week, Mikhail Dyakonov presented his overview of the field:
https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-comput...
Another rebuttal: https://www.hpcwire.com/2019/01/09/the-case-against-the-case-against-quantum...

On Wed, Nov 21, 2018 at 1:57 PM Hans Havermann <gladhobo@bell.net> wrote:
In the IEEE Spectrum last week, Mikhail Dyakonov presented his overview of the field:
https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-comput...
--
Mike Stay - metaweta@gmail.com
http://math.ucr.edu/~mike
https://reperiendi.wordpress.com
participants (4):
- Eugene Salamin
- Hans Havermann
- Henry Baker
- Mike Stay