Re: [math-fun] The Case Against Quantum Computing
I am a complete ignoramus about quantum computing, so this article roughly doubled my knowledge of it. I will say this: That article by Mikhail Dyakonov is *extremely* clear and well-written.

—Dan

Hans Havermann wrote:
----- In the IEEE Spectrum last week, Mikhail Dyakonov presented his overview of the field: https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-comput... -----
Me too. I think the article is quite interesting; its argument, roughly, is that there are too many parameters to control at the atomic level, the system is too complex, and so it won't work.

Best regards,
Simon Plouffe
Dyakonov seems to overlook Feynman's point that Nature does the computations of how quantum systems like amino acid chains behave, and the behavior is quite consistent in spite of having 10^3000 variables. Holevo's theorem shows that an n-qubit quantum computer cannot yield more than n bits of information, so in a sense those 2^n amplitudes are almost all going to cancel out when you look for the answer.

I suggest reading Scott Aaronson's blog. He has discussed the problems of quantum computing many times, and he has written a very nice book, "Quantum Computing Since Democritus".

Brent
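For reference, the standard statement of the Holevo bound (not spelled out in the message above): for an ensemble {p_x, rho_x} with average state rho = sum_x p_x rho_x, the information any measurement outcome Y can carry about x satisfies

    I(X:Y) \le \chi = S(\rho) - \sum_x p_x S(\rho_x) \le S(\rho) \le n

for an n-qubit system, since the von Neumann entropy of n qubits is at most n. So at most n classical bits come out, however large the 2^n-dimensional state space is.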
I think Dyakonov's article is misleading. It contains claims such as:

"""
So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!
"""

which ignores the fact that one of the earliest things considered was how to cope with approximation errors. For instance, a popular choice of metric is to define the 'error' between a unitary U and an approximation U' as:

    max { arccos(Uv . U'v) : v is a unit vector }

at which point it follows that errors are subadditive, i.e. the error of approximating a composition UV by U'V' is at most the sum of the errors introduced in each of the two approximations. The same subadditivity holds for tensor products, so introducing more qubits doesn't cause the approximation error of a single gate to increase. (See the numerical sketch below.)

In fact, this reasoning predates the Solovay-Kitaev theorem, which shows that with a limited set of 'basic gates' you can approximate any given unitary to within an error of epsilon using at most polylog(1/epsilon) of your basic gates.

Peter Shor claims that his algorithm will work when the gate approximation errors are on the order of 10^-4:

https://mathoverflow.net/questions/302492/on-mathematical-arguments-against-...

(Note that this is a different kind of error from the discrete 'suddenly a qubit gets messed up' errors that are addressed by the theory of fault tolerance. So it's probably better to use distinct terms, such as 'inexactitude' and 'fault', that draw this distinction more clearly.)

In general, the crux of the article seems to be Objection 4 on Scott Aaronson's (very well-written and argued) list of popular objections:

https://www.scottaaronson.com/democritus/lec14.html

Every argument in Dyakonov's article seems to also apply to the probabilistic classical computing model. In order to be a valid criticism against quantum computers, it would need to explain why unitary matrices are inherently different from doubly stochastic matrices.

Best wishes,

Adam P. Goucher
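A quick numerical check of the subadditivity claim. This is an illustrative sketch, not from Adam's post: for convenience it uses the operator-norm distance ||U - U'|| (the largest singular value of the difference), a closely related metric that agrees with the angle metric above to leading order for small errors and is subadditive by the triangle inequality plus unitary invariance.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_unitary(n):
        # QR of a complex Gaussian matrix, with column phases fixed,
        # gives a Haar-random unitary
        z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        q, r = np.linalg.qr(z)
        d = np.diag(r)
        return q * (d / np.abs(d))

    def small_perturbation(n, eps):
        # exp(i*eps*H) for a random Hermitian H of unit operator norm
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        h = (a + a.conj().T) / 2
        h *= eps / np.linalg.norm(h, 2)
        w, v = np.linalg.eigh(h)
        return (v * np.exp(1j * w)) @ v.conj().T

    def err(U, Ua):
        # operator-norm distance between the exact and approximate gates
        return np.linalg.norm(U - Ua, 2)

    n = 4  # two qubits
    U, V = random_unitary(n), random_unitary(n)
    Ua = small_perturbation(n, 1e-3) @ U
    Va = small_perturbation(n, 2e-3) @ V

    # composition: error of U'V' vs UV is at most err(U,U') + err(V,V')
    print(err(U @ V, Ua @ Va) <= err(U, Ua) + err(V, Va))                # True

    # tensor product: same bound, so adding qubits doesn't amplify the error
    print(err(np.kron(U, V), np.kron(Ua, Va)) <= err(U, Ua) + err(V, Va))  # True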
On Nov 21, 2018, at 3:05 PM, Adam P. Goucher <apgoucher@gmx.com> wrote:
Every argument in Dyakonov's article seems to also apply to the probabilistic classical computing model. In order to be a valid criticism against quantum computers, it would need to explain why unitary matrices are inherently different from doubly stochastic matrices.
I agree. This point is often lost (and Scott Aaronson is right to emphasize it): the "state" of a classical probabilistic computer with n bits is also a 2^n-dimensional vector.

- Cris
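To make the analogy concrete, a minimal sketch (mine, not Cris's; plain numpy): the state of a 3-bit probabilistic computer is a length-8 probability vector, and a noisy gate is a doubly stochastic matrix acting on it, exactly parallel to a unitary acting on 8 complex amplitudes.

    import numpy as np

    n = 3
    p = np.zeros(2**n)
    p[0] = 1.0                      # start in state 000 with certainty

    flip = np.array([[0.1, 0.9],    # "noisy NOT": flips a bit w.p. 0.9
                     [0.9, 0.1]])   # a doubly stochastic 2x2 matrix
    I2 = np.eye(2)

    # act on the first bit: an 8x8 doubly stochastic matrix
    G = np.kron(flip, np.kron(I2, I2))
    p = G @ p
    print(p.sum())                  # 1.0 -- total probability is conserved

    # Swap in complex amplitudes and a unitary (e.g. the Hadamard gate)
    # and the bookkeeping is identical; only the matrix group changes.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    a = np.zeros(2**n, dtype=complex); a[0] = 1.0
    a = np.kron(H, np.kron(I2, I2)) @ a
    print(np.vdot(a, a).real)       # 1.0 -- the norm is conserved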
Side comment: in the Scott Aaronson reference there's the statement that it's possible to build a pseudorandom number generator from *any* one-way function, which can be traced to here:

https://www.scottaaronson.com/democritus/lec8.html
http://citeseer.ist.psu.edu/340126.html

The latter also states that if the one-way function is a permutation, then the proof is much easier and due to Yao in 1982.

Andres.
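For the permutation case, the construction is short enough to sketch. This is my illustration, not from the cited paper, and it uses the modern textbook form (a Goldreich-Levin hardcore bit rather than Yao's original argument); the function f is an assumed one-way permutation that a real instantiation would have to supply.

    def inner_bit(x: int, r: int) -> int:
        # Goldreich-Levin hardcore bit: parity of the bitwise AND of x and r
        return bin(x & r).count("1") & 1

    def prg(x: int, r: int, f) -> tuple:
        # G(x, r) = (f(x), r, <x, r>): stretches 2n truly random bits into
        # 2n + 1 pseudorandom bits, assuming f is a one-way permutation.
        return f(x), r, inner_bit(x, r)

Iterating the idea (feed f(x) back in and keep emitting hardcore bits) yields arbitrary stretch, which is the Blum-Micali construction.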
Dyakonov sounds to me like this guy:

https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirvana-185306

Speaking of Aaronson, he's already written a reply to one of Dyakonov's earlier articles saying the same thing:

https://www.scottaaronson.com/blog/?p=1211

Regarding "Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!", I see that on the arXiv there are 2215 papers on quantum error correction and 1567 on topological quantum computation (where the outcome depends only on the topology of braided anyons, which relatively large perturbations won't change).
--
Mike Stay - metaweta@gmail.com
http://math.ucr.edu/~mike
https://reperiendi.wordpress.com
Dyakonov writes: "With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!"

This reminds me of Reagan's quote that there is no word in Russian for freedom...

It's true that Hilbert space is high-dimensional, and an n-qubit system is specified by 2^n complex amplitudes. But as Brent quotes Feynman, nature has no problem keeping track of them all. And it's not true that quantum algorithms require us to control or observe this many variables: we only observe (squares of) linear combinations of them.

Noise and precision get discussed in quantum computing quite a bit. The "fidelity" of a quantum gate is the inner product between the resulting state and the intended one. Typical gate fidelities now are 0.99, which roughly means we can do 100 quantum gates before the calculation fails. It's true that we need to get much closer to 1 before we can do quantum error correction and sustain computations of arbitrary length, but I think this is an engineering problem, not a fundamental one.

In the meantime, we already have devices which are pushing beyond our ability to simulate or predict them classically. These devices fall far short of implementing, e.g., Shor's factoring algorithm, but they are radically new physical objects that deserve study in their own right.

- Cris
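A back-of-the-envelope check of the "100 gates at fidelity 0.99" rule of thumb (my sketch, using the crude model that per-gate fidelities simply multiply):

    F = 0.99                            # per-gate fidelity
    for k in (10, 50, 100, 300):
        print(k, round(F ** k, 3))      # overall fidelity after k gates
    # 10 0.904, 50 0.605, 100 0.366, 300 0.049:
    # the computation degrades on a scale of about 1/(1 - F) = 100 gates.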
participants (7):
- Adam P. Goucher
- Andres Valloud
- Brent Meeker
- Cris Moore
- Dan Asimov
- Mike Stay
- Simon Plouffe