Dyakonov sounds to me like this guy: https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirvana-185306

Speaking of Aaronson, he's already written a reply to one of Dyakonov's earlier articles saying the same thing: https://www.scottaaronson.com/blog/?p=1211

Regarding "Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!": I see that on the arXiv there are 2215 papers on quantum error correction and 1567 on topological quantum computation (where the outcome depends only on the topology of braided anyons, which relatively large perturbations won't change).

On Wed, Nov 21, 2018 at 3:23 PM Adam P. Goucher <apgoucher@gmx.com> wrote:
I think Dyakonov's article is misleading. It contains claims such as:
""" So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed! """
which ignores the fact that one of the earliest questions considered was precisely how to cope with approximation errors. For instance, a popular choice of metric defines the 'error' between a unitary U and an approximation U' as:
max {arccos(Uv . U'v) : v is a unit vector }
at which point it follows that errors are sub-additive, i.e. the error of approximating a composition UV with U'V' is at most the sum of the errors introduced in each of the two approximations.
The same subadditivity holds for tensor products as well, so introducing more qubits doesn't cause the approximation error of a single gate to increase.
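To make the bookkeeping concrete, here is a minimal numpy sketch (my own illustration, not anything from the fault-tolerance literature). For unitaries, the max over unit vectors in the metric above can be computed exactly as the arccos of the smallest eigenvalue of the Hermitian part of U^dagger U', so the script evaluates the error of two deliberately perturbed gates and prints the composition and tensor-product errors next to the sum of the individual errors:

import numpy as np

def gate_error(U, Uapprox):
    # Angle-style error from above: max over unit vectors v of
    # arccos(Re <U v, Uapprox v>).  For unitaries this equals the arccos of
    # the smallest eigenvalue of the Hermitian part of U^dagger Uapprox.
    W = U.conj().T @ Uapprox
    lam_min = np.linalg.eigvalsh((W + W.conj().T) / 2).min()
    return np.arccos(np.clip(lam_min, -1.0, 1.0))

def random_unitary(n, rng):
    # Roughly Haar-random unitary from the QR decomposition of a complex Gaussian.
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

def small_rotation(n, eps, rng):
    # A unitary close to the identity: exp(i * eps * H) for a random Hermitian H
    # normalised to unit spectral norm, so the introduced error is exactly eps.
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2
    H /= np.linalg.norm(H, ord=2)
    w, P = np.linalg.eigh(H)
    return (P * np.exp(1j * eps * w)) @ P.conj().T

rng = np.random.default_rng(0)
U, V = random_unitary(2, rng), random_unitary(2, rng)
Uerr = small_rotation(2, 1e-3, rng) @ U   # imprecise implementation of U
Verr = small_rotation(2, 2e-3, rng) @ V   # imprecise implementation of V

eU, eV = gate_error(U, Uerr), gate_error(V, Verr)
print("individual errors:       ", eU, eV)
print("sum of individual errors:", eU + eV)
print("composition U V:         ", gate_error(U @ V, Uerr @ Verr))
print("tensor product U (x) V:  ", gate_error(np.kron(U, V), np.kron(Uerr, Verr)))

Running it, the composed and tensored errors come out below the sum of the individual errors, which is all the subadditivity claim says.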
In fact, this reasoning predates the Solovay-Kitaev Theorem, which shows that with a fixed finite set of 'basic gates' (generating a dense subgroup of the unitaries) you can approximate any given unitary to within an error of epsilon using at most polylog(1/epsilon) of your basic gates.
Peter Shor claims that his algorithm will work when the gate approximation errors are on the order of 10^-4:
https://mathoverflow.net/questions/302492/on-mathematical-arguments-against-...
(Note that this is a different kind of error from the discrete 'suddenly a qubit gets messed up' errors that are addressed by the theory of fault-tolerance, so it is probably better to use distinct terms, such as 'inexactitude' and 'fault', to draw the distinction more clearly.)
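As a small worked example of the square-root-of-2 question (a back-of-the-envelope sketch of mine; the only number taken from elsewhere is Shor's 10^-4 figure): build the Hadamard gate, whose entries are +-1/sqrt(2), from each of Dyakonov's two truncations and measure the resulting per-gate inexactitude. Since a truncated matrix is no longer exactly unitary, the sketch uses the closely related operator-norm distance, max over unit v of ||Uv - U'v||, instead of the angle above:

import numpy as np

H_exact = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # exact Hadamard gate

for approx in (1.41, 1.41421356237):
    H_approx = np.array([[1, 1], [1, -1]]) / approx
    # ord=2 is the spectral norm, i.e. the max of ||(H_exact - H_approx) v||
    # over unit vectors v.
    err = np.linalg.norm(H_exact - H_approx, ord=2)
    print(f"sqrt(2) ~ {approx}: per-gate inexactitude ~ {err:.1e}")

On this crude measure, two decimal places give an inexactitude of a few times 10^-3, above the 10^-4 figure, while eleven decimal places give roughly 10^-12, far more precision than any single gate needs; the subadditivity bookkeeping above is exactly what turns such per-gate numbers into a requirement for a whole circuit.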
In general, the crux of the article seems to be Objection 4 on Scott Aaronson's (very well-written and argued) list of popular objections:
https://www.scottaaronson.com/democritus/lec14.html
Every argument in Dyakonov's article seems to apply equally to the probabilistic classical computing model. To be a valid criticism of quantum computers specifically, it would need to explain why unitary matrices are inherently different from doubly stochastic matrices.
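To spell out the analogy, here is a toy numpy sketch (mine, not from the article): a probabilistic classical bit is a probability vector pushed through stochastic matrices whose entries are continuously-valued parameters, and a qubit is the same picture with complex amplitudes and a unitary, so worries about a continuum of parameters apply equally to both.

import numpy as np

# Probabilistic classical bit: a probability vector evolved by a doubly
# stochastic matrix (rows and columns each sum to 1).
p = np.array([1.0, 0.0])                 # the bit is definitely 0
noisy_flip = np.array([[0.7, 0.3],
                       [0.3, 0.7]])
print(noisy_flip @ p)                     # -> [0.7, 0.3]

# Qubit: an amplitude vector evolved by a unitary matrix; measurement
# probabilities are the squared magnitudes of the amplitudes.
psi = np.array([1.0, 0.0])               # the qubit is in state |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.abs(hadamard @ psi) ** 2)        # -> [0.5, 0.5]

The stochastic entries 0.7 and 0.3 are just as 'continuous' as the amplitudes 1/sqrt(2), which is the point of the comparison.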
Best wishes,
Adam P. Goucher
Sent: Wednesday, November 21, 2018 at 9:20 PM
From: "Dan Asimov" <dasimov@earthlink.net>
To: math-fun <math-fun@mailman.xmission.com>
Subject: Re: [math-fun] The Case Against Quantum Computing
I am a complete ignoramus about quantum computing, so this article roughly doubled my knowledge of it.
I will say this: That article by Mikhail Dyakonov is *extremely* clear and well-written.
—Dan
Hans Havermann wrote: ----- In the IEEE Spectrum last week, Mikhail Dyakonov presented his overview of the field:
https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-comput... -----
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
--
Mike Stay - metaweta@gmail.com
http://math.ucr.edu/~mike
https://reperiendi.wordpress.com