[math-fun] "Essentially serial" computations for PoW ?
If there are any computational complexity theorists out there, I was wondering if there were any examples of "essentially serial" computations, in which no amount of parallelism (and no amount of memory) can speed it up. I.e., the only speedup comes from a faster CPU clock.

Obviously, one can use a Turing Machine to "diagonalize" over all Turing Machines with smaller time & space, but this doesn't say much about access to huge amounts of parallelism.

The goal is a Bitcoin-type "proof of work" in which buying more computers won't help at all; hopefully the speedup of special hardware over a random laptop or cellphone would be perhaps 3 orders of magnitude, but not more.

Any links/pointers would be welcomed.
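[Editorial aside, not from the thread: the folklore candidate for this kind of sequentiality is an iterated hash chain, where each step needs the previous digest, so extra processors don't help. A minimal Python sketch:]

```python
import hashlib

def sequential_work(seed: bytes, steps: int) -> bytes:
    """Iterate SHA-256 `steps` times. Each step consumes the previous
    digest, so the chain must be computed one step at a time; buying
    more processors does not shorten it."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

proof = sequential_work(b"challenge", 100_000)
```

The catch for proof of work is that naive verification re-runs the whole chain, so it is just as slow as generation; a practical scheme would also need a fast-to-check shortcut.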
This question was discussed at http://stackoverflow.com/questions/806569/whats-the-opposite-of-embarrassing... The most interesting example was: adding more women will not reduce the length of a pregnancy. On Mon, Aug 4, 2014 at 10:09 PM, Henry Baker <hbaker1@pipeline.com> wrote:
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
Inside the class P is the class NC of "highly parallelizable" problems: those which can be solved with a parallel computer with poly(n) processors, in just poly(log n) time. We assume that the processors can send messages to each other in unit time. Modulo some technical issues of uniformity, this is the same as the class of problems that can be solved by Boolean circuits of width poly(n) and depth poly(log n). Examples of such problems include addition and multiplication of n-digit numbers, matrix multiplication, etc.

Amazingly, we don't know that NC is a proper subclass of P! That is, we don't know for sure that there are "essentially serial" problems (complexity people say "inherently sequential") where the total amount of work is polynomial, but even a polynomial number of processors doesn't let us get through all the work radically faster. The NC vs. P problem is probably easier than P vs. NP, but it remains unsolved.

In analogy with NP, we say a problem is P-complete if any other problem in P can be "reduced" or translated into it. (We use some notion of reduction less powerful than P, such as logspace computation.) Just as we believe NP-complete problems are outside P, we believe P-complete problems are outside NC.

The canonical P-complete problem is Circuit Evaluation: given a circuit of polynomial size and depth, and given the truth values of its inputs, compute its outputs. Obviously we can solve this problem in P by going through the circuit step-by-step. Our belief that this is outside NC is the belief that it is not generally possible to "squash" circuits down to polylog depth without a more-than-polynomial blowup in their width. In physical terms, we can think of this as analogous with chaotic systems: we think there are systems that can be simulated step by step, but where we can't skip over vast sections of their history in spacetime.
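[Editorial aside, not from the thread: to make Circuit Evaluation concrete, here is a minimal Python sketch. The gate encoding is ad hoc, for illustration only: gates are listed in topological order and evaluated one at a time, which is exactly the step-by-step process believed to resist squashing to polylog depth.]

```python
def evaluate_circuit(inputs, gates):
    """inputs: dict mapping wire name -> bool.
    gates: list of (name, op, args) in topological order, where op is
    'AND', 'OR' or 'NOT' and args name earlier wires or inputs."""
    wire = dict(inputs)
    for name, op, args in gates:      # inherently step-by-step: each
        a = [wire[x] for x in args]   # gate may depend on the output
        if op == 'AND':               # of any earlier gate
            wire[name] = all(a)
        elif op == 'OR':
            wire[name] = any(a)
        elif op == 'NOT':
            wire[name] = not a[0]
    return wire

out = evaluate_circuit(
    {'x': True, 'y': False},
    [('g1', 'OR', ['x', 'y']),
     ('g2', 'NOT', ['y']),
     ('g3', 'AND', ['g1', 'g2'])])
# out['g3'] is True
```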
Indeed, several people (including yours truly) have proved that various physical systems like the Ising model, sandpiles, and growth models are P-complete to predict. Here are some preprints: http://arxiv.org/abs/cond-mat/9403006 http://arxiv.org/abs/cond-mat/9701118 http://arxiv.org/abs/cond-mat/9808183 Best, Cris On Aug 4, 2014, at 8:09 PM, Henry Baker <hbaker1@pipeline.com> wrote:
Firstly, note that `essentially serial' proofs of work, by definition, do not conform to the `one vote per CPU' paradigm required for a Bitcoin protocol. However, since you asked the question, integer factorisation seems to have this property. Specifically, it's painfully slow to use anything other than the General Number Field Sieve, and that's both sequential and complicated -- GPUs and ASICs won't help you. Also, it can be checked really quickly (multiplication takes O(n log n log log n) time). Sincerely, Adam P. Goucher
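[Editorial aside, not from the thread: the asymmetry Adam mentions can be sketched as follows. Checking a claimed factorisation needs only one big product plus a primality test per factor (a standard Miller-Rabin test; the helper names here are illustrative), while finding the factors is vastly harder.]

```python
import math
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin; probabilistic, which is fine for a sketch."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def verify_factorisation(n: int, factors: list) -> bool:
    """Verification is cheap: one big multiplication plus primality
    tests, versus the subexponential cost of finding the factors."""
    return (math.prod(factors) == n
            and all(is_probable_prime(p) for p in factors))

# verify_factorisation(15, [3, 5]) -> True
```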
Sent: Tuesday, August 05, 2014 at 3:09 AM From: "Henry Baker" <hbaker1@pipeline.com> To: math-fun@mailman.xmission.com Subject: [math-fun] "Essentially serial" computations for PoW ?
As far as I know, an orbit on a strange attractor can only be computed serially; longer times require greater accuracy, so in practical terms the method would probably need to use considerably greater precision than double. On 5 Aug 2014, at 03:09, Henry Baker wrote:
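[Editorial aside, not from the thread: the precision issue can be sketched with the stdlib decimal module. The logistic map stands in for a strange attractor here, and the parameter choices are illustrative: since chaos amplifies rounding error each step, working precision must grow with the length of the orbit.]

```python
from decimal import Decimal, getcontext

def logistic_orbit(x0: str, r: str, steps: int, digits: int) -> Decimal:
    """Iterate x -> r*x*(1-x). Each iterate depends on the previous
    one, so no parallel shortcut is known; a positive Lyapunov
    exponent means `digits` must grow with `steps`."""
    getcontext().prec = digits  # note: sets global decimal precision
    x, rr = Decimal(x0), Decimal(r)
    for _ in range(steps):
        x = rr * x * (1 - x)
    return x

# 1000 steps of the fully chaotic r = 4 map; losing roughly one bit
# of accuracy per step, we keep ~400 decimal digits (> 1300 bits).
x = logistic_orbit("0.1", "4", 1000, 400)
```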
participants (5)
- Adam P. Goucher
- Cris Moore
- David Makin
- Henry Baker
- W. Edwin Clark