[math-fun] faster and faster
I wasn't able to attend Gathering for Gardner 9, but my student Linda Zayas-Palmer did, and she presented there a joint paper called "Carrying On with Infinite Decimals" (http://jamespropp.org/carrying.pdf); I'd be interested in people's comments (since at some point I plan to publish this in one of the Gathering for Gardner volumes).

One comment I've already received can be paraphrased as follows: "You describe these computations with infinite decimals as taking place in ordinary time, so that even though each individual digit eventually stabilizes, you have to wait till time omega to be able to read off the answer. Why not imagine a computer in which each carry in the 10^{-n} position takes 10^{-n} seconds, so that the computation finishes in finite time?"

I was wondering if anyone had seen any proposals for (unrealistic) models of computation along these lines.

One needs to take some care to make sure that the state of the system remains well-defined even at limit-times. For instance, if one switches a light switch on and off at times 1/2, 3/4, 7/8, ..., the state of the light at time 1 is undefined.

Still, it seems that even if one imposes constraints to avoid the aforementioned "light switch problem", one can get idealized computers of this kind to do all kinds of things ordinary computers can't do, such as solve the halting problem.

There must be existing literature on computers that can be made to run faster and faster.

Jim Propp
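(A quick check that the schedule in the quoted comment really does terminate: even if a carry occurs in every position, the total time spent is bounded by the convergent geometric series

    \sum_{n=1}^{\infty} 10^{-n} = (1/10)/(1 - 1/10) = 1/9,

so every digit of the answer is readable by time 1/9 of a second.)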
> I was wondering if anyone had seen any proposals for (unrealistic) models of computation along these lines.
>
> One needs to take some care to make sure that the state of the system remains well-defined even at limit-times. For instance, if one switches a light switch on and off at times 1/2, 3/4, 7/8, ..., the state of the light at time 1 is undefined.
>
> Still, it seems that even if one imposes constraints to avoid the aforementioned "light switch problem", one can get idealized computers of this kind to do all kinds of things ordinary computers can't do, such as solve the halting problem.
If the computer is operating in discrete time, one way to model this is to have time take values in some higher ordinal instead of in the positive integers. Non-well-ordered time seems to inevitably produce paradoxes like the light-switch paradox. After taking steps at times 1/2, 3/4, 7/8, 15/16, ..., you can say that anything that reaches a state and stays in that state has a well-defined state at time 1. But if the computer then takes steps at times ..., 17/16, 9/8, 5/4, 3/2, 2, it seems hard to figure out how to define the result of the computation at time 2, even given a well-defined transition function and a defined state at time 1: there is no first step after time 1 at which to begin applying the transition function.

Andy
> There must be existing literature on computers that can be made to run faster and faster.
>
> Jim Propp
-- Andy.Latto@pobox.com
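Andy's rule for limit times can be made concrete with a small Python sketch (the function name and the finite probe_budget are artifacts of the simulation, not part of the idealized model): the state at a limit time such as time 1 (ordinal time omega) is defined exactly when the trajectory is eventually constant, and the light switch fails that test.

    def limit_state(step, state, probe_budget=10**6):
        # Iterate a deterministic transition function, looking for a state
        # that maps to itself; once the trajectory is constant, that value
        # is the well-defined state at the limit time (time 1 / time omega).
        for _ in range(probe_budget):
            nxt = step(state)
            if nxt == state:
                return state
            state = nxt
        # Never settled (at least not within the finite budget): this is
        # the light-switch case, where the limit state is undefined.
        raise ValueError("state does not stabilize; limit state undefined")

    print(limit_state(lambda n: max(n - 1, 0), 5))   # a countdown settles: 0
    # limit_state(lambda b: not b, True)             # the light switch: raises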
At 08:31 PM 4/7/2010, James Propp wrote:
> There must be existing literature on computers that can be made to run faster and faster.
There is -- it's called computing with "oracles". Assume that you have an "oracle" (like the Oracle of Delphi, get it?) which can answer a complex question in one step; what then is the computational complexity of a given task? I don't recall ever having seen an oracle that does infinite carry-lookahead or other infinite-precision arithmetic in one step. I would first try to convince myself that it isn't such a powerful operation that computing with it becomes trivial.

Vaughan Pratt has an interesting paper from approx. 1970 about computing with infinite bitvectors (something about "bitvector machines", perhaps?) in one step. He shows that it enables certain problems to be speeded up quite a bit. I'm sure that this paper is on his web site somewhere.

Pratt also discussed solving combinatorial problems of deterministic complexity O(2^n) in polynomial time by utilizing Moore's Law. You simply wait long enough to allow Moore's Law to work its magic before building a machine which is large enough & fast enough to solve the problem quickly. The constant is pretty large, but we've been effectively using this technique on combinatorial problems for the past 40 years. I don't know if Pratt ever wrote down this Moore's Law Speedup algorithm in a paper, though.

As Gosper well knows, there are "siphon" algorithms for infinite-precision real arithmetic that compute only enough bits to answer the particular question posed. Using these techniques, one can write down the answer to a question, but actually obtaining the bits of the answer may be somewhat more difficult.
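To illustrate the "siphon" idea, here is a minimal Python sketch (the name add_streams and the whole design are mine, not Pratt's or Gosper's): it adds two infinite decimal expansions digit by digit, emitting each output digit only once no future carry can change it. A column whose digits sum to 9 leaves the carry undecided, so runs of such columns are buffered until a decisive column arrives.

    from itertools import chain, islice, repeat

    def add_streams(xs, ys):
        # xs and ys yield the decimal digits of two reals in [0, 1).
        # Yields the digits of their sum, units digit first, emitting a
        # digit only when incoming carries can no longer change it.
        pending, nines = 0, 0              # unresolved digit + trailing 9-run
        for a, b in zip(xs, ys):
            s = a + b
            if s == 9:                     # carry still undecided
                nines += 1
            elif s >= 10:                  # carry 1 arrives: the 9s roll to 0s
                yield pending + 1
                for _ in range(nines):
                    yield 0
                pending, nines = s - 10, 0
            else:                          # no carry: pending and the 9s are final
                yield pending
                for _ in range(nines):
                    yield 9
                pending, nines = s, 0

    quarter = chain([2, 5], repeat(0))     # 0.2500... = 1/4
    third = repeat(3)                      # 0.3333... = 1/3
    print(list(islice(add_streams(quarter, third), 6)))   # [0, 5, 8, 3, 3, 3]

    # By contrast, add_streams(repeat(3), repeat(6)) never yields anything:
    # every column of 1/3 + 2/3 sums to 9, so no digit is ever safe to print --
    # exactly the carrying difficulty that started this thread.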
On Wed, Apr 7, 2010 at 8:31 PM, James Propp <jpropp@cs.uml.edu> wrote:
> One comment I've already received can be paraphrased as follows: "You describe these computations with infinite decimals as taking place in ordinary time, so that even though each individual digit eventually stabilizes, you have to wait till time omega to be able to read off the answer. Why not imagine a computer in which each carry in the 10^{-n} position takes 10^{-n} seconds, so that the computation finishes in finite time?"
>
> I was wondering if anyone had seen any proposals for (unrealistic) models of computation along these lines.
Yes; there's a whole literature on "unconventional computation". See http://en.wikipedia.org/wiki/Unconventional_computing

The kind you describe is often set in terms of an observer falling into a black hole; an infinite computation run by a computer left outside the black hole will "finish" before the observer crosses the Schwarzschild radius.

--
Mike Stay - metaweta@gmail.com
http://www.cs.auckland.ac.nz/~mike
http://reperiendi.wordpress.com
| I was wondering if anyone had seen any proposals for
| (unrealistic) models of computation along these lines.

... or alternatively doing addition and subtraction -- as Gauss proposed in a letter to Schumacher -- from left to right! See

http://gdz.sub.uni-goettingen.de/dms/load/toc/?PPN=PPN236060120

GAUSS to SCHUMACHER, 3 Oct. 1844 - on adding and subtracting from left to right ("Addieren und Subtrahieren von links nach rechts"), p. 38.

Christian Siebeneicher.
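For readers who haven't seen the left-to-right method, here is a small worked example (mine, not from the letter): to add 457 + 276, write provisional digits from the most significant position and correct them when a carry surfaces:

    4 + 2 = 6    write 6                         running answer: 6
    5 + 7 = 12   carry: bump the 6 to 7, write 2                 72
    7 + 6 = 13   carry: bump the 2 to 3, write 3                 733

So 457 + 276 = 733. A long run of 9s can postpone the corrections indefinitely, which is the same carry-propagation issue the infinite-decimal discussion turns on.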
participants (5):
- Andy Latto
- Henry Baker
- James Propp
- Mike Stay
- sieben@math.uni-bielefeld.de