True, but TM complexity scales extremely rapidly with time and/or space (amount of tape), so even "approximations" to large amounts of time and space would require googols of universes to compute. So undecidability is pretty smooth/continuous in those dimensions. This complexity scaling is far above and beyond the P=NP issue, which gets you only one exponential -- at most. 100GBytes of RAM on big server machines is quite common. That's quite enough "tape"/"space" for serious exponential stacking. At 06:10 PM 3/1/2018, Mike Speciner wrote:
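The "exponential stacking" point can be made concrete by counting the distinct memory configurations of that much RAM (a minimal Python sketch; decimal gigabytes assumed, and the number itself is too large to print, so we count its digits):

```python
from math import log10

# 100 GB of RAM, in bits (decimal GB assumed: 100e9 bytes * 8 bits/byte)
bits = 100 * 10**9 * 8

# The machine has 2**bits distinct memory configurations. That value is
# astronomically large, so report how many decimal digits it has instead.
configs_digits = int(bits * log10(2)) + 1
print(configs_digits)  # ~2.4e11 digits -- the *digit count* alone is huge
```

Even the number of digits of the configuration count (~240 billion) dwarfs anything physically enumerable, which is the sense in which undecidability is "smooth" as a practical matter.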
We don't build Turing machines. We build finite state machines, admittedly with very large numbers of states. So, technically, although not practically, undecidability doesn't apply.
--ms
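Mike's observation cuts both ways: for a genuinely finite-state machine, halting *is* decidable in principle, simply by running it and watching for a repeated state (a minimal Python sketch with a toy illustrative 6-state machine, not anything from the thread):

```python
def fsm_halts(step, start, halt_states):
    """Decide halting for a deterministic finite-state machine.

    Because the state space is finite, the run either reaches a halt
    state or revisits a state -- in which case it loops forever.
    """
    seen = set()
    s = start
    while s not in halt_states:
        if s in seen:
            return False  # revisited a state: it loops forever
        seen.add(s)
        s = step(s)
    return True

# Toy machine: states 0..5, halts at 5; starting at 3 enters a 3<->4 loop.
def step(s):
    return {0: 1, 1: 2, 2: 5, 3: 4, 4: 3, 5: 5}[s]

print(fsm_halts(step, 0, {5}))  # True: 0 -> 1 -> 2 -> 5
print(fsm_halts(step, 3, {5}))  # False: 3 -> 4 -> 3 -> ...
```

The catch, of course, is the previous message's point: `seen` can need as many entries as there are machine states, which for real hardware is the 2^(8e11)-ish number above -- decidable "technically, although not practically."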
On 01-Mar-18 20:17, Henry Baker wrote:
By "consequential", I think that you may mean "consequential" to a large fraction of the population -- e.g., electrical engineers and physicists are required to work with complex numbers. Consequential enough to include in high school curricula.
I'm still waiting for undecidable/non-Turing-computable to make its way into the bulk of society; a lot of the incredibly stupid comments by politicians might be avoided if they had had any education in the *limits* of logical systems and mathematics.
For example, no amount of "nerding harder" will enable computer scientists to design a computer such that an arbitrary input won't cause the computer to do something "bad".
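The classic reason is Turing's diagonal argument, which can be sketched in a few lines (the `halts` stub here is hypothetical -- the whole point is that no real implementation of it can exist):

```python
# A sketch of Turing's diagonal argument. halts() is a hypothetical stub
# standing in for a total decision procedure; the argument shows no real
# implementation of it can exist.
def halts(prog, arg):
    """Hypothetical: return True iff prog(arg) would eventually halt."""
    raise NotImplementedError("no total halting decider can exist")

def trouble(prog):
    # Do the opposite of whatever halts() predicts about us:
    if halts(prog, prog):
        while True:  # halts() said we halt, so loop forever
            pass
    # halts() said we loop forever, so halt immediately
    return

# trouble(trouble) forces a contradiction either way: if halts(trouble,
# trouble) returns True, trouble loops; if it returns False, trouble halts.
```

Deciding "does something bad on some input" is no easier than deciding halting (Rice's theorem generalizes this to essentially any behavioral property), which is why the "bad input" problem can't be engineered away in full generality.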
It took physicists many centuries to convince a large fraction of the population about the impossibility of perpetual motion machines, so perhaps we may be facing the same thing here.
Curiously, we have *more* trust in undecidability than we have in the impossibility of perpetual motion, because the "laws" of physics are still empirical facts, which might be destroyed with the very next experiment, while undecidability is a function of logic itself, and requires no reference to the "real world".
Regarding breakthroughs that are not yet known even to the best of mathematicians, I would say that things related to P=NP and cryptography are pretty high on the list. For example, we are close to trusting cryptographic hash functions with all the money in the world (literally!), yet we have no *proof* that these functions are hard to invert or hard to find collisions for. With this much cryptocurrency in circulation, there's obviously no need for a "Millennium Prize", since a constructive counter-example would make its discoverer the richest person in the world!
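The "hard to invert" claim can be felt directly: brute-forcing even a 12-bit partial preimage of SHA-256 takes a few thousand tries on average, and the expected cost doubles with every additional bit, out to 2^256 for the full digest (a hedged Python sketch; the helper name is made up, and nothing here bears on whether a shortcut exists -- that's exactly the unproven part):

```python
import hashlib
from itertools import count

def find_partial_preimage(prefix_bits):
    """Brute-force search for an input whose SHA-256 digest starts with
    prefix_bits zero bits. Expected cost ~2**prefix_bits hash evaluations,
    which is why inverting the full 256-bit output is *believed* (not
    proven) infeasible."""
    for n in count():
        digest = hashlib.sha256(str(n).encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - prefix_bits) == 0:
            return n, digest.hex()

n, h = find_partial_preimage(12)  # ~4096 tries expected; runs in a blink
print(n, h)
```

Raising `prefix_bits` by one roughly doubles the running time; extrapolating to 256 bits gives the "all the money in the world" scale of the assumed hardness.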
With the recent advances in AI, we might also ponder the "end of mathematics", but not for the usually suggested reason. AI could end mathematics -- not by doing mathematics better, but by doing enough other things better that no one needs mathematicians anymore. This is where we seem to be headed with chess and Go; once machines are so much better, everyone loses interest in the games so no one plays anymore.
At 03:47 PM 3/1/2018, Bernie Cosell wrote:
i know this is vague, but my wife was reading about the history of i and wondered what the next great fundamental conceptual breakthrough might be. it took something like 1500 years for complex numbers to get understood and now they're used everywhere. it's now been about 250 years and she was wondering what the next breakthrough might be that could be as consequential as complex numbers are.