Re: [math-fun] largest ever proof
Differences between a computational maths proof and (e.g.) a Lucas-Lehmer test to find a Mersenne prime:

There are some 'givens' in each case:

- (an alphabet of symbols for input and output)
- axioms
- rules of inference

In both cases it is assumed - at least in an informal, social sense - that safeguards have been professionally applied to ensure that at least the axioms are reasonable and the rules of inference have been properly used. However, the more this can be made manifest, the better. Whatever happened re HoTT?

Have there not been cases of two teams respectively proving 'P' and 'not P' and struggling to reconcile their differences? I vaguely remember a Japanese team being involved in one case.

There have been computational errors in the computation of Lucas-Lehmer residues, and there have been design flaws in Intel floating-point processors (the Pentium FDIV bug, for one).

The Mersenne work I was associated with in the '70s-'80s proved its own program (the first FFT/Mersenne program) by reproducing the available LLT residues - or failing to, when it turned out that the original ones were wrong.

At Oxford in the '60s, the computational protocol was to obtain the same result twice across a number of runs (preferably just two) on the English Electric KDF9.

No doubt GIMPS could comment on the frequency of erroneous results submitted to 'central', but I think the programs are much more robust these days: more than one algorithm and more than one computer architecture is available.

Guy
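[For reference, the Lucas-Lehmer test mentioned above is short enough to state in full. A minimal Python sketch, using naive big-integer squaring where production Mersenne programs use FFT-based multiplication:

    def lucas_lehmer(p: int) -> bool:
        # Lucas-Lehmer: s_0 = 4, s_{k+1} = s_k^2 - 2 (mod M_p);
        # for odd prime p, M_p = 2^p - 1 is prime iff s_{p-2} == 0.
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m    # one squaring step in the chain
        return s == 0

    # M_7 = 127 is prime; M_11 = 2047 = 23 * 89 is not.
    assert lucas_lehmer(7) and not lucas_lehmer(11)
]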
My quibble was more about calling the 200T Pythagorean 2-coloring proof the "largest proof". The size of a proof is very adjustable.

For Mersenne prime proofs, the "checkable artifact" might list all the intermediate steps in the chain of squarings, or only every hundredth. You might choose to include only the X values, or also the full details of squaring X and of the reduction mod P. The size of the artifact depends on how capable the checker is: if it is limited to small Turing-machine-like steps, the proof must be more detailed -- and bigger.

The P2C proof involves a lot of computing. The 200T proof might be much bigger or smaller, depending on the amount of guidance it provides to a checker.

Rich
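[To make that trade-off concrete, here is a hypothetical Python sketch of a checker for an artifact recording only every hundredth residue. The artifact format (a plain list s_0, s_100, s_200, ..., ending with s_{p-2}) and the names make_artifact/check_artifact are invented for illustration, not any actual GIMPS format. The checker redoes the squarings between checkpoints, so a coarser stride shrinks the artifact at the cost of more checker work:

    STRIDE = 100    # checkpoint spacing; larger = smaller artifact, busier checker

    def make_artifact(p: int) -> list[int]:
        # Run the Lucas-Lehmer chain for M_p = 2^p - 1, recording s_0,
        # every STRIDE-th residue, and the final residue s_{p-2}.
        m, s, out = (1 << p) - 1, 4, [4]
        for k in range(1, p - 1):
            s = (s * s - 2) % m
            if k % STRIDE == 0 or k == p - 2:
                out.append(s)
        return out

    def check_artifact(p: int, residues: list[int]) -> bool:
        # Re-derive each checkpoint from the previous one; accept iff the
        # whole chain verifies and the final residue is 0 (i.e. M_p prime).
        m, s, steps = (1 << p) - 1, residues[0], 0
        if s != 4:                          # the chain must start at s_0 = 4
            return False
        for checkpoint in residues[1:]:
            for _ in range(min(STRIDE, p - 2 - steps)):
                s = (s * s - 2) % m         # redo the in-between squarings
                steps += 1
            if s != checkpoint:             # mismatch: artifact rejected
                return False
        return steps == p - 2 and s == 0

    # M_521 is a known Mersenne prime, so its artifact should verify.
    assert check_artifact(521, make_artifact(521))
]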