Differences between a computational maths proof and (e.g.) a Lucas-Lehmer test to find a Mersenne prime: there are some 'givens' in each case:

- (an alphabet of symbols for input and output)
- axioms
- rules of inference

In both cases it is assumed, at least in an informal, social sense, that safeguards have been professionally applied to ensure that at least the axioms are reasonable and the rules of inference have been properly used. However, the more this can be made manifest, the better.

Whatever happened re HoTT? Have there not been cases of two teams respectively proving 'P' and 'not P' and struggling to reconcile their differences? I vaguely remember a Japanese team being involved in one case.

There have been computational errors in the calculation of Lucas-Lehmer residues, and there have been design flaws in Intel floating-point processors. The Mersenne work I was associated with in the '70s-'80s proved its own program (the first FFT/Mersenne program) by reproducing the available LLT residues (or not, when it turned out that the original ones were wrong). At Oxford in the '60s, the computational protocol was to produce the same result twice from a number of runs (preferably just two) on the English Electric KDF9.

No doubt GIMPS could comment on the frequency of erroneous results submitted to 'central', but I think the programs are much more robust these days: more than one algorithm and more than one computer architecture are available.

Guy
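
For concreteness, the Lucas-Lehmer test referred to above is the recurrence s_0 = 4, s_{k+1} = s_k^2 - 2 (mod M_p), where M_p = 2^p - 1 is prime exactly when s_{p-2} is 0 (mod M_p). The Python sketch below computes that final value and also returns its low 64 bits as a residue that independent runs could compare; the 64-bit residue convention is an assumption borrowed from common GIMPS practice, not a description of the original '70s-'80s program.

```python
def lucas_lehmer(p: int) -> tuple[bool, int]:
    """Lucas-Lehmer test for M_p = 2^p - 1 (p must be an odd prime).

    Returns (is_prime, residue), where residue is s_{p-2} mod 2^64 --
    the kind of value two independent runs can cross-check.
    """
    m = (1 << p) - 1            # the Mersenne number M_p
    s = 4                       # s_0 = 4
    for _ in range(p - 2):      # apply s <- s^2 - 2 (mod M_p), p-2 times
        s = (s * s - 2) % m
    return s == 0, s & 0xFFFFFFFFFFFFFFFF

if __name__ == "__main__":
    # M_13 = 8191 is prime; M_11 = 2047 = 23 * 89 is not.
    print(lucas_lehmer(13))     # (True, 0)
    print(lucas_lehmer(11))     # (False, <nonzero residue>)
```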