From: "Joshua Zucker" <joshua.zucker@gmail.com>
Maybe my experience is too many years out of date, but I find that making proofs computer-understandable makes them much less accessible in general. For instance, in geometry you need to use something closer to Hilbert's approach, and spend (waste?) lots of time proving that this point is between those two, or this ray is between those two, so that you can add or subtract distances or angles. Of course, it's essential to prove those things to make proofs rigorous, but rigor and accessibility are quite different things, and I think they are negatively correlated.
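To make the bookkeeping concrete: here is a tiny Lean sketch (my own toy axiomatization, not any real library) of the point being made. Before a checker lets you add two segment lengths, you must first establish the betweenness hypothesis that a human reader takes for granted:

```lean
-- Toy axiomatization, purely illustrative (not Mathlib).
axiom Point : Type
axiom dist : Point → Point → Float
axiom Between : Point → Point → Point → Prop

-- The additivity fact a human applies silently; a proof checker
-- demands an explicit proof of `Between a b c` before you may use it.
axiom dist_add_of_between :
  ∀ a b c : Point, Between a b c → dist a b + dist b c = dist a c
```

Every silent "clearly B lies between A and C" in an informal proof becomes an obligation like the hypothesis above, which is where the accessibility cost comes from.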
The problem is similar to that of writing good computer programs, though the two fields are like cousins long out of touch. With programming, I think clarity correlates with rigor--but that's in a situation where both are up to the style or preference of the programmer. As with all programming disciplines, a lot depends on whether one has got religion oneself or is being dragged into it. (I could see wikiproof politics going either or both ways.) Math at least has the advantage that proofs tend to be aired by publication.

By rigor I mean that the programmer worked to be sure all the cases were covered. I think thoroughness correlates with clarity because unclarity provides places where bugs can hide. It's a bit different with computer validation, but clarifying (better organization) is probably one of the best ways to debug (i.e., satisfy a verifier about) proofs. There is an old study in which one group of programmers was told to write a program for clarity and another group for speed; the programs written to be clear turned out at least as fast as the "fast" ones.

The Qedeq language (qedeq.org) looks good in concept, with its syntactic sugar and libraries. There could also be a form of stretchtext in which the justifications of some steps are hidden but clickable (or selectively hideable).

My impression is that mathematicians think of a lemma either as a special move, like a fancy gymnastic dismount, or as a sign that a problem is a really big one. Early in the history of computers, subroutines were considered costly and were thought of in similar ways, whereas lately one is encouraged to break things down. In some languages (e.g., Forth), primitives and subroutines are on equal footing, and it's considered a sin to write any definition longer than a single sentence without breaking it into smaller subroutines (imagine a single dictionary definition going on for a page). Prolog programs that I've seen tend to have very short clauses.
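The Forth-style factoring discipline above can be sketched in any language; here is a toy Python version (the names and the example are mine, purely illustrative):

```python
# "One long definition" style: correct, but each intermediate step
# has no name of its own to test or reason about.
def hypotenuse_long(a, b):
    return (a * a + b * b) ** 0.5

# Forth-style factoring: each function is one short, separately
# checkable idea, and the top-level definition reads like a sentence.
def square(x):
    return x * x

def sum_of_squares(a, b):
    return square(a) + square(b)

def hypotenuse(a, b):
    return sum_of_squares(a, b) ** 0.5

print(hypotenuse(3, 4))  # → 5.0
```

The factored version is the programming analogue of breaking a long proof into named lemmas: each piece can be verified (or debugged) on its own.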
Long proofs read like the unstructured programs of yore.
Working with automated proof checking sounds painful.
"Double-entry bookkeeping? Audit trails? I don't likes the sound o' that!" Just the fact that it seems awkward is an interesting symptom that makes it worth persuing. It's as if we lived in an era where long calculations were carried out by rare experts' hands. Math is very steampunk, including all the funky typography and ancient alphabets. Calculemus, I say. http://www.tiac.net/~sw/2003/08/calculemus.html --Steve