On 05/10/2020 20:01, Marc LeBrun wrote:
> It seems like a magic trick because, in contrast to classical proof derivation, the intermediate transformational steps become untethered from the original mathematical semantics -- hence inscrutable to human understanding -- but nonetheless it somehow still arrives at the right answer.
For all we know, the same is true of the internals of a human mathematician's brain when it is doing mathematics. I expect the first genuinely successful pure-mathematics AI systems to consist of (1) some weird, inscrutable, black-box neural-network sort of thing _proposing_ solutions, proof steps, tactics, etc., and (2) something completely formal that only ever performs valid operations but has, in Ada Lovelace's phrase, "no pretensions whatever to originate anything". #1 tells #2 what to try; #2 tries it without making mistakes.

Again, human mathematicians work this way: look at a problem, try some examples, think "hmm, the answer might be X and there might be a proof by induction", and then try to fill in the details in more or less rigorous form. Until that last step is done, there's always some danger that your intuition is leading you astray.

-- g
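P.S. To make that division of labor concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the "theorem" is just that some target integer is reachable from 1 by doubling and adding 1, the random guesser (propose_plan) stands in for the neural net, and kernel_apply stands in for a real proof kernel. The trust boundary is the point, though: nothing #1 proposes is believed until #2 has checked it.

    import random

    def kernel_apply(value, step):
        # Component 2: trusted, boring, formal. It knows exactly two
        # valid moves and refuses everything else, so any derivation
        # it accepts really is a derivation.
        if step == "double":
            return 2 * value
        if step == "inc":
            return value + 1
        return None  # invalid step: the kernel takes nothing on faith

    def propose_plan(target, length):
        # Component 1: untrusted, inscrutable. Here a random guesser
        # stands in for the neural net; it is free to propose nonsense
        # ("frobnicate") because nothing it says is believed unchecked.
        return [random.choice(["double", "inc", "frobnicate"])
                for _ in range(length)]

    def prove_reachable(target, tries=100_000):
        # The loop: #1 proposes a plan, #2 verifies it step by step.
        for _ in range(tries):
            value, derivation = 1, []
            for step in propose_plan(target, 2 * target.bit_length()):
                new_value = kernel_apply(value, step)
                if new_value is None:
                    break                 # intuition led us astray; discard
                value = new_value
                derivation.append(step)
                if value == target:
                    return derivation     # kernel-checked, hence correct
        return None  # search failed; that's no proof, but no refutation either

    print(prove_reachable(37))

The kernel is kept small and boring on purpose: all the trust lives there, and #1 can be as wild as it likes without compromising soundness.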