=Neil Sloane <njasloane@gmail.com> "Deep Learning for Symbolic Mathematics" looks extremely relevant. ... "Building the Mathematical Library of the Future" which also seems relevant
Agreed, there are a lot of neat new initiatives being explored these days! We seem to be entering an exciting era that offers an increased variety of paradigms to apply. For example, I'm struck by how very different the above two approaches are from each other:

A. Traditionally, proofs are finely hand-crafted artifacts. The "Library" type efforts provide new power tools in the context of a familiar workbench. They leverage 21st-century "mass computing", but within a workstyle that would not seem alien to a 19th-century mathematician.

B. In contrast, the "Deep Learning" type stuff has a whiff of post-modern voodoo. Its trick is to view integration as just a kind of dialect translation, sort of like converting Spanish to Portuguese. It works because the accuracy can be scored by differentiating the output translation and feeding the error back to drive learning (a quick sketch of this check appears below). It seems like a magic trick because, in contrast to classical proof derivation, the intermediate transformational steps become untethered from the original mathematical semantics (hence inscrutable to human understanding), and yet it somehow still arrives at the right answer.

How might we cast "guessing a formula for a sequence" as a language-translation pattern, or some other task amenable to machine learning à la mode? (One possible encoding is sketched below.)

PS for a good time, check out the unreasonable effectiveness of "word2vec", where words get mapped into real 50-vectors that then turn out to closely satisfy translational (!) equalities such as

    QUEEN = KING - MAN + WOMAN
    WARSAW = PARIS - FRANCE + POLAND

even though the 50 component dimensions are individually unintelligible... (runnable sketch at the end)
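To make the scoring trick from point B concrete, here is a minimal sketch, assuming sympy is available. The integrand and the "model output" are made-up examples of mine, not anything from the paper; the point is only that a proposed antiderivative can be checked mechanically by differentiating it.

```python
# Sketch of "score by differentiating the output translation" (point B).
# The candidate antiderivative stands in for a hypothetical model's output.
import sympy as sp

x = sp.symbols('x')
integrand = x * sp.cos(x)                # the "sentence" we asked to translate
candidate = x * sp.sin(x) + sp.cos(x)    # pretend this came out of the model

# The translation is correct iff d(candidate)/dx - integrand simplifies to 0.
residual = sp.simplify(sp.diff(candidate, x) - integrand)
print("correct translation" if residual == 0 else f"off by {residual}")
```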
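As for casting "guess a formula for a sequence" as translation, here is one speculative encoding, loosely in the spirit of the prefix-notation trick used in the "Deep Learning for Symbolic Mathematics" paper. All the token conventions and function names below are my own invention, purely for illustration of what a source/target pair might look like.

```python
# Hypothetical seq2seq framing: sequence terms in, formula tokens out.

def encode_sequence(terms):
    """Source sentence: sign token, digits of each term, term separator."""
    tokens = []
    for t in terms:
        tokens.append('+' if t >= 0 else '-')
        tokens.extend(str(abs(t)))
        tokens.append('|')               # term separator
    return tokens

def encode_formula_prefix():
    """Target sentence: the formula n^2 + 1 in prefix notation."""
    return ['add', 'pow', 'n', '2', '1']

# One training pair: the first terms of n^2 + 1 (n = 1..5) -> its formula.
src = encode_sequence([2, 5, 10, 17, 26])
tgt = encode_formula_prefix()
print(src)
print(tgt)
```

The appeal of this framing is the same as for integration: the output is cheap to verify (just evaluate the decoded formula against the given terms), even though the model's internal steps remain inscrutable.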
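Finally, the word2vec party trick from the PS is easy to reproduce. A sketch assuming gensim and its pretrained 50-dimensional GloVe vectors (my choice of model to match the 50-vector claim; the original word2vec embeddings behave similarly):

```python
# The two analogies from the PS, via nearest-neighbor search in vector space.
# Note: the model is a one-time download (~70 MB) on first run.
import gensim.downloader as api

vecs = api.load('glove-wiki-gigaword-50')   # words -> real 50-vectors

# QUEEN ~= KING - MAN + WOMAN
print(vecs.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))

# WARSAW ~= PARIS - FRANCE + POLAND
print(vecs.most_similar(positive=['paris', 'poland'], negative=['france'], topn=1))
```

On these vectors the top answers should come back 'queen' and 'warsaw' (gensim excludes the input words themselves from the results), despite no single coordinate of the 50 meaning anything on its own.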