[math-fun] Using AI to help Superseeker guess formulas for sequences
Dear Math Fun: I keep coming across sequences that badly need a formula, possibly expressing them in terms of some known sequence. A337641 and A337655 are two recent examples.

Why not use AI to strengthen Superseeker? I keep saying.

There is a talk on Monday that might be relevant. This announcement just arrived in email:

Springer Nature <springernature@newsletter.springernature.com>
George Em Karniadakis (Brown University and MIT)
From PINNs to DeepOnets: Approximating functions, functionals, and operators using deep neural networks
Throughout October, authors from the new journal SN Partial Differential Equations and Applications will be hosting a series of free webinars on topics related to partial differential equations. Up next is a presentation by George Em Karniadakis of Brown University and MIT on approximating functions using deep neural networks. Join us for a lively discussion on October 5th at 9am EST, 3pm CET.

I signed up, but AI and PDEs are not my field. If anyone is interested, please watch too.
Registration/zoom link https://zoom.us/meeting/register/tJ0uf-iopzkvGdJJpq42h5f7wOqUcKtCc7y7 (also webinar schedule from this page: https://www.springer.com/journal/42985/updates/18235540 )
=Neil Sloane: Why not use AI to strengthen Superseeker? I keep saying.
Indeed. Want to mention this again, for anyone who might pursue this further: "Deep Learning for Symbolic Mathematics" https://arxiv.org/pdf/1912.01412.pdf Not necessarily the answer, but perhaps it may inspire something interesting?
Marc, thanks for pointing out this article (for the second time, I know, but I overlooked it the first time): Lample and Charton (Facebook AI Research), "Deep Learning for Symbolic Mathematics", https://arxiv.org/pdf/1912.01412.pdf

It looks extremely relevant. When I've digested it I might talk to them about applying their methods to number sequences.

By the way, Eric Desbiaux pointed out this article: Kevin Hartnett, "Building the Mathematical Library of the Future", Quanta Magazine, Oct 01 2020, https://www.quantamagazine.org/building-the-mathematical-library-of-the-futu... which also seems relevant.

Best regards
Neil

Neil J. A. Sloane, President, OEIS Foundation.
11 South Adelaide Avenue, Highland Park, NJ 08904, USA.
Also Visiting Scientist, Math. Dept., Rutgers University, Piscataway, NJ.
Phone: 732 828 6098; home page: http://NeilSloane.com
Email: njasloane@gmail.com
=Neil Sloane <njasloane@gmail.com> "Deep Learning for Symbolic Mathematics" looks extremely relevant. ... "Building the Mathematical Library of the Future" which also seems relevant
Agreed, there are a lot of neat new initiatives being explored these days! We seem to be entering an exciting era that offers an increased variety of paradigms to apply. For example, I'm struck by how very different the above two approaches are from each other:

A. Traditionally, proofs are finely hand-crafted artifacts. The "Library" type efforts provide new power tools in the context of a familiar workbench. They leverage 21st century "mass computing", but within a workstyle that would not seem alien to a 19th century mathematician.

B. In contrast, the "Deep Learning" type stuff has a whiff of post-modern voodoo. Their hack views integration as just a kind of dialect translation, sort of like converting Spanish to Portuguese. It works because the accuracy can be scored by differentiating the output translation, and the error fed back to drive learning. It seems like a magic trick because, in contrast to classical proof derivation, the intermediate transformational steps become untethered from the original mathematical semantics -- hence inscrutable to human understanding -- but nonetheless it somehow still arrives at the right answer.

How might we cast "guessing a formula for a sequence" as a language translation pattern, or some other task amenable to machine learning a la mode?

PS: for a good time, check out the unreasonable effectiveness of "word2vec", where words get mapped into real 50-vectors that then turn out to closely satisfy translational (!) equalities such as

QUEEN = KING - MAN + WOMAN
WARSAW = PARIS - FRANCE + POLAND

even though the 50 component dimensions are individually unintelligible...
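[On the word2vec PS: with pretrained vectors the analogy arithmetic is a one-liner. A minimal sketch, assuming gensim and its downloadable 50-dimensional GloVe vectors ("glove-wiki-gigaword-50"); the usual statement of the first analogy is king - man + woman ≈ queen.]

```python
# Sketch of the word-vector "translational equalities" above, using gensim's
# pretrained 50-dimensional GloVe vectors (downloaded on first use).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# queen ≈ king - man + woman
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# warsaw ≈ paris - france + poland
print(vectors.most_similar(positive=["paris", "poland"], negative=["france"], topn=1))
```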
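[As for casting "guess a formula for a sequence" as translation, here is a minimal sketch of the data side only, loosely in the Lample-Charton style: digits of the known terms as the source sentence, a prefix-notation expression as the target, and a candidate scored by evaluating it back on the terms. The tokenizers, the `score` helper, and the Euler-polynomial example are illustrative assumptions, not anything Superseeker or the released Lample-Charton code actually does.]

```python
# Sketch: cast "guess a formula for a sequence" as sequence-to-sequence
# translation, with candidates scored by re-evaluating them on the terms.
# Tokenization scheme and example formula are illustrative assumptions only.
import sympy as sp

n = sp.Symbol('n')

def encode_terms(terms):
    """Source sentence: digits of each term, with separators."""
    return " ".join(" ".join(str(t)) + " <sep>" for t in terms)

def encode_formula(expr):
    """Target sentence: the expression in prefix (Polish) notation."""
    if expr.is_Atom:
        return [str(expr)]
    tokens = [type(expr).__name__]        # e.g. 'Add', 'Mul', 'Pow'
    for arg in expr.args:
        tokens.extend(encode_formula(arg))
    return tokens

def score(candidate, terms, start=0):
    """Fraction of the known terms the candidate formula reproduces exactly."""
    f = sp.lambdify(n, candidate, "math")
    hits = sum(f(start + i) == t for i, t in enumerate(terms))
    return hits / len(terms)

if __name__ == "__main__":
    terms = [41, 43, 47, 53, 61, 71, 83, 97]      # Euler's prime-generating quadratic
    candidate = sp.sympify("n**2 + n + 41")       # what a trained model might emit
    print(encode_terms(terms[:3]))
    print(encode_formula(candidate))
    print("score:", score(candidate, terms))      # 1.0 -- all terms reproduced
```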
On 10/5/20 12:01, Marc LeBrun wrote:
It seems like a magic trick because, in contrast to classical proof derivation, the intermediate transformational steps become untethered from the original mathematical semantics -- hence inscrutable to human understanding -- but nonetheless it somehow still arrives at the right answer.
... most of the time.
On 05/10/2020 20:01, Marc LeBrun wrote:
It seems like a magic trick because, in contrast to classical proof derivation, the intermediate transformational steps become untethered from the original mathematical semantics -- hence inscrutable to human understanding -- but nonetheless it somehow still arrives at the right answer.
For all we know, the same is true of the internals of what a human mathematician's brain does when doing mathematics.

I expect the first genuinely successful pure mathematics AI systems to consist of (1) some weird inscrutable black-box neural-network sort of thing _proposing_ solutions, proof steps or tactics, etc., and (2) something completely formal that only ever does valid operations but has "no pretensions whatever to originate anything". #1 tells #2 what to try, and #2 does it without making mistakes.

Again, human mathematicians do that: look at a problem, try some examples, think "hmm, the answer might be X and there might be a proof by induction" -- and then try to fill in the details in more or less rigorous form. Until that last step is done there's always some danger that your intuition is leading you astray.

-- g
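[The two-part split described above is easy to prototype: anything at all can play the role of #1, so long as #2 only accepts candidates it can check exactly. A minimal sketch in Python/sympy; `propose_candidates` is a stand-in for whatever black-box model one might train (here it just interpolates low-degree polynomials), and all the function names are invented for illustration.]

```python
# Sketch of the proposer/checker architecture: an untrusted proposer (#1)
# suggests candidate closed forms; a mechanical checker (#2) accepts only
# candidates that reproduce every known term exactly.
import sympy as sp

n = sp.Symbol('n')

def verify(candidate, terms, start=0):
    """#2: exact symbolic check -- no floating point, no guessing."""
    return all(sp.simplify(candidate.subs(n, start + i) - t) == 0
               for i, t in enumerate(terms))

def propose_candidates(terms):
    """#1: placeholder for a learned model; proposes polynomials of
    increasing degree fitted to the initial terms."""
    for degree in range(1, min(len(terms) - 1, 6)):
        points = list(enumerate(terms[:degree + 1]))
        yield sp.interpolate(points, n)

def guess_formula(terms):
    for candidate in propose_candidates(terms):
        if verify(candidate, terms):      # only the checker can accept
            return sp.expand(candidate)
    return None                           # no proposal survived verification

if __name__ == "__main__":
    # Triangular numbers 0, 1, 3, 6, 10, ... should come back as n*(n+1)/2.
    print(guess_formula([0, 1, 3, 6, 10, 15, 21, 28]))
```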
participants (4)
- Andres Valloud
- Gareth McCaughan
- Marc LeBrun
- Neil Sloane