The recent amazing successes of "model-free" neural net algorithms (AlphaGo, high-quality language translation, ultra-realistic voice synthesis, etc.) make me wonder whether these techniques could eventually be, or perhaps already have been, applied to the creation of new and interesting mathematics. If things were to proceed along the lines of the progress to date on other problems, someone might start modestly from a large training set (say, 10,000 first-year calculus exams together with their grades) and train a neural network to recognize the difference between incorrect and correct calculus answers. (I am just making this up.) Or perhaps one could take textbooks and their answer keys, or Pólya's problems and someone's answers; a rough sketch of what such a classifier might look like is below.
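
To make the fantasy slightly more concrete, here is a minimal sketch (in PyTorch) of the kind of grader I have in mind: a binary classifier over (question, answer) pairs. Everything in it, from the toy hashed bag-of-words encoding to the invented two-exam "dataset", is an assumption for illustration, not a claim about what would actually work.

# A toy grader: given a (question, answer) pair, predict whether the
# answer is correct. The hashed bag-of-words encoding, the architecture,
# and the two-exam "dataset" are all invented for illustration.
import torch
import torch.nn as nn

VOCAB_SIZE = 5000  # assumed size of a crude hashed vocabulary

def bag_of_words(text: str) -> torch.Tensor:
    """Hash each token into a fixed-size count vector (very lossy)."""
    vec = torch.zeros(VOCAB_SIZE)
    for token in text.lower().split():
        vec[hash(token) % VOCAB_SIZE] += 1.0
    return vec

class AnswerGrader(nn.Module):
    """Feed-forward net over concatenated question/answer encodings."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * VOCAB_SIZE, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one logit: "this answer is correct"
        )

    def forward(self, question, answer):
        return self.net(torch.cat([question, answer], dim=-1))

model = AnswerGrader()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Hypothetical graded exam fragments: (question, answer, 1.0 if correct).
exams = [
    ("differentiate x^2", "2x", 1.0),
    ("differentiate x^2", "x^3/3", 0.0),  # that's the antiderivative
]
for question, answer, label in exams:
    q, a = bag_of_words(question), bag_of_words(answer)
    logit = model(q.unsqueeze(0), a.unsqueeze(0))
    loss = loss_fn(logit, torch.tensor([[label]]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

A real attempt would presumably need a far better representation of mathematical text than hashed word counts, which is exactly where the hard part lies.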
This still seems far from something likely to succeed in generating a proof of a hard theorem, or even in producing, say, a simple paper I might want to read, but I had a similar reaction to what I was reading about AlphaGo in its early days.
At the very least, a useful virtual mathematical interlocutor seems possibly feasible, given a suitably large training corpus of "actual" mathematical back-and-forth between teachers and students, or between researchers, maybe? One way such an experiment might begin is sketched below.
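
If one wanted to try, the natural starting point today would be to fine-tune an existing pretrained language model on such a dialogue corpus. A minimal sketch using the Hugging Face transformers library, assuming a generic small model ("gpt2" here), an invented turn format, and an invented one-exchange corpus:

# Fine-tune a small pretrained causal language model on a (hypothetical)
# corpus of teacher-student mathematical dialogues. The model choice,
# the turn format, and the one-exchange corpus are all assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

dialogues = [
    "Student: Why does the ratio test tell us nothing when the limit is 1?\n"
    "Teacher: Because series on both sides of the fence have that limit:"
    " compare sum 1/n, which diverges, with sum 1/n^2, which converges.",
]

model.train()
for text in dialogues:
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    # Standard causal-LM objective: predict each token from its prefix.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Whether anything trained this way would be a genuinely useful interlocutor, rather than merely a fluent-sounding one, is of course the open question.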