On Thu, Apr 30, 2020 at 4:27 PM Steve Witham <sw@tiac.net> wrote:
From: Marc LeBrun <mlb@well.com>
Don't recall this getting discussed here: https://arxiv.org/pdf/1912.01412.pdf
Amazing results. My only gripe is that it's not clear how much processing time was used by the neural net, but they gave, e.g., Mathematica 30 seconds.
"Table 3: Comparison of our model with Mathematica, Maple and Matlab on a test set of 500 equations. For Mathematica we report results by setting a timeout of 30 seconds per equation. On a given equation, our model typically finds the solution in less than a second." Andy
Interestingly treats symbolic integration as a kind of linguistic translation.
Well, they treat it in the same way people treat linguistic translation: with a sequence-in, sequence-out interface. Here they encode both the input and output expressions as prefix ("forward Polish") token strings.
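Concretely, the encoding looks something like this (a minimal sketch, not the paper's actual tokenizer; the Node class and token names are my own):

# Minimal sketch: serialize an expression tree into the kind of prefix
# ("forward Polish") token sequence a seq2seq model consumes.

class Node:
    def __init__(self, label, *children):
        self.label = label          # operator name, variable, or constant
        self.children = children    # empty for leaves

def to_prefix(node):
    """Pre-order walk: operator first, then its arguments."""
    tokens = [node.label]
    for child in node.children:
        tokens.extend(to_prefix(child))
    return tokens

# Example: 3*x**2 + cos(x)  ->  ['add', 'mul', '3', 'pow', 'x', '2', 'cos', 'x']
expr = Node('add',
            Node('mul', Node('3'), Node('pow', Node('x'), Node('2'))),
            Node('cos', Node('x')))
print(to_prefix(expr))

Because prefix notation needs no parentheses, the flat token list determines the tree uniquely, which is what makes it usable as a "sentence" on both sides of the translation.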
If you think of a neural net as something like a matrix multiplication, you can fold an input and/or an output into a single state-transition step. It's actually quite a bit like applying the different transforms of an IFS fractal. Around Y2K a friend of mine was pushing in and reading out balanced-parenthesis strings (known to be not entirely trivial) using a hand-designed 2D IFS, and at the time I thought it was pointless.
https://simondlevy.academic.wlu.edu/files/publications/bics2004.pdf
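To make the parenthesis example concrete, here is a toy sketch of the idea (my own one-dimensional cartoon, not Levy's actual 2D construction): pushing '(' applies a contraction, popping ')' applies its inverse, and the string is balanced exactly when you return to the starting point without ever popping past it.

# Toy illustration: contractive maps acting as a stack.
# Powers of 2 are exact in binary floating point, so the equality test
# below is reliable for any reasonable nesting depth.

def is_balanced(s, start=1.0):
    x = start
    for ch in s:
        if ch == '(':
            x = x / 2.0            # contraction: push one level
        elif ch == ')':
            x = x * 2.0            # inverse map: pop one level
            if x > start:          # popped an empty stack
                return False
    return x == start              # every push was matched

print(is_balanced("(()(()))"))     # True
print(is_balanced("(()"))          # False
print(is_balanced("())("))         # False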
Leverages the "trap door" cost asymmetry of integration versus differentiation when training.
Do you mean the way they generate problem-solution pairs by starting with the solution and differentiating to get the problem? (I don't remember which sage said that intelligence amounts to inverting functions.)
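For concreteness, the trick is roughly this (a sketch assuming sympy; the expression sampler below is a crude stand-in for the paper's random tree generator, what they call backward/BWD generation): sample a random f, differentiate it, and emit (f', f) as the (problem, solution) pair.

import random
import sympy as sp

x = sp.Symbol('x')
LEAVES = [x, sp.Integer(2), sp.Integer(3)]
UNARY = [sp.sin, sp.cos, sp.exp, sp.log]
BINARY = [lambda a, b: a + b, lambda a, b: a * b, lambda a, b: a ** b]

def random_expr(depth=3):
    """Build a small random expression tree over x."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    if random.random() < 0.5:
        return random.choice(UNARY)(random_expr(depth - 1))
    op = random.choice(BINARY)
    return op(random_expr(depth - 1), random_expr(depth - 1))

def make_pair():
    f = random_expr()
    return sp.diff(f, x), f        # (integration problem, its antiderivative)

problem, solution = make_pair()
print("integrate:", problem)
print("answer   :", solution)

Differentiating is cheap and mechanical, so you can mass-produce supervised pairs for the hard direction without ever having to integrate anything yourself.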
--Steve
-- Andy.Latto@pobox.com