Interesting problem. Detexify works OK on single TeX glyphs, but not on multipart notations: putting a little “n” in the crook of the root sign confused it into suggesting \approx, \trademark, and \varpi.

Of course, if you can get the TeX sources (as are sometimes available on arXiv.org), that can help. But it doesn’t necessarily win when generic glyphs acquire their meaning from the specific context they appear in. (Heh, for extra credit, distinguish the Legendre, Jacobi, and Kronecker symbols.)

Some kind of semantic graph description along the lines Hilarie suggests would be interesting, perhaps combined with a semantic web search for similar articles to index context. It would also be interesting to see what might be done with modern OCR of (especially older) manuscripts. Perhaps leverage some of that brute machine learning on raw images that’s all the rage nowadays?

I tried Google image search on bitmaps grabbed out of the Wikipedia articles. No cigar: for the n-th root it returned “Best guess for this image: number”; for the Legendre symbol it returned “Best guess for this image: circle”. Well, maybe a bubblegum cigar.
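To make the “brute machine learning on raw images” idea concrete, here is a minimal sketch: a tiny CNN that classifies whole-notation bitmaps (root-with-index, Legendre-style stacked symbol, etc.) rather than single glyphs. Everything here is assumed rather than real — the label names, the 64x64 crop size, and the random tensors standing in for a labelled training set — it just shows the shape of the approach, in Python/PyTorch:

# Minimal sketch: classify whole-notation bitmaps, not single glyphs.
# Assumption: a labelled dataset of 64x64 grayscale crops exists; here
# random tensors stand in so the snippet runs end to end.
import torch
import torch.nn as nn

NOTATIONS = ["nth_root", "legendre_symbol", "binomial", "continued_fraction"]  # hypothetical labels

class NotationNet(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = NotationNet(len(NOTATIONS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 random "bitmaps" with random labels.
images = torch.rand(8, 1, 64, 64)
labels = torch.randint(len(NOTATIONS), (8,))

for _ in range(3):  # a few toy steps, just to show the training loop
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
print(f"toy loss: {loss.item():.3f}")

The real work, of course, would be assembling labelled crops of multipart notations in the first place.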
On Jul 27, 2016, at 6:11 PM, James Propp <jamespropp@gmail.com> wrote:
I'd suggest using detexify as a first step. Once you know what to call a symbol, you can find out what it means.
Jim Propp
On Wednesday, July 27, 2016, Hilarie Orman <ho@alum.mit.edu> wrote:
There ought to be a way to develop an ordered taxonomy based on the typographic elements of the notational glyph combination. For Legendre, it could be <2 curves, expr over expr>, for example. A "pi" would be something like <3 lines, 2 parallel>.
We need the "5K Math Symbol Dictionary".
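That ordered taxonomy could be prototyped as nothing more than a lookup keyed on feature tuples. A sketch in Python — the feature names and the two entries are illustrative, not a proposed vocabulary:

# Minimal sketch of the ordered taxonomy idea: each notation is described
# by a tuple of typographic features, and the tuple is the dictionary key.
from typing import NamedTuple

class GlyphDescriptor(NamedTuple):
    curves: int        # count of curved strokes
    lines: int         # count of straight strokes
    layout: str        # coarse arrangement, e.g. "expr over expr"

# A toy slice of the "5K Math Symbol Dictionary": descriptor -> candidate names.
SYMBOLS = {
    GlyphDescriptor(curves=2, lines=0, layout="expr over expr"): ["Legendre symbol", "Jacobi symbol"],
    GlyphDescriptor(curves=0, lines=3, layout="2 parallel"): ["pi"],
}

def lookup(d: GlyphDescriptor):
    return SYMBOLS.get(d, ["unknown"])

# Tuples order lexicographically, so the whole table lists in a stable
# "ordered taxonomy" order.
for desc in sorted(SYMBOLS):
    print(desc, "->", SYMBOLS[desc])

Ambiguity (Legendre vs. Jacobi vs. Kronecker) just falls out as multiple candidate names under one descriptor.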
Hilarie
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun