The tried-and-true way to get your own instruction is to get that functionality incorporated into a standard benchmark. Alternatively, convince Microsoft, Apple, or Oracle to put the functionality into one of their inner loops. That's how the graphics people got their stuff in. That's how the linear algebra people got their stuff in. That's how the crypto people got their stuff in. And that's why the Lisp stuff never made it in (aside from the fact that the Lisp people could never decide exactly which instruction they wanted).

Thanks to GCM crypto, Intel now has carry-less multiplication (PCLMULQDQ), so GF(2^n) operations are now fast.

https://en.wikipedia.org/wiki/CLMUL_instruction_set
http://www.samiam.org/galois.html

At 05:23 PM 4/1/2016, Adam P. Goucher wrote:
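[For readers who haven't met carry-less multiplication: it is ordinary shift-and-add long multiplication with XOR in place of addition, i.e. polynomial multiplication over GF(2). A minimal portable sketch follows; real code would instead use the `_mm_clmulepi64_si128` intrinsic from `<wmmintrin.h>`, which maps to PCLMULQDQ.]

```c
#include <stdint.h>

/* Portable carry-less multiply: 64x64 -> 128-bit product in (hi, lo).
 * Each set bit i of b contributes (a << i), combined with XOR rather
 * than addition, so no carries propagate between bit positions.
 * Hardware does this in a few cycles via PCLMULQDQ. */
void clmul64(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi) {
    uint64_t l = 0, h = 0;
    for (int i = 0; i < 64; i++) {
        if ((b >> i) & 1) {
            l ^= a << i;
            if (i) h ^= a >> (64 - i);   /* bits shifted past position 63 */
        }
    }
    *lo = l;
    *hi = h;
}
```

For example, clmul of 3 and 3 gives 5, reflecting (x+1)(x+1) = x^2 + 1 over GF(2); a GF(2^n) multiply is this followed by reduction modulo the field polynomial.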
> Just out of interest, *why* isn't MXOR in the Intel instruction set? It doesn't seem too complicated to implement, and it's useful both in cryptography and data analysis (see 'persistent homology').
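[For readers unfamiliar with MXOR: it comes from Knuth's MMIX, where each 64-bit register is treated as an 8x8 bit matrix and the two matrices are multiplied over GF(2), with AND as multiplication and XOR as addition. A software sketch, taking byte i as row i and bit k as column k; MMIX's own byte/bit ordering conventions may differ.]

```c
#include <stdint.h>

/* Emulate an MXOR-style 8x8 bit-matrix multiply over GF(2).
 * result[i][j] = XOR over k of (a[i][k] AND b[k][j]).
 * Row i of the result is the XOR of those rows of b selected
 * by the set bits in row i of a. */
uint64_t mxor(uint64_t a, uint64_t b) {
    uint64_t r = 0;
    for (int i = 0; i < 8; i++) {                 /* rows of a / result */
        uint8_t arow = (uint8_t)(a >> (8 * i));
        uint8_t rrow = 0;
        for (int k = 0; k < 8; k++) {             /* columns of a = rows of b */
            if (arow & (1u << k))
                rrow ^= (uint8_t)(b >> (8 * k));  /* XOR in row k of b */
        }
        r |= (uint64_t)rrow << (8 * i);
    }
    return r;
}
```

With this layout the identity matrix is 0x8040201008040201 (row i has only bit i set), and multiplying by it on either side returns the other operand unchanged.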
participants (1): Henry Baker