On Tue, 23 Sep 2008, Henry Baker wrote:
Does anyone on this group have an opinion about the use of nVidia's new "CUDA" language & architecture for mathematics? (This is not a troll -- I have no relationship with nVidia.) [...]
I get excited about GPU processing every few years, then remember why I lost interest last time. The architectures are very constricting, so that even tight-loop things like crypto take a lot of work to make them run fast. If you're only doing matrix multiplies you might be okay. Worse, the architecture keeps changing, so your code is likely to go obsolete within a year or two. Of course, eventually we can expect the architecture to settle down as we figure out what we're doing, but I don't think we're quite there yet.
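For what it's worth, the matrix-multiply case is about the friendliest workload the hardware offers: regular memory access, no divergent branching, one thread per output element. A minimal sketch of a naive CUDA kernel illustrating that shape (the kernel name, the 512x512 square-matrix test, and the 16x16 block size are my own choices for illustration, not anything from the thread; a tuned version would add shared-memory tiling, which is exactly the kind of architecture-specific work being complained about above):

#include <cuda_runtime.h>
#include <stdio.h>

/* Naive dense matrix multiply C = A * B for square N x N matrices.
   One thread computes one element of C; no shared-memory tiling. */
__global__ void matmul_naive(const float *A, const float *B, float *C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main(void)
{
    const int N = 512;
    size_t bytes = (size_t)N * N * sizeof(float);

    /* Host buffers filled with trivial test data. */
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < N * N; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    /* Device buffers and host-to-device copies. */
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    /* One 16x16 block of threads per 16x16 tile of C. */
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul_naive<<<grid, block>>>(dA, dB, dC, N);
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

    /* Every element should be 2 * N with this test data. */
    printf("C[0] = %f (expect %f)\n", hC[0], 2.0f * N);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}

Anything with irregular control flow or scattered memory access (the crypto example, say) quickly stops looking this tidy, which is where the per-generation tuning effort goes.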