[math-fun] nVidia's "CUDA" for math applications?
Does anyone on this group have an opinion about the use of nVidia's new "CUDA" language & architecture for mathematics? (This is not a troll -- I have no relationship with nVidia.) I understand that if you have an nVidia graphics chip in your computer (I think that most of the Apple units now have one), you can utilize the CUDA software to access the parallel capabilities of this chip for general computation. www.nvidia.com/cuda

-----
The New York Times, September 23, 2008
Nvidia Chip Speeds Up Imaging for Industrial Use
By ASHLEE VANCE

SANTA CLARA, Calif. - Figuring out the best way to transform a frozen pizza into a perfectly warmed pie, gooey on top and crispy on the bottom, is as much a computer problem as a work of culinary art.

General Mills, maker of the Totino's and Jeno's brands of pizzas, would prefer not to whip up a thousand combinations of mozzarella cheese, tomato paste, crust and chemicals and blast them with microwave radiation. It's a lot cheaper and easier to model different pizzas using a sophisticated computer and only cook up the best candidates.

To speed up the task, General Mills turned to computers containing high-powered graphics chips from Nvidia, a Santa Clara, Calif., company best known for making video games look more realistic on game consoles and personal computers. Energy exploration firms, clothing designers, medical companies and financial services firms have also bought systems running on Nvidia chips.

All of these companies share a common problem: they need hardware that can analyze a vast quantity of data and do it much faster than standard computers. Nvidia, which dominates the market for stand-alone graphics processors, has a clear lead over competitors to provide this kind of industrial data crunching, thanks to a risky bet the company made several years ago.

Deliberately giving up some of its graphics performance, Nvidia created a new interface, released in 2006, that lets computer programmers easily tap the hundreds of processing engines on a graphics chip to handle other tasks that require a large number of simultaneous calculations.

"A couple of billion dollars in R. & D. later, scientists and researchers around the world have come out to thank us," said Jen-Hsun Huang, Nvidia's co-founder and chief executive.

If the company's expensive gamble pays off, Nvidia could break out of its graphics niche and become a far more significant player in the computing landscape. "Once you have lots and lots of companies writing programs that run on these types of products, then you have this potential of a snowball effect," said Hans Mosesmann, a semiconductor analyst with Raymond James & Associates.

However, Intel and Advanced Micro Devices, the dominant players in the market for conventional computer processors, are working on their own high-end chips that will be optimized for complex computer modeling. More immediately, Intel and A.M.D. are also attacking Nvidia's core business: the graphics chips that go into ordinary laptops and desktop PCs.

Nvidia is already feeling the pressure. In August, it reported a 5 percent year-on-year drop in second-quarter revenue, to $893 million, and recorded a $196 million charge tied to the replacement of faulty products shipped in notebooks. Last week, Nvidia laid off about 360 workers, or 6.5 percent of its work force. Its shares closed Monday at $11.17, down from nearly $40 last October.

Nvidia's graphics processor units differ from mainstream Intel and A.M.D. processors in how they handle software.
The Nvidia chips break problems up into many parts and then try to solve those problems at the same time. Standard processors, by contrast, crank through one problem at a time as quickly as possible.

The Nvidia technique has proved remarkably adept at handling the display of pixels on a screen, where there are a lot of parts changing at the same time, typically video images. But some companies and research institutions are finding that graphics processors can handle other kinds of work 10 to 150 times faster than standard processors by breaking up large problems into smaller tasks and reassembling the results later.

For example, Techniscan Medical Systems of Salt Lake City has turned to Nvidia's graphics processors to speed up a three-dimensional breast scanning device that could be used for cancer detection if the machine received regulatory approval. Techniscan must turn tens of gigabytes of raw data generated by transmitting pulses of energy through a breast submerged in water into medical image files that consume just 100 megabytes. This whole process used to take a couple of hours using Intel's processors and now takes just 15 minutes with Nvidia's hardware.

"If we get it down to 15 minutes per scan, then a patient can come in, fill out their paperwork, have the test and get the results in a single visit to the doctor," said Jim Hardwick, a software engineer at Techniscan. "This is extremely important."

The oil and gas industry, which tries to determine the most promising locations to drill wells, has found that Nvidia's technology can vastly decrease the time needed to make sense of reams of geologic data. SeismicCity, based in Houston, analyzes data given off by manmade explosions used to ripple waves through the ground and detect changes in density that may point to fruitful places for drilling a new oil well. In the past, the company was limited by the performance of standard microprocessors. It wanted to tweak its algorithm more often to produce more accurate data for its oil company customers.

"With the Nvidia chips, we sped up our calculations by 10 times," said David Kessler, SeismicCity's president. "That doesn't happen every day. Now when oil companies drill wells, their success rate should go up."

Nvidia has shipped close to 100 million processors with the new programmable interface, called Compute Unified Device Architecture, or CUDA. The programming kit has been downloaded more than 150,000 times. Academic institutions have embraced the technology, with the University of Illinois, for example, setting up a teaching center at Urbana-Champaign dedicated to CUDA and Nvidia's corresponding Tesla servers. A few companies, including Acceleware of Canada, have centered their business around Nvidia's software and hardware, selling customized solutions to General Mills, Eli Lilly, Nokia and other industrial customers.

"We think that the number of people interested in this technology is in the tens of millions," Mr. Huang said. "It is not just here. It is in China and India and Russia and Brazil. Many billions of dollars is our rough estimate for the size of the market."

Sangeeth Peruri, an analyst at J. & W. Seligman, said Nvidia's new technology could transform the company. "We have to see whether this will ramp or not, but I do think this is pretty amazing technology," he said.

Nvidia's competitors are more dismissive. Executives at A.M.D. and Intel argue that a rather small set of very sophisticated software can take advantage of the CUDA design.
"They are severely restricted and limited," said Dave Hofer, a director of marketing at Intel. "In the short term, it is not a massive threat."

Intel plans to release a competing product called Larrabee in 2009 or 2010. A.M.D. is promoting a fledgling programming layer from Apple called OpenCL, or Open Computing Language, which A.M.D. hopes will blunt CUDA's momentum should it be ready for widespread use as expected in 2009.

Mr. Huang, however, said the competition is underestimating his company's lead. "We will have shipped 300 million units with CUDA by the time those other guys are ready," Mr. Huang said. "We probably have a four-year lead on Intel."
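To make the original question concrete, here is a minimal CUDA C sketch (illustrative only, not from the thread) of what general computation on the graphics chip looks like: each GPU thread computes one element of y = a*x + y. Names and sizes are arbitrary, and error checking is omitted.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: runs on the GPU, one thread per array element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];                     // each thread handles one element
}

int main() {
    const int n = 1 << 20;                          // about a million elements
    const size_t bytes = n * sizeof(float);

    // Host data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device data: copy in, launch the kernel, copy back.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads; // enough blocks to cover n
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}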
On Tue, 23 Sep 2008, Henry Baker wrote:
Does anyone on this group have an opinion about the use of nVidia's new "CUDA" language & architecture for mathematics? (This is not a troll -- I have no relationship with nVidia.) [...]
I get excited about GPU processing every few years, then remember why I lost interest last time. The architectures are very constricting, so that even tight-loop things like crypto take a lot of work to make them run fast. If you're only doing matrix multiplies you might be okay. Worse, the architecture keeps changing, so your code is likely to go obsolete within a year or two. Of course, eventually we can expect the architecture to settle down as we figure out what we're doing, but I don't think we're quite there yet.
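For the "matrix multiplies" case Jason mentions, the mapping onto the hardware is direct: one GPU thread per output entry. A minimal, untuned sketch (illustrative only, not from the thread; production code would tile through shared memory or call the vendor's BLAS):

// Naive dense multiply of two n x n row-major matrices: C = A * B.
// Each thread computes a single entry of C.
__global__ void matmul(int n, const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col]; // dot product: row of A with column of B
        C[row * n + col] = sum;
    }
}

// Typical launch, with A, B, C already in device memory:
//   dim3 threads(16, 16);
//   dim3 blocks((n + 15) / 16, (n + 15) / 16);
//   matmul<<<blocks, threads>>>(n, dA, dB, dC);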
I think that higher-level languages like CUDA and RapidMind are supposed to prevent your code from becoming obsolete. Your algorithm had better be highly parallelizable, or else. Also, I believe you have only 18 bits of accuracy with floating-point values on a graphics chip; that's even worse than single-precision floats, which have 23 bits. Bob
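One way to check the precision question empirically on a given chip is to measure the machine epsilon of addition on the device: the smallest power of two that still changes 1.0 when added to it. A result of 2^-23 (about 1.19e-7) corresponds to the full 23-bit single-precision mantissa. A small sketch (illustrative only, assuming the standard CUDA toolkit):

#include <cstdio>
#include <cuda_runtime.h>

// Halve eps until adding eps/2 to 1.0f no longer changes it.
__global__ void machine_eps(float *out) {
    float eps = 1.0f;
    while (1.0f + eps * 0.5f > 1.0f)
        eps *= 0.5f;
    *out = eps;          // 2^-23 on IEEE-style single-precision hardware
}

int main() {
    float *d_eps, h_eps;
    cudaMalloc((void **)&d_eps, sizeof(float));
    machine_eps<<<1, 1>>>(d_eps);
    cudaMemcpy(&h_eps, d_eps, sizeof(float), cudaMemcpyDeviceToHost);
    printf("device float eps = %g (2^-23 is about 1.19e-07)\n", h_eps);
    cudaFree(d_eps);
    return 0;
}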
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
participants (3)
- Henry Baker
- Jason
- Robert Baillie