Monday, October 02, 2006
ATI Steps Into Stream Computing

From Extremetech:

Call it Stream Computing, call it GP-GPU: it all refers to the same thing, using the massively parallel processing power of modern graphics cards to perform certain non-graphics tasks.
Some types of computing problems require crunching numbers (usually floating-point math) on huge sets of data. Some of these involve lots of branching logic; others can be handled in more of a streaming fashion, applying the same operation to every element of the data set. The latter kind of task is the perfect candidate for GP-GPU (general-purpose computing on GPUs) acceleration.
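To make the distinction concrete, here's a minimal sketch of the kind of streaming, branch-free kernel that maps well onto a GPU. It's written in CUDA C purely for illustration; the kernel, names, and sizes are ours, not anything from ATI's announcement.

#include <cuda_runtime.h>

// Streaming-style kernel: every thread applies the same floating-point
// operation to one element of a large array. The only branch is a bounds
// check; nothing depends on the data itself.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                     // one million elements
    size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMalloc((void **)&x, bytes);
    cudaMalloc((void **)&y, bytes);
    // A real program would copy input data from the host here.

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover all n elements
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}

Every element gets the same arithmetic, and thousands of threads can run it side by side; that is exactly the shape of workload GP-GPU acceleration rewards, while branch-heavy code is better left on the CPU.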
Today, ATI announced its Stream Computing Initiative. ATI's products are already in desktop and laptop computers, game consoles, handheld devices, and televisions. Now the company aims to move into enterprise computing, targeting the market traditionally served by High Performance Computing (HPC). Think computing clusters built to solve big math problems: atmospheric analysis, airflow simulation for cars and planes, crash simulation, seismic analysis in the oil industry, medical research, cryptography, that kind of thing.
The Future of Parallel Computing?
Are GPUs the future of massively parallel computing? We wouldn't go so far as to say that. The CPU will continue to be very important, and ATI reiterated several times that it is "not trying to compete with the CPU." Still, there are certain kinds of tasks that, given the right software, will map very well onto the GPU's processing capabilities. When that happens, raw performance, performance per watt, and performance per cubic foot all go through the roof. We're only in the infancy of this technology, and software solutions like PeakStream will make it easier for supercomputer clusters to take advantage of the highly parallel floating-point-crunching power of the GPU. Future graphics architectures from both ATI and Nvidia will be better suited to GP-GPU tasks, offering even better performance.
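What a stream-computing software layer buys you, roughly, is the ability to express that mapping without hand-writing GPU launch code in every application. The sketch below is our own hypothetical wrapper in CUDA C; it is not PeakStream's API or anyone else's, just an illustration of the idea.

#include <cuda_runtime.h>

// Hypothetical "stream library" sketch: a generic map over a device array.
// The application supplies the per-element operation; the library hides
// the details of how that work is spread across the GPU.

struct Scale {                                 // example operation: multiply by a constant
    float factor;
    __device__ float operator()(float v) const { return v * factor; }
};

template <typename Op>
__global__ void map_kernel(int n, const float *in, float *out, Op op)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = op(in[i]);
}

template <typename Op>
void stream_map(int n, const float *in, float *out, Op op)
{
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    map_kernel<<<blocks, threads>>>(n, in, out, op);  // launch handled by the library
}

int main(void)
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc((void **)&in, n * sizeof(float));
    cudaMalloc((void **)&out, n * sizeof(float));

    Scale triple = { 3.0f };
    stream_map(n, in, out, triple);            // application code never touches <<< >>>
    cudaDeviceSynchronize();

    cudaFree(in);
    cudaFree(out);
    return 0;
}

That separation, application math on one side and hardware mapping on the other, is what lets the same simulation code keep benefiting as future GPU architectures get better at general-purpose work.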
High Performance Computing is all about parallel processing, and everyone who builds a supercomputer worries about how to power it, how to cool it, how much space and weight it takes up, and how much it costs. If a GPU can run many parts of the simulation software 10 to 20 times faster than a CPU, you can quickly see how a rack of 3U servers, each with two or even four graphics cards, would dramatically increase performance without taking up more space or power. It could be that, within a few years, the most powerful supercomputers in the world will leverage hundreds of graphics cards. ATI made it clear today that it's working to make that happen, and it already has real-world scientific applications up and running with GPU acceleration. We expect to hear much more about this from Nvidia in the not-too-distant future, as that company has also been working on the GP-GPU problem for years.
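To see how a 10- to 20-times speedup on parts of a simulation translates into whole-application gains, here is a rough back-of-the-envelope estimate using Amdahl's law. The GPU-friendly fractions below are our own illustrative assumptions, not figures from ATI.

#include <stdio.h>

// Amdahl's-law estimate: a fraction p of the run time moves to the GPU
// and speeds up by a factor s; the remaining (1 - p) stays on the CPU.
static double overall_speedup(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.80, 0.95 };  // hypothetical GPU-friendly fractions
    const double speedups[]  = { 10.0, 20.0 };        // the 10x-20x figure from the article

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 2; j++)
            printf("GPU fraction %.0f%%, kernel speedup %2.0fx -> overall %.1fx\n",
                   fractions[i] * 100.0, speedups[j],
                   overall_speedup(fractions[i], speedups[j]));
    return 0;
}

Even with a 20x kernel, the overall win is capped by how much of the code actually runs on the GPU (about 2x at a 50 percent fraction, roughly 10x at 95 percent), which is why the "right software" matters as much as the hardware.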