Statalist



Re: st: GPUs and statistical computing


From   "Michael I. Lichter" <MLichter@Buffalo.EDU>
To   statalist@hsphsun2.harvard.edu
Subject   Re: st: GPUs and statistical computing
Date   Mon, 24 Aug 2009 01:08:39 -0400

If GPU-like hardware were really useful for statistical and other mathematical programming, it shouldn't be hard to produce a double-precision version. The real question is how useful GPU-like hardware would be for general-purpose statistical/mathematical programming. The problem is that fine-grained multiprocessing (using a lot of processors at the same time, not just the 2 or 4 in contemporary PCs) only helps for certain kinds of problems, and it often requires a good deal of special-purpose programming -- like the programming discussed in the article -- to accomplish anything.
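As a rough illustration of the precision concern (my own sketch, not from the article): accumulating a long sum in single precision, which is all most GPUs offered at the time, drifts noticeably, while the same sum in double precision stays close to the exact answer. The `to_f32` helper below is a hypothetical stand-in that rounds each intermediate result to 32 bits, mimicking single-precision hardware in plain Python.

```python
import struct

def to_f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

vals = [0.1] * 1_000_000  # exact sum is 100000.0

# Single-precision-style accumulation: round after every operation.
total32 = 0.0
for v in vals:
    total32 = to_f32(total32 + to_f32(v))

# Ordinary double-precision accumulation.
total64 = sum(vals)

print(total64)   # very close to 100000.0
print(total32)   # visibly off in the later digits
```

The double-precision sum is accurate to well under a thousandth; the simulated single-precision sum accumulates an error several orders of magnitude larger, which is the kind of drift that worried people about GPU arithmetic for statistics.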

The authors of the cited article say that the GPU works in a SIMD (single instruction, multiple data) fashion, which means that all of the processing units do the same thing to different bits of data. There are occasions in statistical programming when doing the same thing simultaneously to different bits of data is exactly what you want. If you want to find the mean or the sum of a set of numbers, for example, you can divide your data into N pieces (N = number of processors; 240 in the article's example) and have each processor sum its piece. These piecewise sums would then be available to all of the processors through shared memory (also mentioned in the article), and then, perhaps, half of the processors would each add two of the first-round sums, a quarter of the processors would each add two of the second-round sums, etc., until the data were all summed. Not all problems are well suited to this kind of approach, and my general impression is that while SIMD works great for certain applications, it works poorly for most.
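A sequential sketch of that tree-style reduction (the 8-way split and the `tree_sum` name are mine, purely for illustration; a real GPU would run the chunks and the combining rounds concurrently):

```python
def tree_sum(data, n_procs=8):
    """Sum `data` the way the SIMD scheme above does: split it into
    n_procs chunks, sum each chunk (round 0), then combine pairs of
    partial sums in log2(n_procs) further rounds."""
    # Round 0: each "processor" sums its own slice of the data.
    chunk = (len(data) + n_procs - 1) // n_procs
    partials = [sum(data[i * chunk:(i + 1) * chunk]) for i in range(n_procs)]

    # Later rounds: half the processors combine pairs of sums,
    # then a quarter, and so on, until one total remains.
    while len(partials) > 1:
        partials = [partials[i] + partials[i + 1]
                    for i in range(0, len(partials), 2)]
    return partials[0]

print(tree_sum(list(range(1, 101))))  # 5050, same as the serial sum
```

With n_procs processors the combining phase takes log2(n_procs) rounds instead of n_procs - 1 serial additions, which is where the speedup comes from on problems that fit the pattern.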

Michael

David Airey wrote:

In the following interesting article,

http://www.biomedcentral.com/1756-0500/2/149

the authors "...show that a single GPU machine containing three GPUs that costs $2000 performs similarly to 150 CPUs on a compute cluster. These CPUs, including the added infrastructure and support cost of the cluster system, would cost approximately $82,500."

Occasionally I ask Stata Corp about the possibility of parallel computing using graphics processing chips, for example:

http://www.stata.com/statalist/archive/2009-06/msg00329.html

The problem seems to be that GPUs do not generally support double-precision arithmetic. I guess one question for the authors of the above article is: fast yes, but precise?

-Dave
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

--
Michael I. Lichter, Ph.D. <mlichter@buffalo.edu>
Research Assistant Professor & NRSA Fellow
UB Department of Family Medicine / Primary Care Research Institute
UB Clinical Center, 462 Grider Street, Buffalo, NY 14215
Office: CC 126 / Phone: 716-898-4751 / FAX: 716-898-3536




© Copyright 1996–2014 StataCorp LP   |   Terms of use   |   Privacy   |   Contact us   |   What's new   |   Site index