Statalist



RE: st: gllamm and computational speed


From   "Verkuilen, Jay" <JVerkuilen@gc.cuny.edu>
To   "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject   RE: st: gllamm and computational speed
Date   Wed, 30 Dec 2009 12:27:35 -0500

Hillel Alpert wrote:

>>I am thinking of a high performance research cluster of high-speed dual processor computers, tied together with the LINUX operating system. I gather that Stata/MP is the version needed. Does anyone know whether gllamm would benefit? I am finding that it is otherwise impossible to use with a large data set.<<

I'm guessing it's not easily parallelized. 

There are a lot of tricks to speed up -gllamm-; you may want to try these. For example, if you have categorical data, aggregating to response patterns rather than individual observations often helps a lot, though with a large number of variables that may not yield much simplification. Relatedly, you can often transform a problem from one representation to another and cut down the effective N. Getting good starting values helps A LOT; these can often be obtained by fitting simpler models, either inside -gllamm- itself or with other Stata procedures such as -xtmixed-, -xtmelogit-, or -xtmepoisson-. No question, though, -gllamm- is not speedy on a big dataset.
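To illustrate the starting-values trick, here is a minimal sketch. The variable names (y, x, id) are hypothetical, and the exact handling of the coefficient vector is an assumption: -gllamm- parameterizes the random part differently from -xtmelogit-, so the vector passed to from() may need rearranging before it is conformable.

```stata
* Hypothetical random-intercept logit: binary response y, covariate x,
* cluster identifier id. Names are placeholders for your own data.

* Step 1: fit a simpler/faster model to obtain rough parameter estimates.
xtmelogit y x || id:
matrix b0 = e(b)

* Step 2: hand those estimates to -gllamm- as starting values via from().
* The copy option tells -gllamm- to take the values by position; check
* that b0 is ordered and dimensioned the way -gllamm- expects, since the
* two commands parameterize the variance components differently.
gllamm y x, i(id) family(binomial) link(logit) from(b0) copy
```

With decent starting values, -gllamm- typically needs far fewer Newton-Raphson iterations, each of which requires an expensive numerical integration over the random effects, so the savings on a large dataset can be substantial.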

Jay

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
