From | "JVerkuilen (Gmail)" <jvverkuilen@gmail.com> |
To | statalist@hsphsun2.harvard.edu |
Subject | Re: st: xtmelogit slow, not converging |
Date | Wed, 19 Sep 2012 00:01:57 -0400 |
On Tue, Sep 18, 2012 at 7:58 PM, Valeria Fajardo <valeriabutler@gmail.com> wrote:
> <snip>
> Besides reducing the size of my sample, how can I make this run more
> smoothly? How much time should this type of routine be expected to
> take?

That sample size is quite large and you have a complicated model, so it could take quite a while. Try fitting it with fixed effects only first to get reasonable starting values. Then add a random intercept and see how long that takes. I'd turn the difficult option on, too. With a relatively complex GLMM you almost always have to work through a few simpler models to get a feel for things and generate good starting values before fitting the full model; see the sketch at the end of this message.

If you can't get that to work, you might want to sample from your dataset to bring the number of observations down, but I've run 100,000 observations on a not especially impressive five-year-old MacBook Pro and Stata did the job in a reasonable amount of time. Sampling down to 10% of your dataset for a specification search isn't a bad idea anyway.

> I'm finding it so cumbersome I'm considering the switch to SPSS.

99.44% guarantee that will be worse and more cumbersome. Their GLMM program is dicey in terms of what control it gives the user, and it is very slow. If you want to switch programs, try R with lme4, or SAS with GLIMMIX, which will let you use the much faster but less accurate PQL estimation, or get your hands on HLM, which has a higher-order Laplace approximation that is very fast. I really doubt you'll beat Stata for estimation speed in the vast majority of cases, though.
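For concreteness, here is a rough, untested sketch of that build-up in Stata (y, x1-x3, and id are just placeholders for your outcome, covariates, and grouping variable, so adapt them to your data):

* work on a 10% random sample of observations while searching for a specification
set seed 12345
preserve
sample 10

* step 1: fixed effects only, to check the specification and get starting values
logit y x1 x2 x3
matrix b0 = e(b)

* step 2: add a single random intercept; difficult helps with a badly behaved
* likelihood, and from() is optional -- xtmelogit will generate its own
* starting values if you leave it out
xtmelogit y x1 x2 x3 || id: , from(b0) difficult

restore

Once those run cleanly, refit on the full data and add the rest of the random-effects structure a piece at a time.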