The Stata listserver

Re: st: question on GLLAMM

From   Michael Ingre <[email protected]>
To   [email protected]
Subject   Re: st: question on GLLAMM
Date   Thu, 19 Aug 2004 21:08:15 +0200

On 2004-08-19, at 16.53, Stanislav Kolenikov wrote:

If you have a
dataset of say 10000 individuals, and you also want to take some
sample design clustering into account... you are doomed to wait for a
few weeks for the model to converge.
There are, however, techniques to speed up -gllamm- that can be extremely efficient with large datasets, if the number of response patterns is limited. Let's say you have 10,000 observations but only 5*3*10 = 150 possible response patterns; if you use -contract , freq(_freq)- you end up with only 150 observations with varying frequency weights. This dataset should be estimated with -gllamm , weight(_freq)- in about the same amount of time as a dataset with only 150 individuals.
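The equivalence behind the -contract- trick can be sketched outside Stata: collapsing duplicate rows into unique response patterns with frequency weights leaves the log-likelihood unchanged, so the optimizer does the same work over far fewer rows. A minimal Python illustration (a plain Bernoulli-logit likelihood stands in for the -gllamm- model; the data and function names are hypothetical):

```python
import math
from collections import Counter

def loglik(rows, beta):
    """Full-data Bernoulli-logit log-likelihood; each row is (y, x)."""
    ll = 0.0
    for y, x in rows:
        p = 1.0 / (1.0 + math.exp(-beta * x))
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

def loglik_weighted(patterns, beta):
    """Same likelihood over contracted data: (y, x) -> frequency."""
    ll = 0.0
    for (y, x), freq in patterns.items():
        p = 1.0 / (1.0 + math.exp(-beta * x))
        ll += freq * (math.log(p) if y == 1 else math.log(1.0 - p))
    return ll

# 10,000 observations, but only a handful of distinct (y, x) patterns
rows = [(i % 2, (i % 3) - 1) for i in range(10_000)]
patterns = Counter(rows)  # the analogue of -contract , freq(_freq)-
print(len(patterns))      # 6 distinct patterns instead of 10,000 rows
# the two log-likelihoods agree, so estimation on the contracted
# data recovers the same model in a fraction of the time
print(abs(loglik(rows, 0.5) - loglik_weighted(patterns, 0.5)) < 1e-6)
```

The per-pattern cost of each likelihood evaluation is what makes -gllamm- with -weight()- so much faster here: the expensive quadrature is done once per pattern, not once per observation.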

unless you have the latest Cray at your disposal, the model should be
kept to a moderate size. [ .... ] (It took a few days with
-oprobit- link and the panel structure with just one random effect on
my computer.)
That is so true. It just so happened that two days ago I had a paper accepted for publication that summarises 323 hours (!) of computing time (Mac G4, 800 MHz). A total of 10 models of varying complexity and dependent variables were estimated: 3 binary (logit) and 7 ordinal (ologit). The data were longitudinal, with 60 repeated measures on 17 subjects (1020 observations). With only two random effects I expect the models to converge in about 1/10 of the time. With 4 random effects I expect one model to converge in 200-300 hours, and with 5+ random effects I would probably need a Cray (or maybe a G5).


