Statalist



st: RE: RE: gllamm or else?


From   "Mentzakis, Emmanouil" <e.mentzakis@abdn.ac.uk>
To   "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject   st: RE: RE: gllamm or else?
Date   Mon, 30 Jun 2008 09:43:02 +0100

Stefan Boes has written -regoprob- to estimate generalized ordered probit models with random effects.
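As a sketch (not from the original post), -regoprob- is available from SSC; the outcome, covariates, and panel identifier below are placeholders:

```stata
* Hedged sketch: install and run Stefan Boes's -regoprob-
* (y, x1, x2, and id are placeholder names; see -help regoprob- for options)
ssc install regoprob
regoprob y x1 x2, i(id)
```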

Regards
Manos


________________________________________
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] On Behalf Of Verkuilen, Jay [JVerkuilen@gc.cuny.edu]
Sent: 28 June 2008 21:44
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: gllamm or else?

Andrea Bennett wrote:


>>What I am not sure about is how I should perform the estimation. From

a theoretical standpoint, I would assume fixed effects in the first
data set while for the second data set it is an open question, still
(I could group for political main topics, for example). Now, in OLS I
simply would include year dummies for fixed effects. But as far as I
know, I should not use year dummies in gllamm (and usually not in
probit/oprobit) for estimating fixed effects.<<

Year dummies aren't a disaster if you don't have many of them, but of
course they may not do what you need. You probably want to do cluster
corrected robust standard errors (cluster by year).
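The suggestion above might look like the following in Stata (a sketch only; the variable names are placeholders, and -xi- expands the year dummies since this predates factor-variable notation):

```stata
* Hedged sketch: pooled ordered probit with year dummies and
* standard errors clustered by year (y, x1, x2 are placeholders)
xi: oprobit y x1 x2 i.year, vce(cluster year)
```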


>>So, a) how would I include a simple fixed effect estimation in gllamm

(as I understand it, using i(year) applies random effects) <<

-xi- or, better, -findit xi3-.
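For example, -xi- can expand the year dummies before handing the equation to -gllamm- (a sketch under assumed placeholder names; the link/family choices follow gllamm's documented syntax for ordinal probit):

```stata
* Hedged sketch: year dummies via -xi-, random intercept at the id level
* (y, x1, x2, and id are placeholder names)
xi: gllamm y x1 x2 i.year, i(id) link(oprobit) family(binom) adapt
```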


>>Additionally, which gllamm options are best suited
for these kind of estimation? Should I use -adapt- and should I set a
specific value for -nip- ? Using only the -adapt- option (which I read
results in a better estimation) already results in 15min estimation
time for a sub-sample of about 3000 observations while the full data
set contains 25'000 observations. I really fear this will take forever
(besides that my computer dies from overheating!).<<

Numerical integration is inherently slow. In general, adaptive quadrature
requires a lot of resources, much more so than non-adaptive quadrature.
-gllamm- is also slow because it uses numerical derivatives rather
than analytic ones. This means estimation can be glacial.

One way to speed things up is to give it good starting values. These can
often be obtained by fitting a simpler model (often one that can be
estimated with a different command, e.g., -ologit-) and taking its
coefficients. For instance, using non-adaptive quadrature while you're
doing model specification, erring on the side of keeping variables
because of its lower accuracy, might be a viable strategy. Then do the
final fit by setting your program running on Friday before you go home,
with the plan to check it on Monday morning.
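The workflow above might be sketched as follows (all variable names are placeholders; -gllamm-'s from() option takes a coefficient vector as starting values, per its help file):

```stata
* Hedged sketch: fast, rough fit with non-adaptive quadrature and few points
* (y, x1, x2, and id are placeholder names)
gllamm y x1 x2, i(id) link(oprobit) family(binom) nip(4)
matrix b0 = e(b)

* Final fit: adaptive quadrature, starting from the rough estimates
gllamm y x1 x2, i(id) link(oprobit) family(binom) from(b0) adapt nip(8)
```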

Someone wrote a random effects version of -gologit2-. I don't know who
it was or the name of the program, unfortunately. Hopefully Rich
Williams will chime in....

Alternatively, other programs (such as Mplus) might be faster.

*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


The University of Aberdeen is a charity registered in Scotland, No SC013683.




© Copyright 1996–2014 StataCorp LP