
From: "Stas Kolenikov" <skolenik@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: gllamm log-likelihood cycle?
Date: Tue, 28 Oct 2008 09:17:56 -0500

I have not run into this particular kind of problem with -gllamm-, but I believe this is numeric instability -- your number of integration points may not be high enough. It looks like you are already using a good strategy with starting values. Mine are (i) to run some sort of baseline model, e.g. without random effects, and supply the starting values from that model; or (ii) to run the model on, say, a 5% or 10% subset of your data, get some sort of results out of it (10 or 20 times faster), and supply those as starting values. With the latter strategy you can also play around with the number of integration points to see whether there really are any problems there -- it should converge faster.

Remember that the time it takes -gllamm- to produce one iteration is roughly proportional to _N * (# integration points)^(total # nrf), and the number of iterations depends on how complicated the likelihood surface is (or rather, how complicated the numeric approximation to the likelihood surface is). Doubling the number of integration points in your case will lead to an 8-fold increase in time per iteration, since you have three random effects. That may sound devastating, but if it brings numeric stability you would probably see your model converge in those 20 iterations rather than still hang out there after 40. Having a multiprocessor machine with Stata/MP will certainly help if you run a lot of -gllamm- analyses.

On 10/28/08, Claudio Cruz Cazares <Claudio.Cruz@uab.cat> wrote:
> Hi to all,
>
> I am trying to run a two-level multinomial logit with random intercepts. I have a sample of 17,464 observations on 2,460 firms, giving an unbalanced panel but with no intermittent missing values.
>
> The model has been running for 21 days and it has not finished yet! It has done 40 iterations of adaptive quadrature (when I do not include the random intercepts it takes only 7 iterations of adaptive quadrature and over 18 hours to finish).
>
> I am trying to add one random intercept for each level of the dependent variable (4 levels).
> The syntax is (I did not collapse the data):
>
> tab AID, gen(a)
>
> eq a2: a2
> eq a3: a3
> eq a4: a4
>
> gllamm AID MED GDE IDVL1 ITEA EDAD PERSOC IPNCL1 IPNFL1 IPNML1 IPNDL1 IPRL1 DM1N NCM1N PX PATTL1 InFIDT, i(Identi) link(mlogit) base(1) fam(binom) nrf(3) eqs(a2 a3 a4) from(a) nocorr trace
>
> I observed that the log likelihood is not really being maximized, since the log likelihood of some iterations is lower than that of the one before:
>
> Iteration 0:  log likelihood = -12754.365
> Iteration 6:  log likelihood = -7105.7245
> Iteration 17: log likelihood = -6572.9334
> Iteration 30: log likelihood = -6486.7566
> Iteration 32: log likelihood = -6486.8283
> Iteration 34: log likelihood = -6480.9099
> Iteration 37: log likelihood = -6487.3043
>
> and so on.
>
> Is that normal? What could I do? I tried the same model with only 88 firms; after 7 days it is still running, with the same log-likelihood problem.
>
> Thank you in advance.
>
> Claudio

--
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
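[The time-per-iteration rule of thumb in the reply above -- roughly _N * (# integration points)^(# random effects) -- can be checked with a quick back-of-the-envelope calculation. The sketch below is illustrative arithmetic only, not measured -gllamm- timings; the baseline of 8 quadrature points is an assumption.]

```python
# Back-of-the-envelope cost model for one -gllamm- iteration:
# time is roughly proportional to N * q**nrf, where q is the number
# of quadrature points per random effect and nrf the number of
# random effects. Constants and overhead are ignored.

def relative_cost(n_obs, q, nrf):
    """Relative time for one iteration (arbitrary units)."""
    return n_obs * q ** nrf

N = 17464   # observations in Claudio's panel
nrf = 3     # three random intercepts (levels 2-4 of AID)

base = relative_cost(N, 8, nrf)      # e.g. with nip(8)
doubled = relative_cost(N, 16, nrf)  # doubling the integration points

print(doubled / base)  # -> 8.0: doubling q costs 2**3 = 8x per iteration

# The subsampling strategy: a 10% subsample is ~10x cheaper per
# iteration, which makes it a cheap source of starting values.
print(relative_cost(N, 8, nrf) / relative_cost(N // 10, 8, nrf))
```

[This also shows why adding a fourth random effect would be far more expensive than adding observations: cost is linear in _N but exponential in nrf.]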

References:
st: gllamm log-likelihood cycle?
From: Claudio Cruz Cazares <Claudio.Cruz@uab.cat>

