Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


From: Stas Kolenikov <skolenik@gmail.com>

To: statalist@hsphsun2.harvard.edu

Subject: Re: st: different standard errors with gllamm vs. xtmelogit

Date: Tue, 26 Oct 2010 10:56:09 -0500

Are the likelihoods the same? Sometimes I find small differences in the likelihood, too, which is indicative of a lack of accuracy (in at least one program). The first thing I would do is increase the number of integration points (in both models) to, say, 15 and see if the results change. In most -gllamm- examples from Sophia, the number of integration points is even, although I don't know if there is any particular reason for that.

On Tue, Oct 26, 2010 at 8:11 AM, de Vries, Robert <r.de-vries08@imperial.ac.uk> wrote:
> Hello everyone. I'm having a weird problem with gllamm and xtmelogit when running a fairly simple 2-level random intercepts model.
>
> The model predicts a binary health outcome from one level-2 variable (meangini) and several level-1 variables. It is a sample of 54,410 people in 16 countries, with 'country' as the level-2 cluster.
>
> The xtmelogit model looks like this:
>
> xi: xtmelogit poorhealth meangini age47 gndr i.education if poorhealth_sample==1 || country1 :
>
> (note that the number of integration points is left at the default of 7)
>
> It converges fine on iteration 3 with a log likelihood of -15046.989. The result I am interested in is for meangini, and in this model it is -0.030 (SE = 0.035).
>
> The gllamm model is identical (as far as I can tell):
>
> xi: gllamm poorhealth meangini age47 gndr i.education if poorhealth_sample==1, i(country1) nip(7) link(logit) f(binomial)
>
> However, the coefficient for the same variable is different (-0.041), and the standard error (0.0043) is over 8 times smaller.
>
> This is obviously extremely important in interpreting the statistical significance of the results, so I'd appreciate any help anyone might be able to offer as to what's going on.
>
> Cheers
> Rob

--
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
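[Editorial note: the advice above, to raise the number of integration points and see whether the estimates move, can be illustrated outside Stata. The sketch below is a minimal Python/NumPy illustration, not either command's actual algorithm: it approximates the kind of logistic-normal integral that appears in a random-intercept logit likelihood using Gauss-Hermite quadrature, and shows that with a large random-intercept variance a small number of nodes can be visibly off, while more nodes converge. The function name and the example values of `mu` and `sigma` are hypothetical, chosen only for illustration.]

```python
import numpy as np

def logit_normal_mean(mu, sigma, n_points):
    """Approximate E[invlogit(eta)] for eta ~ N(mu, sigma^2)
    using Gauss-Hermite quadrature with n_points nodes.

    This is the kind of one-dimensional integral a random-intercept
    logit likelihood must evaluate for each cluster."""
    # Nodes/weights for integrals of the form  int exp(-x^2) f(x) dx
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    # Change of variables to integrate against the N(mu, sigma^2) density
    eta = mu + np.sqrt(2.0) * sigma * nodes
    vals = 1.0 / (1.0 + np.exp(-eta))
    return float(weights @ vals / np.sqrt(np.pi))

if __name__ == "__main__":
    # High-node reference; compare coarser rules against it.
    ref = logit_normal_mean(-1.0, 2.0, 60)
    for m in (3, 7, 15, 31):
        approx = logit_normal_mean(-1.0, 2.0, m)
        print(f"nodes={m:2d}  approx={approx:.6f}  abs error={abs(approx - ref):.2e}")
```

With a small `sigma` even 7 points are typically fine; as `sigma` grows, the integrand spreads out and the error at 7 points becomes noticeable, which is consistent with checking whether the estimates are stable in the number of integration points. (The two commands may also differ in whether quadrature is adaptive by default, which is a separate source of discrepancy.)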

**Follow-Ups**:
- **RE: st: different standard errors with gllamm vs. xtmelogit** *From:* "de Vries, Robert" <r.de-vries08@imperial.ac.uk>
- **RE: st: different standard errors with gllamm vs. xtmelogit** *From:* Nick Cox <n.j.cox@durham.ac.uk>
- **st: What kind of "if" is this?** *From:* Steven Samuels <sjsamuels@gmail.com>

**References**:
- **st: different standard errors with gllamm vs. xtmelogit** *From:* "de Vries, Robert" <r.de-vries08@imperial.ac.uk>
