
Re: st: random effects estimation using gllapred


From   Stas Kolenikov <[email protected]>
To   [email protected]
Subject   Re: st: random effects estimation using gllapred
Date   Fri, 23 Jul 2010 12:36:05 -0500

There's something fatally wrong there. Check that you have the latest
versions of both -gllamm- and -gllapred-, or just brute-force the
update with -ssc install gllamm, replace- (-gllapred- ships as part of
the -gllamm- package). Does -xtlogit, re- give the same results?
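
Something along these lines, as a sketch only (I am reusing the
variable names from your posted model, and leaving the pweights out of
the -xtlogit- call, since the weighted and unweighted fits will not
match exactly anyway):

* brute-force the update; -gllapred- ships as part of the -gllamm- package
ssc install gllamm, replace

* cross-check the random-intercept fit with -xtlogit, re- (unweighted)
xi: xtlogit chvac i.newrace bridge_age, i(ph_name) re

If -xtlogit- also puts the region-level variance at essentially zero,
the problem is in the data rather than in the software.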

On the substantive side, note that the variance of the random effects
is not significantly different from zero; in other words, there is
essentially no evidence of region-level variability. -gllamm- may have
trouble empirically identifying the random effects, since your data
essentially tell it that there is no variability between regions. As a
result, -gllapred- probably encounters something like 0/0 somewhere and
chokes on it. I am thinking aloud here, though; I would still have
expected some non-zero values to be produced.
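
If you want a rough check of that, here is one sketch (again ignoring
the pweights; a likelihood-ratio test is not valid under pweighting in
any case, and the null here sits on the boundary of the parameter
space, so the reported p-value is conservative):

* pooled logit, no random effect
xi: logit chvac i.newrace bridge_age
estimates store pooled

* random-intercept logit
xi: xtlogit chvac i.newrace bridge_age, i(ph_name) re
estimates store ranef

* LR test of var(u) = 0; roughly halve the p-value for the boundary
lrtest ranef pooled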

Finally, I would take the level 2 sample size of 11 with a grain of
salt, to say the least.

On Fri, Jul 23, 2010 at 10:05 AM, Eberth, Jan Marie
<[email protected]> wrote:
> I'm having some odd results trying to estimate the random effects from my two-level, random-intercept logistic model (see code below). The cluster variable (region) has 11 values; thus, I assumed I would get 11 different random-effect estimates using the gllapred u command. Instead, I got the same value (0 for m1 and .114 for s1) for every individual, regardless of which region they were in. Does this make sense? Any ideas about what is going wrong?
>
> . xi: gllamm chvac i.newrace bridge_age, i(ph_name) l(logit) f(binom) pweight(pwt) nip(8) adapt
> i.newrace         _Inewrace_1-4       (naturally coded; _Inewrace_1 omitted)
> Running adaptive quadrature
> Iteration 0:    log likelihood = -372.34468
> Iteration 1:    log likelihood = -370.28595
> Iteration 2:    log likelihood =  -369.3438
> Iteration 3:    log likelihood = -369.34378
> Adaptive quadrature has converged, running Newton-Raphson
> Iteration 0:   log likelihood = -369.34378
> Iteration 1:   log likelihood = -369.34378  (backed up)
> Iteration 2:   log likelihood = -369.34211
> Iteration 3:   log likelihood = -369.34211
>
> number of level 1 units = 574
> number of level 2 units = 11
>
> Condition Number = 96.602157
>
> log likelihood = -369.34211
>
> Robust standard errors
> ------------------------------------------------------------------------------
>       chvac |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
> -------------+----------------------------------------------------------------
>  _Inewrace_2 |   .5383982   .5549945     0.97   0.332    -.5493711    1.626168
>  _Inewrace_3 |    .617085    .308448     2.00   0.045     .0125379    1.221632
>  _Inewrace_4 |     .56376   .7312115     0.77   0.441    -.8693881    1.996908
>  bridge_age |   .0546195   .0593968     0.92   0.358    -.0617961    .1710351
>       _cons |  -2.524056   .9048841    -2.79   0.005    -4.297597   -.7505158
> ------------------------------------------------------------------------------
>
> Variances and covariances of random effects
> ------------------------------------------------------------------------------
>
> ***level 2 (ph_name)
>
>    var(1): .01241444 (.02145351)
> ------------------------------------------------------------------------------
>
> . gllapred u_prac, u fsample
> (means and standard deviations will be stored in u_pracm1 u_pracs1)
> Non-adaptive log-likelihood: 0
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000     0.0000
>    0.0000     0.0000     0.0000     0.0000     0.0000
> log-likelihood:0
>
> THANKS! Your help is MUCH appreciated.
>
> Jan Eberth, MSPH
>



-- 
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.
