
st: gllamm, xtmixed, and level-2 standard errors


From   Trey Causey <[email protected]>
To   [email protected]
Subject   st: gllamm, xtmixed, and level-2 standard errors
Date   Tue, 16 Nov 2010 14:27:02 -0800

Greetings all. I am estimating a two-level, random-effects linear
model. I know that gllamm is not the most computationally efficient
option for this, but I am running into some very weird problems. I
have ~21,000 individuals nested in 16 countries. I have 9
individual-level predictors (listed as ind1-9) and 2 country-level
predictors (listed as c1 and c2). When I estimate the model using
gllamm, here are my results:

. gllamm DV ind1 ind2 ind3 ind4 ind5 ind6 ind7 ind8 ind9 c1 c2, i(id) adapt nip(16)

Running adaptive quadrature
Iteration 0:    log likelihood = -22865.024
Iteration 1:    log likelihood = -22841.735
Iteration 2:    log likelihood =  -22807.82
Iteration 3:    log likelihood = -22797.118
Iteration 4:    log likelihood = -22794.274
Iteration 5:    log likelihood = -22792.672
Iteration 6:    log likelihood = -22791.582
Iteration 7:    log likelihood = -22791.557
Iteration 8:    log likelihood = -22791.428
Iteration 9:    log likelihood = -22791.426


Adaptive quadrature has converged, running Newton-Raphson
Iteration 0:   log likelihood = -22791.426  (not concave)
Iteration 1:   log likelihood = -22791.426  (not concave)
Iteration 2:   log likelihood =  -22789.86
Iteration 3:   log likelihood = -22789.371
Iteration 4:   log likelihood = -22788.767
Iteration 5:   log likelihood = -22788.613
Iteration 6:   log likelihood = -22788.604
Iteration 7:   log likelihood = -22788.604

number of level 1 units = 21360
number of level 2 units = 16

Condition Number = 433.81863

gllamm model

log likelihood = -22788.604

------------------------------------------------------------------------------
      DV     |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        ind1 |  -.0020515    .000392    -5.23   0.000    -.0028198   -.0012833
        ind2 |  -.3839988    .010841   -35.42   0.000    -.4052468   -.3627508
        ind3 |   -.079134   .0113476    -6.97   0.000    -.1013749   -.0568931
        ind4 |   .0800358   .0109386     7.32   0.000     .0585966     .101475
        ind5 |   .0468417   .0048978     9.56   0.000     .0372423    .0564411
        ind6 |   .1685022   .0149735    11.25   0.000     .1391546    .1978497
        ind7 |  -.2057474   .0171485   -12.00   0.000    -.2393579   -.1721368
        ind8 |   -.093775   .0094251    -9.95   0.000    -.1122479   -.0753021
        ind9 |  -.0080367   .0021554    -3.73   0.000    -.0122613   -.0038122
          c1 |    .762577   .0802034     9.51   0.000     .6053813    .9197727
          c2 |   .1763846   .0664327     2.66   0.008     .0461789    .3065903
       _cons |   1.265279   .1023452    12.36   0.000     1.064686    1.465872
------------------------------------------------------------------------------

Variance at level 1
------------------------------------------------------------------------------

 .49269203 (.00476915)

Variances and covariances of random effects
------------------------------------------------------------------------------


***level 2 (id)

   var(1): .09866295 (.01101541)
------------------------------------------------------------------------------

When I estimate the model using xtmixed or xtreg, the output is
essentially the same until I get to the country-level predictors;
there the coefficients differ slightly, and the gllamm standard
errors are substantially *smaller* than the xtmixed ones (.080
versus .266 for c1, .066 versus .108 for c2):

. xtmixed DV ind1 ind2 ind3 ind4 ind5 ind6 ind7 ind8 ind9 c1 c2 || id:, mle
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0:   log likelihood = -22785.965
Iteration 1:   log likelihood = -22785.965
Computing standard errors:
Mixed-effects ML regression                     Number of obs      =     21360
Group variable: id                              Number of groups   =        16
                                                Obs per group: min =       730
                                                               avg =    1335.0
                                                               max =      2875

                                                Wald chi2(11)      =   2296.06
Log likelihood = -22785.965                     Prob > chi2        =    0.0000
------------------------------------------------------------------------------
      DV     |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        ind1 |  -.0020472   .0003917    -5.23   0.000     -.002815   -.0012794
        ind2 |  -.3840113   .0108422   -35.42   0.000    -.4052615    -.362761
        ind3 |  -.0790874   .0113578    -6.96   0.000    -.1013483   -.0568264
        ind4 |   .0799408   .0109411     7.31   0.000     .0584966     .101385
        ind5 |   .0468955   .0048961     9.58   0.000     .0372994    .0564916
        ind6 |   .1686695   .0149734    11.26   0.000     .1393222    .1980167
        ind7 |  -.2054921   .0172501   -11.91   0.000    -.2393018   -.1716824
        ind8 |  -.0941011   .0093698   -10.04   0.000    -.1124655   -.0757367
        ind9 |  -.0079976   .0021584    -3.71   0.000    -.0122279   -.0037672
          c1 |   .6718781   .2659761     2.53   0.012     .1505744    1.193182
          c2 |   .1812668   .1083347     1.67   0.094    -.0310652    .3935988
       _cons |   1.306302   .2079643     6.28   0.000     .8986998    1.713905
------------------------------------------------------------------------------
------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Identity                 |
                   sd(_cons) |   .2033876   .0363049      .1433454    .2885792
-----------------------------+------------------------------------------------
                sd(Residual) |   .7019342   .0033974       .695307    .7086246
------------------------------------------------------------------------------
LR test vs. linear regression: chibar2(01) =  1684.50 Prob >= chibar2 = 0.0000
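
For reference, the xtreg fit (a sketch of the call; mle requests the
random-effects ML estimator, and I am assuming id has been declared
as the panel variable) gives essentially the same answers as xtmixed:

. xtset id   // id is the country identifier, as in the gllamm call
. xtreg DV ind1 ind2 ind3 ind4 ind5 ind6 ind7 ind8 ind9 c1 c2, mle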


This is obviously a big problem for establishing significance. I have
read previous threads about this problem with xtlogit, but I have not
seen it mentioned for linear models, nor have I seen a solution. It
is not immediately clear to me why the estimates or standard errors
should differ at all: as Rabe-Hesketh and Skrondal note in their
book, gllamm is less computationally efficient for linear models, but
the results should be essentially the same. I have replicated this in
both Stata 10 and Stata 11.
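
One check that might help diagnose this (a sketch; nip(30) is just an
arbitrarily larger number of integration points, and the stored-estimate
names are mine) is whether the gllamm level-2 standard errors move once
the quadrature is made more accurate:

. gllamm DV ind1 ind2 ind3 ind4 ind5 ind6 ind7 ind8 ind9 c1 c2, i(id) adapt nip(30)
. estimates store g30   // g30: arbitrary name for the nip(30) fit
. xtmixed DV ind1 ind2 ind3 ind4 ind5 ind6 ind7 ind8 ind9 c1 c2 || id:, mle
. estimates store xtm
. estimates table g30 xtm, se   // compare coefficients and std. errors side by side

If the c1 and c2 standard errors stay several times smaller in gllamm,
the number of quadrature points is presumably not the explanation.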

Thank you very much.
Trey
-----
Trey Causey
Department of Sociology
University of Washington


