
Re: st: xtnbreg, nbreg, and tests of assumptions


From   Dalhia <[email protected]>
To   [email protected]
Subject   Re: st: xtnbreg, nbreg, and tests of assumptions
Date   Wed, 15 Dec 2010 09:12:34 -0800 (PST)

Oops, sorry. I don't know what I was thinking. Thanks, Mary, for the correction.

Here are the results from -xtnbreg- that don't make sense. Basically, I have panel data on hospitals (private, public, and associate), and looking at the average number of training days for each hospital type, I can see that private hospitals have fewer training days than public hospitals, with associate hospitals in the mid-range. However, when I run the model using -xtnbreg- (with random effects), I get a funny result: it looks like public and associate hospitals have a *lower* rate of training days per year than private ones. Am I interpreting the coefficients wrong, or is there something else going on? (Output attached below.)

When I run it using -nbreg- I get the opposite result, which is the one I was expecting: public and associate hospitals have a greater rate of training per year than private ones.
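A minimal sketch of how those raw averages can be tabulated, assuming the asso and pub dummies used in the model below (private is the omitted base category):

* mean training days by hospital type, same sample restriction as the model
summarize train if train < 12000 & asso == 0 & pub == 0   // private
summarize train if train < 12000 & asso == 1              // associate
summarize train if train < 12000 & pub == 1               // public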

Thanks for your help.
Dalhia

. xtnbreg train asso pub if train<12000, re irr
note: you are responsible for interpretation of non-count dep. variable

Fitting negative binomial (constant dispersion) model:

Iteration 0:   log likelihood = -1341968.9  
Iteration 1:   log likelihood = -1341967.5  
Iteration 2:   log likelihood = -1341967.5  

Iteration 0:   log likelihood = -504693.72  
Iteration 1:   log likelihood = -35614.007  
Iteration 2:   log likelihood =  -35604.55  
Iteration 3:   log likelihood = -35604.545  
Iteration 4:   log likelihood = -35604.545  

Iteration 0:   log likelihood = -35604.545  
Iteration 1:   log likelihood = -35595.175  
Iteration 2:   log likelihood = -35595.145  
Iteration 3:   log likelihood = -35595.145  

Fitting full model:

Iteration 0:   log likelihood = -81145.913  
Iteration 1:   log likelihood = -49940.372  (not concave)
Iteration 2:   log likelihood = -42786.562  (not concave)
Iteration 3:   log likelihood = -35793.307  
Iteration 4:   log likelihood =  -33256.88  
Iteration 5:   log likelihood = -33190.785  
Iteration 6:   log likelihood = -33150.666  
Iteration 7:   log likelihood = -33150.622  
Iteration 8:   log likelihood = -33150.622  

Random-effects negative binomial regression     Number of obs      =      7522
Group variable: fi                              Number of groups   =      1873

Random effects u_i ~ Beta                       Obs per group: min =         1
                                                               avg =       4.0
                                                               max =         5

                                                Wald chi2(2)       =      7.29
Log likelihood  = -33150.622                    Prob > chi2        =    0.0261

------------------------------------------------------------------------------
       train |        IRR   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        asso |   .8803461   .0551126    -2.04   0.042     .7786914    .9952712
         pub |   .9029852   .0380889    -2.42   0.016     .8313349    .9808108
-------------+----------------------------------------------------------------
       /ln_r |  -.8268984   .0334362                     -.8924322   -.7613647
       /ln_s |   .7346747   .0714634                      .5946091    .8747404
-------------+----------------------------------------------------------------
           r |   .4374038   .0146251                      .4096582    .4670286
           s |   2.084804   .1489872                      1.812322    2.398253
------------------------------------------------------------------------------
Likelihood-ratio test vs. pooled: chibar2(01) =  4889.04 Prob>=chibar2 = 0.000

. 
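For comparison, the pooled alternative with hospital-clustered standard errors would look like this (a sketch, not output from the original post; fi is the group variable named in the output above):

* pooled negative binomial; clustering on fi allows for within-hospital
* correlation when a key regressor is time invariant
nbreg train asso pub if train < 12000, irr vce(cluster fi)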



--- On Wed, 12/15/10, Mary E. Mackesy-Amiti <[email protected]> wrote:

> From: Mary E. Mackesy-Amiti <[email protected]>
> Subject: Re: st: xtnbreg, nbreg, and tests of assumptions
> To: [email protected]
> Date: Wednesday, December 15, 2010, 7:03 PM
> I think you meant to say that your *dependent* variable is count with
> overdispersion, and your *independent* variable is time invariant.
> Independent variables predict dependent variables.
> 
> Please post the -xtnbreg- command you used and the results you find
> questionable.
> 
> 
> On 12/15/2010 7:30 AM, Dalhia wrote:
> > Hi,
> >
> > I am trying to figure out whether I should use nbreg (with
> > correction for autocorrelation and heteroskedasticity) or xtnbreg
> > (with random effects). My independent variable is count with
> > significant overdispersion, and I have panel data (cross-sectional
> > time series). One of my main dependent variables is time invariant,
> > and therefore I cannot use xtnbreg fixed effects. xtnbreg random
> > effects is giving me some funny results that are hard to believe,
> > but how should I decide which one I should be using (xtnbreg or
> > nbreg)? Also, are there tests to check whether the assumptions of
> > these models are satisfied in my data?
> >
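Two readily available checks bear on that question (a sketch, reusing the variables from the output above):

* -nbreg- reports a likelihood-ratio test of alpha = 0, which flags
* overdispersion relative to Poisson
nbreg train asso pub if train < 12000, irr

* -xtnbreg, re- reports an LR test vs. the pooled estimator (the
* chibar2 line in the output above), which indicates whether the
* panel effects matter
xtnbreg train asso pub if train < 12000, re irr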
> > Finally, I have two dependent variables predicted by the same
> > independent variables, but I can't find a version of SUR that is
> > appropriate for negative binomial. I am not really interested in
> > cross-equation testing. If I don't do a seemingly unrelated
> > regression, does that bias the coefficients, or does it just make
> > them inefficient?
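Skipping SUR leaves the single-equation coefficients consistent; the cost is efficiency and the cross-equation covariances. If cross-equation inference ever becomes of interest, one possible workaround (a sketch with hypothetical variable names y1, y2, x1, and x2; -suest- handles most maximum-likelihood estimators, though check that it accepts -nbreg- results) is:

* fit each count equation separately, then combine with -suest-
* for a joint, robust VCE across equations
nbreg y1 x1 x2
estimates store eq1
nbreg y2 x1 x2
estimates store eq2
suest eq1 eq2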
> >
> > Thanks so much. I really appreciate your help.
> > Dalhia
> >
> > --- On Wed, 12/15/10, Maarten buis <[email protected]> wrote:
> >
> > From: Maarten buis<[email protected]>
> > Subject: Re: st: Difference between xtlogit,
> xtmelogit, gllamm
> > To: [email protected]
> > Date: Wednesday, December 15, 2010, 10:28 AM
> >
> > --- On Wed, 15/12/10, Rajaram Subramanian Potty wrote:
> >> I have event history data, and this data has been converted into
> >> discrete time to fit a discrete-time hazard model. Now I want to
> >> fit a multilevel model. But there are three different procedures:
> >> xtlogit, xtmelogit, and gllamm. I want to know which procedure is
> >> most appropriate for analysing the discrete-time data.
> > All three will do for a basic multilevel model for the odds (not
> > the hazard) of survival. If you want to fit a multilevel model for
> > the hazard of survival, you can use -gllamm- with the cll
> > (complementary log-log) link function. The difference between
> > -xtlogit- and -xtmelogit- is that the latter can accommodate more
> > complex multilevel structures.
> >
> > Hope this helps,
> > Maarten
> >
> > --------------------------
> > Maarten L. Buis
> > Institut fuer Soziologie
> > Universitaet Tuebingen
> > Wilhelmstrasse 36
> > 72074 Tuebingen
> > Germany
> >
> > http://www.maartenbuis.nl
> > --------------------------
> >
> >
> 
> -- 
> Mary Ellen Mackesy-Amiti, Ph.D.
> Research Assistant Professor
> Community Outreach Intervention Projects (COIP)
> School of Public Health m/c 923
> Division of Epidemiology and Biostatistics
> University of Illinois at Chicago
> ph. 312-355-4892
> fax: 312-996-1450
> 



*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

