Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
Re: st: xtnbreg, nbreg, and tests of assumptions
From: "Mary E. Mackesy-Amiti" <[email protected]>
To: [email protected]
Subject: Re: st: xtnbreg, nbreg, and tests of assumptions
Date: Wed, 15 Dec 2010 11:03:07 -0600
I think you meant to say that your *dependent* variable is count with 
overdispersion, and your *independent* variable is time invariant.  
Independent variables predict dependent variables.
Please post the -xtnbreg- command you used and the results you find 
questionable.
On 12/15/2010 7:30 AM, Dalhia wrote:
Hi,
I am trying to figure out whether I should use -nbreg- (with
correction for autocorrelation and heteroskedasticity) or -xtnbreg- (with
random effects). My independent variable is count with significant overdispersion, and I have panel data (cross-sectional time series). One of my main dependent variables is time invariant, so I cannot use -xtnbreg- with fixed effects. -xtnbreg- with random effects is giving me some funny results that are hard to believe. How should I decide which one to use (-xtnbreg- or -nbreg-)? Also, are there tests to check whether the assumptions of these models are satisfied in my data?
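For concreteness, the two specifications I am comparing look something like this (the variable names depvar, x1, x2, panelvar, and year are placeholders for my actual variables):

```stata
* Pooled negative binomial with cluster-robust standard errors:
nbreg depvar x1 x2, vce(cluster panelvar)

* Random-effects panel negative binomial:
xtset panelvar year
xtnbreg depvar x1 x2, re
```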
Finally, I have two independent variables predicted by the same dependent variables, but I can't find a version of SUR (seemingly unrelated regression) appropriate for negative binomial. I am not really interested in cross-equation testing. If I don't do a seemingly unrelated regression, does that bias the coefficients, or does it only make them inefficient?
Thanks so much. I really appreciate your help.
Dalhia
--- On Wed, 12/15/10, Maarten Buis <[email protected]> wrote:
From: Maarten Buis <[email protected]>
Subject: Re: st: Difference between xtlogit, xtmelogit, gllamm
To: [email protected]
Date: Wednesday, December 15, 2010, 10:28 AM
--- On Wed, 15/12/10, Rajaram Subramanian Potty wrote:
I have event history data and this data has been converted
into discrete time to fit discrete time hazard model.
Now, I want to fit a multilevel model. But there are three
different procedures: xtlogit, xtmelogit, and gllamm.
I want to know which procedure is more appropriate for
analysing the discrete time data.
All three will do for a basic multilevel model for the odds
(not the hazard) of survival. If you want to model a
multilevel model for the hazard of survival you can use
-gllamm- with the cll link function. The difference between
-xtlogit- and -xtmelogit- is that the latter can accommodate
more complex multilevel structures.
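For example, a random-intercept discrete-time hazard model in -gllamm- could look something like this (y, x1, x2, and id are placeholders for the event indicator, covariates, and grouping variable):

```stata
* Discrete-time hazard with a complementary log-log link and a
* random intercept for each subject; the data must be in
* person-period (discrete-time) format.
gllamm y x1 x2, i(id) link(cll) family(binom) adapt
```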
Hope this helps,
Maarten
--------------------------
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Germany
http://www.maartenbuis.nl
--------------------------
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
*
--
Mary Ellen Mackesy-Amiti, Ph.D.
Research Assistant Professor
Community Outreach Intervention Projects (COIP)
School of Public Health m/c 923
Division of Epidemiology and Biostatistics
University of Illinois at Chicago
ph. 312-355-4892
fax: 312-996-1450