



st: -xtnbreg- vs. cluster-robust -xtpoisson-


From   Roman Wörner <[email protected]>
To   [email protected]
Subject   st: -xtnbreg- vs. cluster-robust -xtpoisson-
Date   Tue, 08 Jan 2013 14:58:56 +0100

Dear all,

I'm a doctoral student at WU Vienna, and as part of my dissertation project I'm about to estimate a panel model (the panel is unbalanced, with N=328 and T=8) with a non-negative integer count variable (patent applications) as the DV. I have therefore done some reading on -xtnbreg- and -xtpoisson-, but I still struggle with some open issues.

I would really appreciate any help with the following two questions:

1) The data show signs of over-dispersion: the standard deviation is about 3 to 4 times larger than the mean. Based on that, my first choice would be -xtnbreg-. However, -xtnbreg- does not offer cluster-robust standard errors.
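In case it helps, the kind of check behind that statement looks roughly like the lines below; dv and year are placeholder names for my count outcome and time variable.

* sketch only: dv and year are placeholder names
* overall mean, standard deviation, and variance of the count outcome
summarize dv, detail
* the same statistics by year, to see whether the over-dispersion is stable over time
tabstat dv, by(year) statistics(mean sd variance count)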

In Cameron/Trivedi (2009), Microeconometrics Using Stata, page 627, they write: "The negative binomial has the attraction that, unlike Poisson, the estimator is designed to explicitly handle overdispersion, and count data are usually overdispersed. This may lead to improved efficiency in estimation and a default estimate of the VCE that should be much closer to the cluster-robust estimate of the VCE, unlike for Poisson panel commands. At the same time, the Poisson panel estimators rely on weaker distributional assumptions - essentially, correct specification of the mean - and it may be more robust to use the Poisson panel estimators with cluster-robust standard errors."

If I understand them correctly, they recommend using -xtpoisson- with cluster-robust standard errors, since the default standard errors of -xtnbreg- are expected to be too low and -xtpoisson- with cluster-robust standard errors should be able to deal with over-dispersion. (If I fit a model with -xtnbreg- and fixed effects, I get a significant result with a p-value of 0.067; if I fit the same model with -xtpoisson-, fixed effects, and cluster-robust standard errors, the result is insignificant with a p-value of 0.125.) Since I'm rather new to Stata and to statistics in general, I'd be very grateful if some of you could comment on Cameron/Trivedi's (2009) statement/recommendation. Are there any other options?
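For concreteness, the two fits I am comparing look roughly like the sketch below (panelid, dv, the iv* variables, and year are placeholder names; iv1-iv8 assumes the regressors are stored consecutively, and I write the year effects with i.year factor notation rather than hand-made dummies):

* sketch of the two specifications being compared; variable names are placeholders
xtset panelid year
* fixed-effects negative binomial with default standard errors (the fit with p = 0.067)
xtnbreg dv iv1-iv8 i.year, fe
* fixed-effects Poisson with cluster-robust standard errors (the fit with p = 0.125)
xtpoisson dv iv1-iv8 i.year, fe vce(robust)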

2) With -xtnbreg- it is possible to calculate bootstrap standard errors. However, when using this option, I receive the following error message: "insufficient observations to compute bootstrap standard errors no results will be saved".

Basically, I am trying to fit the following model:

xtnbreg dv iv1 iv2... iv8 yeardummy1 yeardummy2 ... yeardummy6, fe vce(bootstrap)

As mentioned, this results in the error message described above. If I remove the year dummies from the model, everything is fine and the bootstrap option works. Since the data show a significant negative time trend, I need the year dummies in the model. Does bootstrapping have a problem with dummy variables, or did I make another (naive) mistake? Assuming the bootstrap option worked, would the cluster-robust -xtpoisson- still be preferable to the bootstrap -xtnbreg-?
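For completeness, the bootstrap call I have in mind, written with i.year factor notation instead of hand-made dummies, would look like the line below; the reps() and seed() values are purely illustrative, and I don't know whether the factor notation changes anything for the bootstrap.

* sketch only: variable names are placeholders, reps() and seed() values are illustrative
xtnbreg dv iv1-iv8 i.year, fe vce(bootstrap, reps(400) seed(12345))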

Many thanks in advance and best regards,

Roman

