Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.

RE: st: xtnbreg - robustness check and model relevance


From   Simon Falck <simon.falck@abe.kth.se>
To   "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject   RE: st: xtnbreg - robustness check and model relevance
Date   Tue, 15 Jan 2013 12:15:11 +0000

Dear Jay, 

Many thanks for your swift reply, and sorry for my delayed thanks and my fuzzy description of the problem. 

Yes, goodness of fit is the correct description.

In terms of evaluating model relevance, your suggestion of a graphical test is also what I had in mind. However, I am not sure of the correct approach to deriving the predicted number of events after -xtnbreg-. 

The convenient -prcounts- command does not work after -xtnbreg-; it works only after standard count models such as -nbreg-. 

The documentation for -xtnbreg- postestimation points to -predict-, but it seems that -predict name, rate- cannot be used to derive the predicted number of events in a fixed-effects model. It is, however, possible to derive the predicted number of events under the assumption that the fixed effect is zero, using -predict name, nu0-, but I am not sure whether this is the correct approach.
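For concreteness, this is the kind of sequence I have in mind (a minimal sketch; the variable names y, x1, x2, and panelvar are placeholders, not my actual data):

```stata
* declare the panel structure (placeholder variable names)
xtset panelvar year

* fixed-effects negative binomial model
xtnbreg y x1 x2, fe

* predicted number of events, with the fixed effect assumed to be zero
predict yhat, nu0

* quick numeric comparison of observed and predicted counts
summarize y yhat
```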

A less preferred solution would be not to run the empirical model as a panel, but as a standard negative binomial regression using -nbreg-, thereby leaving out the fixed effects, which may bias the results. 

Any suggestions on what is the correct approach in this situation?

Thanks in advance,
/Simon





-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of JVerkuilen (Gmail)
Sent: 10 January 2013 15:30
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: xtnbreg - robustness check and model relevance

On Thu, Jan 10, 2013 at 5:34 AM, Simon Falck <simon.falck@abe.kth.se> wrote:
> Dear Statalist,
>
>
> The relevance or precision of a count model seems often to be 
> described in terms of how close the predicted values are to the 
> observed values, usually by comparing the distribution of 
> probabilities of observed and predicted counts. However, from what I 
> understand, it is not possible to use the command -prcounts- after 
> -xtnbreg-, which is used after -nbreg- to derive predicted values. Any 
> suggestions on what is a reasonable strategy in this case?

I'm not sure what you mean by "robustness test" exactly, but I'll assume you mean a goodness-of-fit test.
More to the point, that approach doesn't actually tell you much about the difference between NB and Poisson, because NB and Poisson will tend to make very similar point predictions. Where they differ is in the level of uncertainty in the model. In general NB will have wider confidence intervals than Poisson, sometimes much wider.

So a reasonable graphical test would be to generate predicted values for important cases and see how often they line up with the corresponding observed values. There are issues with this approach that I can think of, but it's a start.
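One way to sketch such a graphical check in Stata (variable names here are placeholders, and this assumes the -nu0- prediction is acceptable for your purposes):

```stata
* after fitting, e.g., xtnbreg y x1 x2, fe
predict yhat, nu0

* plot predicted against observed counts; points near the
* 45-degree line indicate close agreement
twoway (scatter yhat y) (function x, range(y)), ///
    legend(order(1 "predicted vs. observed" 2 "45-degree line"))
```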

Jay

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/

