



st: RE: Testing serial correlation by predicted residuals


From   Nick Cox <[email protected]>
To   "'[email protected]'" <[email protected]>
Subject   st: RE: Testing serial correlation by predicted residuals
Date   Wed, 9 Mar 2011 21:40:01 +0000

Can't comment on the specific merits of this, but I'd always plot residuals vs lagged residuals too. 
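
(For illustration, a minimal sketch of such a plot, not from the original post: it assumes the data are -xtset- and that the residuals are already stored in a hypothetical variable named res.)

    gen lag_res = L.res       // lagged residuals; L. needs -xtset- or -tsset- data
    scatter res lag_res       // plot residuals vs lagged residuals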

Nick 
[email protected] 

Waedlich Felix

I still have problems with my unbalanced panel (140 countries, 27 years) when I want to detect whether my model suffers from serial correlation.
The -xtserial- command indicates autocorrelation even after including an LDV and time effects. Other tests (like -estat dwatson-) don't work due to my unbalanced panel.
I found an alternative solution from Pluemper/Troeger:
1. estimate -xtpcse- (without the LDV);
2. -predict xb if e(sample)- and -gen residuals = DV - xb- to generate the residuals;
3. run -xtpcse residuals independent_variables ... lagged_residuals-.
If the lagged residuals are significant (by t-test), my model suffers from autocorrelation. Then add the LDV to step 3, look at the t-test again, and check whether the null of independent errors still has to be rejected.
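
(For illustration, a minimal Stata sketch of this procedure, not code from Pluemper/Troeger; all names here, country, year, dv, x1, x2, are hypothetical placeholders.)

    xtset country year                    // declare panel and time structure
    xtpcse dv x1 x2                       // 1. estimate without the LDV
    predict xb if e(sample)               // 2. linear prediction ...
    gen residuals = dv - xb               //    ... and residuals by hand
    gen lag_res = L.residuals             //    lagged residuals
    xtpcse residuals x1 x2 lag_res        // 3. inspect the t-test on lag_res
    xtpcse residuals L.dv x1 x2 lag_res   //    re-check after adding the LDV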
In my case the lagged residuals are significant (in the residual model without the LDV) and insignificant after adding the LDV.
Do you think this constitutes a valid test for autocorrelation?
And don't get me wrong, unit root tests are probably a good idea for my data set, but I cannot get into them now, since I am kinda running out of time.


