st: Testing serial correlation by predicted residuals
From: Waedlich Felix <[email protected]>
To: [email protected]
Subject: st: Testing serial correlation by predicted residuals
Date: Wed, 9 Mar 2011 22:28:35 +0100
Hey Statalist,
I still have problems with my unbalanced panel (140 countries, 27 years) when trying to detect whether my model suffers from serial correlation.
The -xtserial- command indicates autocorrelation even after including an LDV and time effects. Other tests (such as -estat dwatson-) don't work because of my unbalanced panel.
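For reference, the Wooldridge test was run roughly along these lines (-xtserial- is Drukker's user-written command from the Stata Journal, installable via -findit xtserial-; dv, x1, x2, countryid, and year stand in for my actual variables):

xtset countryid year
gen ldv = L.dv
* year dummies generated explicitly, since -xtserial- predates factor-variable notation
tab year, gen(yr)
xtserial dv ldv x1 x2 yr*
* the null of no first-order autocorrelation is still rejected with ldv and the year dummies included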
I found an alternative solution from Pluemper/Troeger:
1. estimate the model with -xtpcse- (without the LDV);
2. run -predict xb if e(sample)- to obtain the fitted values and -gen residuals = DV - xb- to obtain the residuals;
3. run -xtpcse residuals independent_variables ... lagged_residuals-.
If the lagged residuals are significant (by t-test), my model suffers from autocorrelation. I then add the LDV in step 3, look at the t-test again, and check whether the null of independent errors still has to be rejected.
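Concretely, the steps look roughly like this in Stata (dv, x1, x2, countryid, and year are just placeholders for my variables; the year dummies mirror the time effects in my model):

xtset countryid year

* 1. estimate the model without the LDV
xtpcse dv x1 x2 i.year

* 2. residuals = observed dv minus the fitted values
predict xb if e(sample), xb
gen residuals = dv - xb

* 3. regress the residuals on the regressors and their own lag
gen lagged_residuals = L.residuals
xtpcse residuals x1 x2 i.year lagged_residuals

* a significant t-statistic on lagged_residuals points to serial correlation;
* re-running step 3 with the LDV (L.dv) added shows whether that finding survives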
In my case, the lagged residuals are significant (in the residual regression without the LDV) and insignificant after adding the LDV.
Do you think this constitutes a valid test for autocorrelation?
And don't get me wrong, unit-root tests are probably a good idea for my data set, but I cannot get into them now since I am running out of time.
Best regards,
Felix