Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


From: Christopher Baum <kit.baum@bc.edu>
To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject: Re: Re: st: RE: dfuller: why do I get different results?
Date: Sat, 19 Nov 2011 09:22:09 -0500

<>
On Nov 19, 2011, at 2:33 AM, Yuval wrote:

> Fisher-type unit-root test for reduct_per
> Based on augmented Dickey-Fuller tests
> -----------------------------------------
> Ho: All panels contain unit roots            Number of panels       =  9547
> Ha: At least one panel is stationary         Avg. number of periods = 53.19
>
> AR parameter: Panel-specific                 Asymptotics: T -> Infinity
> Panel means:  Included
> Time trend:   Not included
> Drift term:   Not included                   ADF regressions: 1 lag
> ------------------------------------------------------------------------------
>                                     Statistic      p-value
> ------------------------------------------------------------------------------
>  Inverse chi-squared(19060)  P      8814.2739       1.0000
>  Inverse normal              Z        60.7097       1.0000
>  Inverse logit t(46659)      L*       55.5908       1.0000
>  Modified inv. chi-squared   Pm      -52.4767       1.0000
> ------------------------------------------------------------------------------
>  P statistic requires number of panels to be finite.
>  Other statistics are suitable for finite or infinite number of panels.
> ------------------------------------------------------------------------------
>
> I'm happy with the results, because they show that tenants could not
> anticipate long-run mean reduction rates.

I would not draw great comfort from these findings. The huge number of panels in the test, which is based on T -> infinity rather than N -> infinity asymptotics, leads to a p-value of 1.0 for every form of the test statistic. All that means is that the data cannot possibly reject the null that ALL panels have unit roots. That could well result from a sample in which 9,500 panels did and 47 panels didn't, but the test does not have the power to reject. I would run the test -- with a drift term, and probably more than one lag in the DF regression -- on a relatively small number of panels, perhaps chosen at random. If you look at the example in the -xtunitroot fisher- help file, a rejection arises when the Z-stat or L*-stat takes on negative values (just as with the standard D-F regression).
It might well be that if you looked at, say, 150 panels, you would find that the test has some power. I am always suspicious of p-values of 1.0000.

Kit

Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
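Kit's suggestion -- rerun the test on a random subset of roughly 150 panels, with a drift term and more lags -- could be sketched in Stata along these lines. The variable name `reduct_per` comes from the quoted output; the panel identifier `panelid`, the seed, and the choice of two lags are assumptions for illustration only:

```stata
* Sketch only: draw 150 panels at random and re-run the Fisher test
* with a drift term and two ADF lags. Substitute your own panel id.
preserve
set seed 12345
egen paneltag = tag(panelid)               // flag one observation per panel
gen double u = runiform() if paneltag      // one random draw per panel
egen double panelu = max(u), by(panelid)   // spread that draw to all obs of the panel
egen long prank = group(panelu)            // rank panels by their draw
keep if prank <= 150                       // keep 150 randomly chosen panels
xtunitroot fisher reduct_per, dfuller drift lags(2)
restore
```

With only 150 panels, a genuinely stationary subset should show up as negative Z and L* statistics with small p-values, rather than the uninformative 1.0000 produced by the full 9,547-panel sample.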

**Follow-Ups**:
- **Re: Re: st: RE: dfuller: why do I get different results?** *From:* Yuval Arbel <yuval.arbel@gmail.com>
