That's positive advice.
My own further thought is that adjusted R-squareds are a lousy basis for
comparing two models, even of the same kind. They leave out too much
information.
Nick
On Wed, Feb 6, 2013 at 10:37 AM, John Antonakis <John.Antonakis@unil.ch> wrote:
> I think that the only thing you can do is to bootstrap the R-squareds and
> see if their confidence intervals overlap.
>
> To bootstrap you just do:
>
> E.g.,
>
> sysuse auto
> bootstrap e(r2), seed(123) reps(1000) : reg price mpg weight
>
> You will be interested in one of:
>
> e(r2_w) R-squared within model
> e(r2_o) R-squared overall model
> e(r2_b) R-squared between model
>
> See help xtreg with respect to saved results.
>
> Let's see if others have other ideas.
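[A minimal sketch of the full comparison, extending the example above. The
auto data and the foreign/domestic split stand in for the two time periods,
and e(r2_a), the adjusted R-squared saved by -regress-, replaces e(r2); for
-xtreg- you would bootstrap e(r2_w), e(r2_b), or e(r2_o) instead, and resample
whole panels rather than individual observations.]

* first "period": bootstrap the adjusted R-squared and get its percentile CI
sysuse auto, clear
keep if foreign == 0
bootstrap adjr2 = e(r2_a), seed(123) reps(1000) : regress price mpg weight
estat bootstrap, percentile

* second "period": same model, other subsample
sysuse auto, clear
keep if foreign == 1
bootstrap adjr2 = e(r2_a), seed(456) reps(1000) : regress price mpg weight
estat bootstrap, percentile

If the two percentile intervals do not overlap, that is informal evidence
that the adjusted R-squareds differ.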
> On 06.02.2013 10:22, Panagiotis Manganaris wrote:
>> I need to compare two adjusted R-squareds from the same model for two
>> different periods of time (each spanning several years). So far, I have
>> split my data into two groups, observations belonging to the period
>> 1998-2004 and those belonging to the period 2005-2011. Then I used xtreg
>> on the same model for each group of data. I've derived their adjusted
>> R-squareds, and I want to know whether those two adjusted R-squareds are
>> significantly different from each other.