Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


From: "Lachenbruch, Peter" <Peter.Lachenbruch@oregonstate.edu>

To: "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>

Subject: RE: st: Testing of differences in R-square

Date: Wed, 31 Mar 2010 08:09:13 -0700

To amplify a bit - this is specifically demonstrated in Draper and Smith as "extra SS" - and most likely in others. My initial reaction was the same as Steve's, but by the time I read it, the answer was already on the list. The use of e(sample) was an idea that hadn't occurred to me (of course, all data in regression are complete! (-:) )

Tony

Peter A. Lachenbruch
Department of Public Health
Oregon State University
Corvallis, OR 97330
Phone: 541-737-3832
FAX: 541-737-4001

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Steve Samuels
Sent: Tuesday, March 30, 2010 3:24 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Testing of differences in R-square

"...assessing a difference between two adjusted R-squares is equivalent to assessing a difference between the mean square errors."

I should clarify: this statement is true because the R-squares are being computed on the same data and for the same response variable. To ensure this condition, restrict the sample to observations with non-missing values for all predictors in the two regressions.

On Tue, Mar 30, 2010 at 10:24 AM, Steve Samuels <sjsamuels@gmail.com> wrote:
> Vuong's test is likelihood based and corrects for the differing number
> of parameters in the two models. For your problem you can directly
> compare adjusted R-squares, which also correct for the number of
> parameters. However, assessing a difference between two adjusted
> R-squares is equivalent to assessing a difference between the mean
> square errors. So I suggest you bootstrap the difference in log mean
> square errors. Because you want to bootstrap two regressions, you'll
> need to write your own program.
See:
http://www.ats.ucla.edu/stat/Stata/faq/ownboot.htm

Steve

Steven Samuels
sjsamuels@gmail.com
18 Cantine's Island
Saugerties NY 12477 USA
Voice: 845-246-0774
Fax: 206-202-4783

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
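[A minimal sketch of the program Steve describes. The models shown (using variables from Stata's shipped -auto- dataset) are placeholders for illustration only; substitute your own response and predictor lists. The -keep if !missing(...)- step enforces the common estimation sample discussed above, and e(rmse)^2 recovers the mean square error after -regress-.]

```stata
* Bootstrap the difference in log mean square errors of two
* non-nested regressions fit on a common sample (sketch only).
capture program drop logmsediff
program define logmsediff, rclass
    * Model 1: placeholder predictors
    quietly regress price mpg weight
    scalar mse1 = e(rmse)^2
    * Model 2: placeholder predictors
    quietly regress price length turn
    scalar mse2 = e(rmse)^2
    return scalar diff = ln(mse1) - ln(mse2)
end

sysuse auto, clear
* Restrict to observations non-missing on the response and on
* all predictors appearing in either model
keep if !missing(price, mpg, weight, length, turn)

bootstrap diff=r(diff), reps(1000) seed(12345): logmsediff
estat bootstrap, percentile
```

A percentile confidence interval for the log-MSE difference that excludes zero would indicate a difference in fit between the two models.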

**References**:

- **st: Testing of differences in R-square** *From:* <Tonny.Stenheim@hibu.no>
- **Re: st: Testing of differences in R-square** *From:* Steve Samuels <sjsamuels@gmail.com>
- **Re: st: Testing of differences in R-square** *From:* Steve Samuels <sjsamuels@gmail.com>
