# st: Re: adjusted r square

From: Kit Baum
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: adjusted r square
Date: Wed, 21 Feb 2007 07:10:25 -0500

When thinking about why ordinary r^2 is nondecreasing in k, it is useful to compare a model with k regressors to one with k+1. Least squares fits n equations with k unknowns. When you add a (k+1)st parameter, the data could decide that the optimal value of that parameter is exactly zero, but that happens with probability zero (as Richard says, due to sampling variation), so the (k+1)st estimated parameter will generally be nonzero. Forcing it to be zero is restricted least squares (-cnsreg-), and the cost of the constraint is non-negative: e'e will rise or stay the same, and it stays the same only in that probability-zero case.
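A quick numerical sketch of this point (in Python with NumPy rather than Stata, since that is easy to run anywhere): fit y on k regressors, then add a pure-noise (k+1)st regressor and observe that R^2 does not fall, because e'e cannot rise when the parameter space grows.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 100, 3

# Simulated data: y depends only on the first k regressors.
X = rng.standard_normal((n, k))
y = X @ np.array([1.0, -0.5, 0.25]) + rng.standard_normal(n)

def r_squared(X, y):
    """R^2 from an OLS fit of y on X plus a constant."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - (resid @ resid) / tss

r2_k = r_squared(X, y)

# Add a (k+1)st regressor that is pure noise: e'e can only fall
# or stay the same, so R^2 is nondecreasing.
X_plus = np.column_stack([X, rng.standard_normal(n)])
r2_k1 = r_squared(X_plus, y)

print(r2_k, r2_k1)
assert r2_k1 >= r2_k
```

Dropping the added column back to a zero coefficient is exactly the restricted regression that -cnsreg- would fit, which is why the unrestricted R^2 can never be smaller.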

A curiosity about adjusted r^2 is that adding a (k+1)st variable raises adj r^2 if and only if that variable's coefficient has |t| > 1.0, and lowers it when |t| < 1.0. As 1.0 is well below any commonly used critical value of Student's t distribution, a fall in adj r^2 at the margin suggests that the (k+1)st variable is truly uninformative conditional on the k regressors.
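This |t| > 1 equivalence is an exact algebraic identity, not an approximation, and it can be checked numerically. The sketch below (Python/NumPy, not Stata; the helper `fit` is my own illustrative function) repeatedly adds a random candidate regressor and confirms that adjusted R^2 rises exactly when the added coefficient's |t| exceeds 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.standard_normal((n, 2))
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)

def fit(X, y):
    """OLS with a constant: returns (adjusted R^2, t-statistics)."""
    n, k = X.shape
    Xc = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    dof = n - k - 1                       # residual degrees of freedom
    s2 = (resid @ resid) / dof            # residual variance estimate
    se = np.sqrt(s2 * np.diag(np.linalg.inv(Xc.T @ Xc)))
    tss = np.sum((y - y.mean()) ** 2)
    adj_r2 = 1.0 - s2 / (tss / (n - 1))
    return adj_r2, beta / se

adj_small, _ = fit(X, y)

for _ in range(200):
    z = rng.standard_normal(n)            # candidate (k+1)st regressor
    adj_big, t = fit(np.column_stack([X, z]), y)
    if abs(abs(t[-1]) - 1.0) > 1e-8:      # skip knife-edge |t| = 1 ties
        # adj R^2 rises exactly when the added coefficient has |t| > 1
        assert (adj_big > adj_small) == (abs(t[-1]) > 1.0)
```

The identity follows because adj r^2 = 1 - s^2 / (TSS/(n-1)), so adj r^2 rises exactly when the residual variance s^2 falls, and adding one regressor lowers s^2 precisely when its |t| > 1.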

Kit Baum, Boston College Economics
http://ideas.repec.org/e/pba1.html
An Introduction to Modern Econometrics Using Stata:
http://www.stata-press.com/books/imeus.html

On Feb 21, 2007, at 2:33 AM, Richard wrote:

> Incidentally, I'm assuming adjusted R^2 does what it purports to
> do.  Its formula is something I've always just taken on blind
> faith.  But regardless of how well it works, I think the rationale
> for it makes sense.