st: Re: adjusted r square
When thinking about why ordinary r^2 is nondecreasing in k, it is
useful to compare a model with k regressors to one with k+1.
Least squares fits n observations by solving the k normal equations
for the k unknown coefficients. When you add a (k+1)st parameter,
the data could decide that the optimal value of that parameter is
exactly zero, but that will happen with probability approaching zero
(as Richard says, due to sampling variation), so the (k+1)st
estimated parameter will generally be nonzero. Forcing it to be
zero is restricted least squares (-cnsreg-), and the cost of the
constraint is non-negative: e'e will rise or stay the same, and
the probability that it stays exactly the same approaches zero.
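That restricted-vs-unrestricted comparison is easy to verify
numerically. A minimal sketch in Python/NumPy (standing in for
Stata here; the data and variable names are mine, invented for
illustration):

```python
import numpy as np

# Simulated data: n observations, an intercept plus one regressor.
rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)
z = rng.normal(size=n)  # the hypothetical (k+1)st regressor, pure noise

def ssr(X, y):
    """Sum of squared residuals (e'e) from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

unrestricted = ssr(np.column_stack([X, z]), y)  # free coefficient on z
restricted = ssr(X, y)   # same as constraining z's coefficient to zero
print(unrestricted <= restricted)  # True: the constraint cannot lower e'e
```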
A curiosity about adjusted r^2 is that if the (k+1)st variable
enters with |t| > 1.0, adj r^2 will rise, and if |t| < 1.0 it will
fall. As 1.0 is well below any conventional critical value on
Student's t distribution, a fall in adj r^2 at the margin suggests
that the (k+1)st variable is truly uninformative conditional on the
k regressors.
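The |t| > 1 rule is an exact algebraic equivalence, so it can be
checked on any simulated dataset. A sketch in Python/NumPy (again
standing in for Stata; data, seed, and names are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3  # k parameters including the constant
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)
z = rng.normal(size=n)  # candidate (k+1)st regressor

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def adj_r2(e, y, k):
    n = len(y)
    sst = np.sum((y - y.mean()) ** 2)
    return 1 - (e @ e / (n - k)) / (sst / (n - 1))

_, e0 = fit(X, y)
Xz = np.column_stack([X, z])
beta1, e1 = fit(Xz, y)

# t statistic on the added coefficient
s2 = (e1 @ e1) / (n - (k + 1))
t = beta1[-1] / np.sqrt(s2 * np.linalg.inv(Xz.T @ Xz)[-1, -1])

rises = adj_r2(e1, y, k + 1) > adj_r2(e0, y, k)
print((abs(t) > 1.0) == rises)  # True: adj r^2 rises iff |t| > 1
```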
Kit Baum, Boston College Economics
An Introduction to Modern Econometrics Using Stata:
On Feb 21, 2007, at 2:33 AM, Richard wrote:
Incidentally, I'm assuming adjusted R^2 does what it purports to
do. Its formula is something I've always just taken on blind
faith. But regardless of how well it works, I think the rationale
for it makes sense.
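For what it's worth, the formula in question is
adj r^2 = 1 - (1 - r^2)(n - 1)/(n - k), with k counting the
constant; it is the same thing as replacing the sums of squares in
r^2 with their degrees-of-freedom-corrected variance estimates. A
quick numerical check of that equivalence (Python sketch; the
simulated data are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 3  # k parameters including the constant
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([2.0, 1.0, -1.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
sst = np.sum((y - y.mean()) ** 2)

r2 = 1 - (e @ e) / sst
adj_r2_textbook = 1 - (1 - r2) * (n - 1) / (n - k)
adj_r2_variance = 1 - ((e @ e) / (n - k)) / (sst / (n - 1))
print(np.isclose(adj_r2_textbook, adj_r2_variance))  # True
```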