st: Re: adjusted r square
Ranjita majumder Singh wrote:
> Thank you for your response. I had been told to use adjusted R2 because my
> R2 was decreasing when I was adding new variables. Hence I have to use
> adjusted R2.
This wasn't your question, but simply switching to the adjusted R2 because R2
had *decreased* after you added a new variable is definitely not a good idea.
In fact, R2 *must* increase (or at least stay the same) after you add a new
variable, as long as the sample stays the same. If it decreases, that is a
clear indication that the new variable has missing values, so observations
that were included in the reduced model drop out of the extended one.
Before looking at the adjusted R2 you should make sure that both models are
estimated on the same set of observations. I leave it up to you whether you
then want to use the R2 or the adjusted R2 to compare the models.
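To illustrate the point, here is a minimal sketch in Python rather than Stata
(the data, the "missing" indicator, and the 40% missingness rate are all
made up for the demonstration): on the same observations a nested model's R2
can never fall when a regressor is added, but once missing values shrink the
sample, the two R2 values are no longer comparable.

```python
import numpy as np

def r_squared(y, X):
    """R^2 from an OLS fit of y on X (X must include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)              # pure noise, unrelated to y
y = 1 + 2 * x1 + rng.normal(size=n)

ones = np.ones(n)
X_small = np.column_stack([ones, x1])
X_big = np.column_stack([ones, x1, x2])

# On the SAME observations, adding a regressor can never lower R^2.
assert r_squared(y, X_big) >= r_squared(y, X_small) - 1e-12

# But if x2 has missing values, those rows are dropped before fitting the
# bigger model; the sample changes, and its R^2 can come out lower than
# the smaller model's R^2 on the full sample.
miss = rng.random(n) < 0.4           # pretend 40% of x2 is missing
keep = ~miss
r2_small_full = r_squared(y, X_small)                  # full sample
r2_big_reduced = r_squared(y[keep], X_big[keep])       # reduced sample
print(r2_small_full, r2_big_reduced)
```

In Stata the analogous safeguard is to restrict both regressions to the same
estimation sample (e.g. via `e(sample)` from the larger model) before
comparing any fit statistics.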
However, as an aside: I do not find the arguments for the adjusted R2 very
convincing. It is sometimes said that you have to be punished for including
additional variables in a model. But why? Because R2 increases? Why do I
need to be punished for this? It is just a simple fact that I can explain
more variance with an additional variable. Punishment, and especially the
amount of punishment, is pure metaphysics. The set of control variables
should be compiled on theoretical grounds alone. If your model contains
variables that should be excluded on theoretical grounds, exclude them (or
you will get punished by your reviewer). Likewise, if your model does not
include a variable that should be there on theoretical grounds, include it
(or your reviewer will punish you as well).
Needless to say, your reviewer might still punish you for not using the
adjusted R2. ;-)