
Re: st: Why F-test with regression output

From   John Antonakis <>
Subject   Re: st: Why F-test with regression output
Date   Thu, 05 May 2011 13:57:33 +0200


The F-test for all betas = 0 is useful only if it is theoretically meaningful; otherwise, it doesn't mean much. Suppose I want to estimate the effect of x on y, and x is a "new kid on the block"--so I stick in a whole bunch of controls. It is possible that the overall F-test is not significant because most of the controls don't do much. OK, you'll say, then they were not well selected; however, if theory suggests that we must partial out the variance due to those controls, and the coefficient of x is significant while the F-test is not, I think these results are still very meaningful.

I too think, as Joerg suggested, that the importance attached to the F-test probably stems from psychological experimental research, where one or two variables were exogenously manipulated, so the F-test there would indicate whether the experiment worked (though again, one might control for a competing treatment, or for several competing placebo treatments, that do not work and could render the F-test nonsignificant).



Prof. John Antonakis
Faculty of Business and Economics
Department of Organizational Behavior
University of Lausanne
Internef #618
CH-1015 Lausanne-Dorigny
Tel ++41 (0)21 692-3438
Fax ++41 (0)21 692-3305

Associate Editor
The Leadership Quarterly

On 05.05.2011 06:15, Richard Williams wrote:
At 04:19 PM 5/4/2011, Steven Samuels wrote:

Nick, I've seen examples where every regression coefficient was non-significant (p>0.05), but the F-test rejected the hypothesis that all were zero. This can happen even when the predictors are uncorrelated. So I don't consider the test superfluous.


I also find the omnibus test helpful.

If, say, there were several p-values of .06, it is quite likely that at least one effect differs from 0.

If variables are highly correlated, the omnibus F may correctly tell you that at least one effect differs from 0, even if you can't tell for sure which one it is.

In both of the above cases, if you looked only at the p-values for individual coefficients, you might erroneously conclude that no effects differ from zero when it is more likely that at least one does.
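The collinearity case above is easy to reproduce. The following is a minimal sketch (not from the thread, written in Python rather than Stata for illustration): two nearly collinear predictors both affect y, so each coefficient has a badly inflated standard error, yet the overall F-test detects the joint effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # x2 is nearly collinear with x1
y = x1 + x2 + rng.normal(size=n)      # both effects are real

# OLS by least squares: intercept, x1, x2
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df_resid = n - X.shape[1]
s2 = resid @ resid / df_resid
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
p_t = 2 * stats.t.sf(np.abs(beta / se), df_resid)   # individual t-tests

# Omnibus F: model vs. intercept-only
ss_res = resid @ resid
ss_tot = np.sum((y - y.mean()) ** 2)
F = ((ss_tot - ss_res) / 2) / (ss_res / df_resid)
p_F = stats.f.sf(F, 2, df_resid)

# Typically neither slope reaches .05 here, while p_F is tiny
print("slope p-values:", p_t[1:], " overall F p-value:", p_F)
```

Because the t-tests split credit between two predictors that carry almost the same information, their standard errors blow up, while the F-test asks only whether the pair jointly explains anything.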

If the omnibus F isn't significant, there may not be much point in looking at individual variables. If you have 20 variables in the model, one may be significant at the .05 level by chance alone, but the omnibus F probably won't be. That is, a fishing expedition across variables could turn up a few statistically significant coefficients even when the omnibus F is not significant.
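The "by chance alone" point can be quantified: with 20 independent null predictors each tested at the .05 level, the chance of at least one spurious "significant" result is roughly

```python
# Probability of at least one false positive among k independent
# null tests, each at the .05 level
k = 20
p_any = 1 - 0.95 ** k
print(round(p_any, 3))   # about 0.642
```

so nearly two-thirds of such fishing expeditions net at least one spurious catch, which is exactly the situation the omnibus F guards against.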

Incidentally, you might just as easily ask why the model chi-square gets reported in routines like logistic and ordinal regression. The main advantage of the model chi-square over the omnibus F is that it is easier to use when comparing constrained and unconstrained models (e.g., if model 1 has x1 and x2, and model 2 has x1, x2, x3, and x4, I can easily use the model chi-squares to test whether the effects of x3 and/or x4 significantly differ from 0).
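The nested-model comparison Williams describes is a likelihood-ratio test: twice the difference in log-likelihoods is chi-square distributed with degrees of freedom equal to the number of added parameters. A sketch with hypothetical log-likelihood values (the numbers are made up for illustration, not from the thread):

```python
from scipy.stats import chi2

# Hypothetical log-likelihoods for the two nested models
ll_model1 = -250.0   # model 1: x1, x2
ll_model2 = -245.2   # model 2: x1, x2, x3, x4
lr = 2 * (ll_model2 - ll_model1)   # likelihood-ratio chi-square = 9.6
p = chi2.sf(lr, df=2)              # 2 constraints tested: x3 = x4 = 0
print(round(lr, 1), round(p, 4))
```

A small p here rejects the hypothesis that x3 and x4 both have zero effect, which is the comparison of constrained and unconstrained models described above.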

Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
HOME:   (574)289-5227
EMAIL:  Richard.A.Williams.5@ND.Edu

*   For searches and help try:
