Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down at the end of May, and its replacement, **statalist.org**, is already up and running.


From: John Antonakis <John.Antonakis@unil.ch>

To: statalist@hsphsun2.harvard.edu

Subject: Re: st: Why F-test with regression output

Date: Thu, 05 May 2011 13:57:33 +0200

Hi:

Best,
J.

__________________________________________
Prof. John Antonakis
Faculty of Business and Economics
Department of Organizational Behavior
University of Lausanne
Internef #618
CH-1015 Lausanne-Dorigny
Switzerland
Tel ++41 (0)21 692-3438
Fax ++41 (0)21 692-3305
http://www.hec.unil.ch/people/jantonakis

Associate Editor
The Leadership Quarterly
__________________________________________

On 05.05.2011 06:15, Richard Williams wrote:

At 04:19 PM 5/4/2011, Steven Samuels wrote:

Nick, I've seen examples where every regression coefficient was non-significant (p > 0.05), but the F-test rejected the hypothesis that all were zero. This can happen even when the predictors are uncorrelated. So I don't consider the test superfluous.

Steve

I also find the omnibus test helpful.

If, say, there were a lot of p-values of .06, it is probably very likely that at least one effect is different from 0.

If variables are highly correlated, the omnibus F may correctly tell you that at least one effect differs from 0, even if you can't tell for sure which one it is.

In both of the above cases, if you just looked at p-values for individual coefficients, you might erroneously conclude that no effects differ from zero when it is more likely that at least one effect does.

If the omnibus F isn't significant, there may not be much point in looking at individual variables. If you have 20 variables in the model, one may be significant at the .05 level just by chance alone, but the omnibus F probably won't be. That is, a fishing expedition for variables could lead to a few coefficients that are statistically significant but the omnibus F isn't.

Incidentally, you might just as easily ask why the Model Chi Square gets reported in routines like logistic and ordinal regression. The main advantage of Model Chi Square over omnibus F is that Model Chi Square is easier to use when comparing constrained and unconstrained models (e.g. if model 1 has x1 and x2, and model 2 has x1, x2, x3, and x4, I can easily use the model chi-squares to test whether or not the effects of x3 and/or x4 significantly differ from 0).

-------------------------------------------
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
HOME: (574)289-5227
EMAIL: Richard.A.Williams.5@ND.Edu
WWW: http://www.nd.edu/~rwilliam

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
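The two points above can be illustrated numerically. The thread itself is about Stata and contains no code, so the following is only a hypothetical sketch in Python/NumPy with made-up data and a made-up seed: two near-collinear predictors whose individual t-statistics may fall short of the 5% critical value even though the omnibus F-statistic is large, plus Richard's "20 variables" probability calculation.

```python
import numpy as np

# Hypothetical data (not from the thread): x2 is nearly collinear with x1,
# so each slope's standard error is inflated while the model as a whole
# still explains a lot of variance.
rng = np.random.default_rng(12345)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)          # near-collinear with x1
y = 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])    # design matrix with intercept
k = X.shape[1] - 1                           # number of slope coefficients
df_resid = n - k - 1
beta = np.linalg.solve(X.T @ X, X.T @ y)     # OLS estimates
resid = y - X @ beta
sigma2 = resid @ resid / df_resid
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta[1:] / se[1:]                  # t-statistics for the slopes

# Omnibus F in its R-squared form: F = (R^2 / k) / ((1 - R^2) / (n - k - 1))
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - resid @ resid / ss_tot
F = (r2 / k) / ((1 - r2) / df_resid)

# For reference, approximate 5% critical values: |t(57)| ~ 2.00, F(2, 57) ~ 3.16
print("slope t-statistics:", np.round(t_stats, 2))
print("omnibus F:", round(F, 2))

# Richard's "20 variables" point: the chance that at least one of 20
# independent 5%-level tests rejects by luck alone is 1 - 0.95^20 (~0.64),
# while the omnibus F keeps its overall 5% error rate.
print("P(at least one false positive):", round(1 - 0.95 ** 20, 3))
```

Whether the individual t's clear the critical value here depends on the simulated draw; the point of the sketch is only the mechanism: collinearity inflates the standard errors that t-tests use, while the omnibus F compares the whole model to the intercept-only model and is unaffected by how the explained variance is split between x1 and x2.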


**References**:

- **st: Why F-test with regression output**, *From:* Nick Winter <nwinter@virginia.edu>
- **Re: st: Why F-test with regression output**, *From:* Steven Samuels <sjsamuels@gmail.com>
- **Re: st: Why F-test with regression output**, *From:* Richard Williams <richardwilliams.ndu@gmail.com>
