
RE: st: comparing coefficients across models


From   Cameron McIntosh <cnm100@hotmail.com>
To   STATA LIST <statalist@hsphsun2.harvard.edu>
Subject   RE: st: comparing coefficients across models
Date   Tue, 7 Aug 2012 19:11:16 -0400

Dalhia,

I also agree that one shouldn't treat conventional VIF thresholds as golden rules. You may want to see:

O'Brien, R.M. (2007). A Caution Regarding Rules of Thumb for Variance Inflation Factors. Quality & Quantity, 41, 673–690.
http://web.unbc.ca/~michael/courses/stats/lectures/VIF%20article.pdf

Stine, R.A. (1995). Graphical interpretation of variance inflation factors. The American Statistician, 49(1), 53-56.
http://statistics.wharton.upenn.edu/documents/research/Variance%20inflation%20factors.pdf

Hendrickx, J. (August 16, 2004). COLDIAG2: Stata module to evaluate collinearity in linear regression.
http://ideas.repec.org/c/boc/bocode/s445202.html

Hendrickx, J. (January 2, 2012). Tools for evaluating collinearity: Package ‘perturb’, Version 2.04.
http://cran.r-project.org/web/packages/perturb/perturb.pdf
http://cran.r-project.org/web/packages/perturb/index.html

Friendly, M., & Kwan, E. (2009). Where's Waldo? Visualizing Collinearity Diagnostics. The American Statistician, 63(1), 56–65.
http://www.datavis.ca/papers/viscollin-web.pdf

Cam

> Date: Tue, 7 Aug 2012 18:14:05 -0400
> Subject: Re: Fw: st: comparing coefficients across models
> From: dchoaglin@gmail.com
> To: statalist@hsphsun2.harvard.edu
> 
> Dalhia,
> 
> The VIF that I am familiar with (from literature on regression
> diagnostics) applies separately to each variable in the regression, so
> I'm not sure which variable's VIF you are reporting. (The basic idea
> is that you can regress each predictor on the set of other predictors
> and get an R^2. You can interpret the corresponding 1 - R^2 as the
> "usable fraction" of that predictor. The VIF for a predictor is the
> reciprocal of its 1 - R^2.)
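[A minimal numeric sketch of that auxiliary-regression recipe, not from the thread itself: Python with numpy and made-up data, where x2 is built to be nearly collinear with x1.]

```python
# Sketch: VIF as 1 / (1 - R^2) from regressing each predictor on the others.
# The data below are simulated purely for illustration.
import numpy as np

def vif(X):
    """Return the VIF for each column of the predictor matrix X."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        # auxiliary regression: column j on an intercept plus the other columns
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))   # reciprocal of the "usable fraction"
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))   # x1 and x2 large, x3 near 1
```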
> 
> A VIF greater than 10 is not encouraging, but I don't think of 10 as
> the threshold for high collinearity, with larger values considered
> unacceptable. In your model, the VIF of interest is the one for
> x3*group_dummy. If you have the actual VIF for that predictor, I
> would like to see it, along with the VIFs for the other predictors.
> 
> A "large" VIF is only part of the story. Further analysis, based on
> the singular value decomposition, can show which predictors are
> involved in the near dependency.
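[A sketch of that SVD-based follow-up, again not from the thread: condition indexes and variance-decomposition proportions in the style of Belsley, Kuh, and Welsch, on simulated data with a built-in near dependency between x1 and x2. A large condition index whose row of proportions is high for two or more columns flags those columns as the near-dependent set.]

```python
# Sketch: collinearity diagnostics from the SVD of the column-scaled
# design matrix (Belsley-Kuh-Welsch style). Simulated data for illustration.
import numpy as np

def collin_diag(X):
    """Return condition indexes and variance-decomposition proportions."""
    X = np.asarray(X, dtype=float)
    Xs = X / np.linalg.norm(X, axis=0)        # scale columns to unit length
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_index = s.max() / s                  # one index per singular value
    phi = (Vt.T ** 2) / s**2                  # phi[j, k] = V[j,k]^2 / s[k]^2
    props = phi / phi.sum(axis=1, keepdims=True)   # rows sum to 1
    return cond_index, props

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)      # near dependency with x1
x3 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2, x3])
ci, props = collin_diag(X)
# The largest condition index (last entry, smallest singular value) should
# carry high variance proportions for the x1 and x2 columns only.
```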
> 
> David Hoaglin
> 
> On Tue, Aug 7, 2012 at 10:31 AM, Dalhia <ggs_da@yahoo.com> wrote:
> > David, thanks once again for helping me think this through. I reran with regress, and the VIF is less than 10 when I run the model separately for the two groups, and greater than 10 when I rerun it on the whole sample with the interaction with the dummy. I also checked the multicollinearity for the fixed-effects panel model by running vif, uncentered (same finding as above).
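[Without claiming this is what is happening in Dalhia's data, one common reason an x3*group_dummy term carries a large VIF in the pooled model is simply that x3's mean sits far from zero, so the interaction is nearly a linear function of the dummy; centering x3 before forming the product often brings the VIF down. A simulated sketch in Python (the thread's actual analysis is in Stata):]

```python
# Sketch: VIF of an interaction term before and after centering the
# continuous variable. Data are simulated purely for illustration.
import numpy as np

def vif_one(X, j):
    """VIF of column j of X: regress it on the remaining columns."""
    n = X.shape[0]
    y = X[:, j]
    A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(2)
n = 500
group = rng.integers(0, 2, size=n)           # group dummy
x3 = rng.normal(loc=5.0, size=n)             # mean well away from zero
X_raw = np.column_stack([x3, group, x3 * group])
x3c = x3 - x3.mean()                         # centered version
X_ctr = np.column_stack([x3c, group, x3c * group])
print(vif_one(X_raw, 2), vif_one(X_ctr, 2))  # interaction VIF drops sharply
```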
> 
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/

