Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


From: "Fitzgerald, James" <J.Fitzgerald2@ucc.ie>

To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>

Subject: st: RE: comparing coefficients across models

Date: Thu, 2 Aug 2012 21:30:56 +0000

Hi Dalhia,

I asked the exact same question yesterday and received a number of very useful answers. I decided to use the method which tests differences in unstandardised coefficients. I have pieced together the relevant parts of the replies posted so you can make your own decision that suits your needs. HTH

James

Hi James,

>>> Typically the effect of a predictor in two different groups can be compared with the unstandardized beta. You can do a statistical test of the difference in the betas using the z-score formula below. I usually just calculate the difference between unstandardized betas from two different models by hand, though Stata might have a command to do this for you. Is that what you are looking for: the Stata command?
>>>
>>> z = (B1 - B2) / √(seB1^2 + seB2^2)

(As far as I know, Stata does not have such a command.)

>>> In terms of comparing the *magnitude* of the effect in the two different subsamples, it is more correct to do this qualitatively by comparing the *standardized* beta for the variable of interest against effect-size rules of thumb for small/medium/large (which sometimes differ by discipline, such as social sciences/education/engineering). Just report the standardized beta as the effect size in each group; it would be a qualitative statement about the effect in each group.
>>>
>>> Here are rules that I have for standardized regression coefficients:
>>> * Keith's (2006) rules for effects on school learning: .05 = too small to be considered meaningful, above .05 = small but meaningful effect, .10 = moderate effect, .25 = large effect.
>>> * Cohen's (1988) rules of thumb: .10 = small, .30 = medium, >= .50 = large.

Dalhia, -xtreg- does not support the -beta- option, so to calculate the standardised coefficients you can simply standardise your variables first:

egen stdy=std(y)
egen stdx1=std(x1)
egen stdx2=std(x2)

and then rerun your regressions:

xtreg stdy stdx1 stdx2 if group==1, fe robust
xtreg stdy stdx1 stdx2 if group==2, fe robust

The coefficients generated in each case can be interpreted as "a 1 standard deviation change in x1 results in a B1 standard deviation change in y".

However, another user posted the following:

> I'm a bit doubtful about Beta weights (standardised coefficients). They have largely gone out of style since the 1970s or so, at least in sociology and political science, because they standardize on the basis of standard deviations. It follows that one cannot compare Beta weights between models if the runs are conducted on samples with different variable standard deviations.
>
> Why not instead just compare the size of the unstandardized coefficients? Since regression coefficients (including those reported by Stata's fixed-effects routines) are in units of the dependent variable, one can often say how much change a one-unit change in an explanatory variable produces in the dependent variable. Suppose, for example, your dependent variable is in dollar amounts. Then the (unstandardized) coefficient on an explanatory variable is equal to the change in dollars that a one-unit change in that explanatory variable would produce.
>
> Or, if your dependent variable is not so easily interpreted and you can't get Stata to produce elasticities, other options probably are available. If you log both the dependent variable and an explanatory variable, the coefficient on that explanatory variable will be an elasticity, equal to the percentage change in the dependent variable attributable to a one percent change in that explanatory variable.
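The z-score test for the difference between two coefficients quoted earlier can be sketched numerically. The coefficient and standard-error values below are made up purely for illustration:

```python
import math

def coef_diff_z(b1, se1, b2, se2):
    """z-statistic for the difference between two coefficients
    estimated on independent samples: z = (B1 - B2) / sqrt(seB1^2 + seB2^2)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical values: B1 = 0.50 (se 0.10) in group 1, B2 = 0.20 (se 0.12) in group 2
z = coef_diff_z(0.50, 0.10, 0.20, 0.12)

# Two-sided p-value from the standard normal CDF (via the error function)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(z, p)  # z ≈ 1.92, p ≈ 0.055
```

Note this test assumes the two coefficients come from regressions on independent (non-overlapping) samples, which is why the standard errors simply add in quadrature.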
> If logged variables won't be appropriate for any of many possible reasons (such as zero values in the variables), any decent introductory econometrics book will show you how to compute elasticities at the mean by multiplying a regression coefficient by the ratio of the mean of the explanatory variable to the mean of the dependent variable. The fact that you are estimating with fixed effects rather than OLS does not matter.
>
> Dave Jacobs

Thanks David. That really clears things up. My dependent variable is easy to interpret, so I will just use the unstandardised coefficients. Do I just compare them qualitatively, or is there a test I can employ with confidence intervals?

Regards,
James

> Referees (or dissertation committees) will be more impressed with your work if you can say that an appropriate statistical test shows that regression coefficient X is significantly larger than coefficient Y.
>
> If you want to test for differences in coefficient size and the dependent variables are correlated, use the two-equation estimator called -sureg-. If -sureg- won't work, consider the Stata test called -suest-.
>
> Dave Jacobs

Hi Dave,

Thank you for your response. I tried using -margins, eyex- to calculate elasticities, but I got the following error message:

"Could not calculate numerical derivatives - discontinuous region with missing values encountered"

Have you any suggestions as to why I am receiving this error? Stata says this is the generic form of r(459), but does not go into any specifics as to what is actually causing the error to occur.

Also, -suest- and -sureg- do not support -xtreg-.

Best Regards,
James

Oops, I neglected to read the bottom of your last comment about -suest-.
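The elasticity-at-the-mean calculation Dave describes is just the coefficient scaled by the ratio of sample means. A minimal sketch with made-up numbers:

```python
# Elasticity at the mean: e = b * (xbar / ybar), i.e. the unstandardized
# coefficient times the ratio of the mean of the explanatory variable
# to the mean of the dependent variable. All numbers are hypothetical.
b = 2.5            # unstandardized coefficient on x
xbar = 4.0         # sample mean of x
ybar = 20.0        # sample mean of y
elasticity = b * xbar / ybar
print(elasticity)  # 0.5: a 1% change in x implies a 0.5% change in y at the means
```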
> Since an OLS model with separate dummy variables included for each case will provide the same coefficients as Stata's fixed-effects routines (although the OLS t-values will be a bit different from the more appropriate ones from -xtreg-), I don't see any reason why you could not estimate a fixed-effects model using OLS with the case dummies included and then test those coefficients with -suest-.
>
> Maybe one of the econometrically more knowledgeable list members will know of objections to this OLS replacement test procedure, but until I hear otherwise, I suspect it will be OK.
>
> Dave J.

No problem Dave! I'm not sure the separate dummy variables approach will work though, as I have an unbalanced panel with 1167 firms observed over 1 to 20 years. Thus, the degrees of freedom would be severely affected by the inclusion of the dummies.

James

________________________________________
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] on behalf of Dalhia [ggs_da@yahoo.com]
Sent: 02 August 2012 21:42
To: statalist@hsphsun2.harvard.edu
Subject: st: comparing coefficients across models

Hello, I have two groups and need to run the same regression model on both groups (the number of observations differs but the variables are all the same). I am interested in the difference in the coefficients for one particular variable. Here are the two models I am running:

xtreg y x1 x2 if group==1, fe robust
xtreg y x1 x2 if group==2, fe robust

I cannot put the two groups together and run the model with an interaction with a dummy variable for group, because then the model becomes highly multicollinear and the coefficients are not stable. Is there a test that can help me check whether the coefficient on x2 differs significantly across the two groups?

Thanks, I really appreciate your help. I just spent half a day on Google trying to figure out the Vuong test, Clarke test, and Chow test, and none appear to do what I need. Your help is much appreciated.
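The equivalence Dave mentions, that OLS with a full set of case dummies (the "LSDV" estimator) reproduces the within (fixed-effects) slope, can be checked numerically. The sketch below uses simulated data with made-up parameters, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, t = 5, 8
firm = np.repeat(np.arange(n_firms), t)          # firm id for each observation

alpha = rng.normal(0, 2, n_firms)[firm]          # firm fixed effects
x = rng.normal(size=n_firms * t) + alpha         # x correlated with the effects
y = alpha + 1.5 * x + rng.normal(0, 0.5, n_firms * t)  # true slope = 1.5

# (1) Within estimator: demean y and x by firm, regress demeaned y on demeaned x
x_mean = np.array([x[firm == i].mean() for i in range(n_firms)])[firm]
y_mean = np.array([y[firm == i].mean() for i in range(n_firms)])[firm]
xd, yd = x - x_mean, y - y_mean
b_within = (xd @ yd) / (xd @ xd)

# (2) OLS with one dummy column per firm (LSDV)
D = np.eye(n_firms)[firm]                        # firm dummy matrix
X = np.column_stack([x, D])
b_lsdv = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(b_within, b_lsdv)  # identical up to floating-point error
```

By the Frisch-Waugh-Lovell theorem the two slope estimates coincide exactly; James's degrees-of-freedom worry concerns the practicality of 1167 dummies, not the algebraic equivalence itself.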
Dalhia Mani

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/

**Follow-Ups**:
- **Re: st: RE: comparing coefficients across models**, *From:* Dalhia <ggs_da@yahoo.com>

**References**:
- **Re: st: Odd behaviour of wordcount function**, *From:* "Seed, Paul" <paul.seed@kcl.ac.uk>
- **st: comparing coefficients across models**, *From:* Dalhia <ggs_da@yahoo.com>
