
Re: Fw: st: comparing coefficients across models


From   David Hoaglin <dchoaglin@gmail.com>
To   statalist@hsphsun2.harvard.edu
Subject   Re: Fw: st: comparing coefficients across models
Date   Sat, 4 Aug 2012 22:07:56 -0400

Dalhia,

Thanks for the additional information.

I have thought about your model a bit, and I don't yet see why adding
x3*group_dummy would produce high collinearity among the predictors.
I have not used an expression such as x3*group_dummy as a predictor.
I suppose that is all right (pardon this gap in my experience with
Stata).  It's more difficult to think about what is going on in the
predictors in the presence of the fixed effects.
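For what it's worth, Stata's factor-variable notation can build the interaction for you instead of generating x3*group_dummy by hand. A minimal sketch, using hypothetical names (y, x1-x3, group_dummy, panelvar) in place of Dalhia's actual variables:

```stata
* Hypothetical variable names; substitute your own.
* c.x3##i.group_dummy expands to x3, the group indicator,
* and their interaction in one step; xtreg, fe supplies
* the fixed effects.
xtset panelvar
xtreg y c.x1 c.x2 c.x3##i.group_dummy, fe
```

The coefficient reported on 1.group_dummy#c.x3 is then the difference in the x3 slopes between the two groups.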

You can fit the same model separately to the two groups and compare
the two coefficients of x3.  Since the two models have the same list
of predictor variables, the coefficient of x3 reflects adjustment for
the contributions of x1 and x2 (and the intercept) in each model, but
those adjustments are not constrained to be the same in the two
models.  Thus, fitting the separate models is close to using a model
that includes group_dummy (to get a separate intercept for group 2),
x1*group_dummy, x2*group_dummy, and x3*group_dummy.  I say "close to"
because each model has its own set of fixed effects.  Fitting the
model in which the predictors are x1, x2, x3, and x3*group_dummy has
the advantage of making the same adjustments in the two groups.  It
also maximizes the number of residual degrees of freedom, though
the sample sizes in the two groups are large enough that this is not
very important.
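As a sketch of the two approaches (again with hypothetical variable names, and using -regress- rather than a fixed-effects estimator, since -suest- does not accept -xtreg, fe- results): fit the model separately by group, then test equality of the x3 coefficients across the two fits.

```stata
* Separate fits, then a cross-model test of the x3 slopes.
regress y x1 x2 x3 if group_dummy==0
estimates store g0
regress y x1 x2 x3 if group_dummy==1
estimates store g1
suest g0 g1
test [g0_mean]x3 = [g1_mean]x3

* The pooled alternative: one model, common adjustments for
* x1 and x2, and a single interaction term for the x3 slope.
regress y c.x1 c.x2 c.x3##i.group_dummy
```

In the pooled model the test of the interaction coefficient answers the same question, under the constraint that the x1 and x2 adjustments are the same in both groups.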

It is generally a good idea to check whether error variance is
constant, both within groups and across the ranges of x1 and x3, but
collinearity is a property of the predictor variables.
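A quick way to do both checks in Stata after a pooled fit (hypothetical names again):

```stata
regress y x1 x2 x3
rvfplot             // residual-versus-fitted plot: look for fanning
estat hettest       // Breusch-Pagan test for nonconstant variance
estat vif           // variance inflation factors for the predictors
```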

David Hoaglin

On Sat, Aug 4, 2012 at 4:45 PM, Dalhia <ggs_da@yahoo.com> wrote:
>
> David,
>
>
> Thank you so much for helping me think this through. I very much appreciate it.
>
> Here are the sample sizes for group_dummy (0/1)
>
> group dummy
>             |      Freq.     Percent        Cum.
> ------------+-----------------------------------
>           0 |      6,298       81.85       81.85
>           1 |      1,397       18.15      100.00
> ------------+-----------------------------------
>       Total |      7,695      100.00
>
> Your assumptions are correct. The business group dummy is different from group_dummy. Also, the group_dummy in the interaction model is 0 when group==1 and 1 when group==2. I am only interested in whether the slope against x3 differs when group_dummy is 0 versus 1; I am not interested in the intercept for the group dummy. From what I understand, I can get at the slope for x3 by running the regression for the two groups separately and then comparing the coefficients for x3. Is this correct? Or are you saying something different?
>
> As you suggested, I also looked at the graphs of the relationship between the two highly correlated variables (degree centrality and betweenness centrality) for (1) the whole sample, (2) group_dummy==0, and (3) group_dummy==1. The three graphs look extremely similar, with a negative, slightly curving slope (I can't figure out how to attach a file without getting my mail bounced from Statalist). Also, I apologize, but I made a mistake in my earlier email: the high correlation is between x1 (degree centrality) and x3 (betweenness centrality), not between x1 and x2.

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/