Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down on April 23, and its replacement, statalist.org, is already up and running.
I wonder whether the relation between group_dummy and x2 is part of
the problem. Also, what are the relative sample sizes for the two
values of group_dummy?
Without x3*group_dummy in the model, you would be fitting a slope
against x1, an offset for x2, and a slope against x3. When you
include x3*group_dummy, you are fitting an additional slope against x3
for the two groups defined by group_dummy (i.e., if b3 is the
coefficient of x3 and b4 is the coefficient of x3*group_dummy, the
slope against x3 is b3 when group_dummy = 0 and b3 + b4 when
group_dummy = 1).
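One way to see the two slopes directly is Stata's factor-variable syntax (Stata 11+). This is a sketch, not the poster's exact command: note that `x3*group_dummy` is not valid in a varlist, so the interaction must either be written with the `#` operator or generated by hand first.

```stata
* Factor-variable form: c.x3#i.group_dummy adds the extra
* x3 slope that applies when group_dummy == 1
xtreg y x1 x2 c.x3 c.x3#i.group_dummy, fe vce(robust)

* Equivalent hand-generated interaction term
generate x3_gd = x3 * group_dummy
xtreg y x1 x2 x3 x3_gd, fe vce(robust)

* Slope on x3 is _b[x3] (b3) when group_dummy == 0;
* lincom reports b3 + b4, the slope when group_dummy == 1
lincom x3 + x3_gd
```

The hand-generated version makes the b3/b4 decomposition above explicit: x3's coefficient is the group_dummy = 0 slope, and x3_gd's coefficient is the difference between the two groups' slopes.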
You aren't including group_dummy itself as a predictor, so I assume
that you don't want different intercepts for those two groups.
You have few enough variables that you should be able to diagnose the
problem by looking at how x1 and x3 are each related to x2 and
plotting x3 against x1 (overall, within the two groups defined by x2,
and within the two groups defined by group_dummy). Also, as I
suggested above, look at a crosstab of x2 and group_dummy.
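The diagnostics suggested above can be sketched with base Stata commands (variable names as in the model quoted below):

```stata
* Pairwise relations among the predictors
correlate x1 x2 x3 group_dummy

* Crosstab of the two dummies -- watch for empty or very thin cells
tabulate x2 group_dummy

* x3 against x1: overall, then within the groups defined by each dummy
scatter x3 x1
scatter x3 x1, by(x2)
scatter x3 x1, by(group_dummy)
```

If the crosstab shows that one combination of x2 and group_dummy is rare or empty, the interaction term will be poorly identified, which could account for the problem.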
On Fri, Aug 3, 2012 at 8:06 AM, Dalhia <email@example.com> wrote:
> Sorry about the confusion. Typo.
> Here is what I should have said:
> This is how I ran the interaction model:
> xtreg y x1 x2 x3 x3*group_dummy, fe robust
> where y is log of Tobin's q (a measure of firm performance)
> x1 is degree centrality (a network measure - continuous)
> x2 is business group dummy (codes whether or not a firm belongs to a cluster of firms)
> x3 is betweenness centrality (a network measure - continuous)
> group_dummy is whether or not the firm belongs to a particular component in the network.
> x2 and x3 are the most highly correlated (-0.73).