
Re: st: Marginal Effects for Logistic Multilevel Model with Interaction Terms


From   Richard Williams <[email protected]>
To   [email protected]
Subject   Re: st: Marginal Effects for Logistic Multilevel Model with Interaction Terms
Date   Sun, 29 Aug 2010 20:22:26 -0500

A belated followup to Maarten's lengthy post on this:

I notice that, unlike -mfx-, the new -margins- command does not even report marginal effects for interaction terms, e.g.

sysuse auto
* continuous-by-continuous interaction
logit foreign price mpg c.mpg#c.price, nolog
* average marginal effects: rows for price and mpg only, no separate row for the interaction
margins, dydx(*)

This makes sense to me: the interaction term can't change unless one of its component terms changes. So, the marginal effects for price and mpg incorporate the effect caused by the interaction term.

So, does this mean that marginal effects of interaction terms are no longer an issue because Stata is now handling them correctly, blending in the interaction effects with the main effects? Or, would people still want to (somehow) compute a marginal effect for the interaction term?

I used continuous variables in the above, but the same is true for factor variables. You only get marginal effects for each variable used, and not a separate effect for any interactions.
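
For example, a minimal sketch of the factor-variable case (still the
auto data; the guzzler indicator is made up purely for illustration):

sysuse auto, clear
generate byte guzzler = mpg < 20
logit foreign i.guzzler##c.price, nolog
margins, dydx(*)

Again -margins- lists an effect for guzzler (a discrete change) and
for price, but no separate row for the interaction.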

At 02:29 AM 8/20/2010, Maarten Buis wrote:
--- On Thu, 19/8/10, [email protected] wrote:
> -mfx- does not recognize interaction terms and thus does not
> calculate the marginal effects for interaction terms in the
> correct way (the resulting problems are explained in Ai/
> Norton and Greene (2010)).

The problem is that marginal effects try to force a form of
effect onto a model that is not native to that model. The
consequence is that they will always cause some problems, like
the difficulty you run into with interaction effects. It
is like fitting a square peg into a round hole: you might
force it in with a hammer (possibly with a bit of duct tape),
but the result will never be nice. The obvious solution is
to use the form of effect that is native to the model, unless
you really need that other type of effect for substantive
reasons. In the case of (multilevel) logistic regression that
means you need to look at odds ratios, and the interaction
effects will give you ratios of odds ratios. This is discussed
in Buis (2010).
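
As a minimal sketch of what that looks like in Stata (using the
auto data rather than a multilevel model, purely for illustration):

sysuse auto, clear
logit foreign c.price##c.mpg, or nolog
* with -or- the interaction term is reported in exponentiated form:
* it is the factor by which the odds ratio for price changes when
* mpg is one unit higher, i.e. a ratio of odds ratios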

So, based on my 2010 article I recommend that you just report
odds ratios, and interpret the interaction terms as ratios of
odds ratios. That article gives you a concrete example of what
such a discussion of results could look like; it is actually
much easier than many people think.

This article has led to quite a bit of private communication
with people who are so used to the idea that interactions are
very hard in non-linear models that they are surprised that
the solution can be so easy. So below I will give a bit longer
explanation that liberally borrows from these private
conversations.

Remember that an effect is nothing other than a comparison of
the expected outcome across real or counterfactual groups. We
observe incomes for a set of males, we find a comparable set
of females who work (that can be hard, which is why we have
models like -heckman-), and a comparison of the average wages
of the males and females is the effect of gender. We can
compare average wages by looking at the difference or at the
ratio, i.e. women earn x euros/pounds/yen/... less than men,
or women earn y% less than men. Both are completely valid
quantifications of the effect of gender on income.
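
With made-up numbers (purely illustrative, not from any dataset):

* the same gap expressed as a difference and as a ratio
display 2000 - 1600    // women earn 400 less (difference)
display 1600 / 2000    // women earn .8 times what men earn, i.e. 20% less (ratio)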

We usually start our statistical (econometric/psychometric/
...) modelling education with a linear model. Linear models
are naturally designed for a comparison of groups in terms of
differences. If we add a continuous variable x then its
parameter says that if we get one more unit of x we can expect
b more units of y, regardless of how many units of y we had to
start with. So we took all possible comparisons that were one
unit apart and constrained the differences in expected y to be
equal. We can relax such assumptions, but it shows that effects
in terms of differences are the native way of thinking about
effects in linear models.
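
A minimal sketch with the auto data (the variables are chosen
purely for illustration):

sysuse auto, clear
regress price mpg
* the coefficient on mpg is a constant difference: one more mile
* per gallon changes the expected price by b dollars, whatever the
* level of price you start from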

In non-linear models that include a log in their link function
(e.g. (multilevel) logistic regression, which models the
log(odds), Poisson regression, which models the log(count),
survival models, which model the log(hazard) or log(time), etc.)
the native way of thinking about effects is in terms of ratios.
If we add a continuous variable to one of those models we say
that the modelled quantity (the odds, the expected count, the
hazard, ...) increases by a factor of exp(b) when you get an
additional unit of x, regardless of the level you started from.
Again this assumption can be relaxed, but it shows that effects
in terms of ratios are the native way of thinking in this type
of non-linear model.
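
The logit analogue of the earlier sketch, again with the auto
data:

sysuse auto, clear
logit foreign mpg, or nolog
* the exponentiated coefficient on mpg is a constant factor: one
* more mile per gallon multiplies the odds of being foreign by
* exp(b), whatever the odds you start from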

Now you can use effects in terms of ratios in linear models (e.g.
elasticities) and effects in terms of differences in non-linear
models (marginal effects), but since they are not the native way
of thinking in those models there will always be some friction.
You cannot represent an effect that is constant in terms of
ratios with one effect in terms of differences, and vice versa
(except for the trivial case where there is no effect). This
type of problem multiplies when looking at interaction effects.
This is why Ai & Norton (2004) had to go through all the effort
and then present their interaction effects in terms of graphs,
while I could just exponentiate my coefficient and have my
interaction effect as one number.

The mathematical proof is straightforward, and can be directly
derived from the properties of the logarithm. A version of that
proof can actually be found in the Stata Journal article by Edward
Norton, Hua Wang, and Chunrong Ai (2004), where they discuss the
implementation of their technique in Stata. In section 2.6 of
Norton et al. (2004) they show that the exponentiated
interaction effect is the ratio of odds ratios. They claim that
nobody can understand that; in my article (Buis 2010) I show
that it is actually not that hard.
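
Writing that proof out as a sketch (with b0, b1, b2, b3 as generic
coefficients, not the notation of either article):

  log(odds) given x and z:        b0 + b1*x + b2*z + b3*x*z
  odds ratio for x at a given z:  OR_x(z) = exp(b1 + b3*z)
  ratio of those odds ratios:     OR_x(z+1)/OR_x(z) = exp(b3)

So the exponentiated interaction coefficient is exactly the ratio
of odds ratios.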

In short, Norton et al. and I don't disagree on the math. The
choice between the two methods is really one that needs to be
based on substantive and pragmatic reasoning. If your theory
gives you a clue whether or not you want to control for
differences in the baseline odds, then the choice is easy: with
level effects you don't control for such differences, with ratio
effects you do. When your theory is not that precise, you need
to use pragmatic reasoning. With odds ratios you need to put a
bit of effort into explaining your results, as a lot of people
find them hard. With marginal effects you have the awkwardness
of trying to force a linear model on top of a non-linear one.
Sometimes it is nice to present both, in the way that I have
done in my article.

Hope this helps,
Maarten

M.L. Buis (2010) "Stata tip 87: Interpretation of interactions in
non-linear models", The Stata Journal, 10(2), pp. 305-308.

Edward Norton, Hua Wang, and Chunrong Ai (2004) "Computing
interaction effects and standard errors in logit and probit
models", The Stata Journal, 4(2), pp. 154-167.
<http://www.stata-journal.com/article.html?article=st0063>

--------------------------
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Germany

http://www.maartenbuis.nl
--------------------------





-------------------------------------------
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
HOME:   (574)289-5227
EMAIL:  [email protected]
WWW:    http://www.nd.edu/~rwilliam

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

