
Re: st: testing for mediation

Subject   Re: st: testing for mediation
Date   16 Jul 2009 08:41:09 -0500

Dear Dr. Buis,
Thank you so much for your prompt reply and help. I have been able to download the -ldecomp- files. I am still stuck, however, because the predictor for which I wish to decompose the total effect into direct and indirect effects is not categorical, but rather a continuous, normally distributed variable (as is the postulated mediating variable). If I have understood the -ldecomp- approach correctly, it works only when the predictor of interest is categorical.
Do you have any suggestions? I greatly appreciate your advice,

John Schousboe

On Jul 16 2009, Maarten Buis wrote:

Mediation is harder to handle in non-linear models like -logit- than in linear models, so the type of trick you used below won't work in your case. This is discussed in , which presents one way of doing this test using -ldecomp-; you can download it from SSC by typing -ssc install ldecomp- in Stata.

Hope this helps,

Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen

--- On Thu, 16/7/09, John Schousboe wrote:

I would be grateful for any assistance from anyone on
proper use of SUEST command to test for mediation of the
effect of one predictor on a dependent variable from another
predictor. To be clear, I wish to test for mediation, not
effect modification. I think x1 is a cause of y and x2 is
also a cause of y. I also know that x1 is a cause of x2. By
Baron and Kenny's rules of mediation, one of the
necessary criteria to say that x2 mediates the effect of x1
on y is that the parameter estimate for x1 changes
significantly when x2 is added to the regression. The data I
have used is all from the same study sample.

Here is the code I have used:

    logit y x1 x2 x3
    estimates store A
    logit y x2 x3
    estimates store B
    suest A B
    testnl [A]x1=[B]x2

When I do this, the 95% confidence intervals of the
parameter estimates for x1 when x2 is or is not included
overlap considerably:
0.23 (95% C.I. 0.01 to 0.46) when x2 is included, and
0.33 (95% C.I. 0.11 to 0.54) when x2 is not included as a
predictor.

And yet the chi-square statistic and p-value for the test
are 6.37 and 0.012, respectively. How can this test be
significant when the confidence intervals overlap so much?
Any ideas on how I am misusing this test?
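I do understand that -suest- accounts for the covariance between the two sets of estimates, so the standard error of the difference can be much smaller than the individual standard errors suggest. Here is a back-of-envelope check of that arithmetic (in Python; the covariance value is purely a guess for illustration, not taken from my data):

    # Back-of-envelope: a difference between two estimates can be
    # statistically significant even when the individual 95% CIs overlap,
    # because the test is based on Var(b1 - b2) = V1 + V2 - 2*Cov(b1, b2).
    import math

    # Point estimates and standard errors recovered from the reported CIs.
    b1, se1 = 0.23, (0.46 - 0.01) / (2 * 1.96)  # x1 coefficient, x2 included
    b2, se2 = 0.33, (0.54 - 0.11) / (2 * 1.96)  # x1 coefficient, x2 excluded

    # Hypothetical covariance between the two estimates (same sample);
    # this value is a guess chosen only to show the effect.
    cov = 0.0118

    se_diff = math.sqrt(se1**2 + se2**2 - 2 * cov)
    z = (b1 - b2) / se_diff
    print(f"se of difference = {se_diff:.3f}, |z| = {abs(z):.2f}")

With a covariance of roughly this size the difference would be significant at the 5% level even though the two intervals overlap substantially, so perhaps eyeballing overlapping intervals is simply misleading when the estimates are strongly positively correlated.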

I would appreciate any guidance or help anyone can give.

John Schousboe MD, PhD

*   For searches and help try:

