
re: Re: st: Why does a non-statistically significant covariate in a regression model become significant in margins?


From   "Ariel Linden, DrPH" <ariel.linden@gmail.com>
To   <statalist@hsphsun2.harvard.edu>
Subject   re: Re: st: Why does a non-statistically significant covariate in a regression model become significant in margins?
Date   Fri, 14 Dec 2012 12:42:23 -0500

Thanks, Maarten.

I had already reviewed the relationship between treatment and the other
covariates to try to isolate the issue, but it remained unclear to me
why there was a difference between the regression table result and the
margins result.

I'll continue to work on this and report back if I can figure it out...

Ariel



From   Maarten Buis <maartenlbuis@gmail.com>
To   statalist@hsphsun2.harvard.edu
Subject   Re: st: Why does a non-statistically significant covariate in a regression model become significant in margins?
Date   Fri, 14 Dec 2012 09:55:24 +0100
________________________________________
On Thu, Dec 13, 2012 at 9:48 PM, Ariel Linden, DrPH wrote:
> I am getting conflicting results from the regression output for a
> covariate and when I subsequently run margins. More specifically, I ran
> -zinb- with the primary covariate of interest being treatment and got a
> p-value of 0.152. I then ran margins to see the predicted values for
> treatment and non-treatment, and then ran contrasts. The contrast
> indicates that the difference between treatment and control is
> statistically significant (p-value = 0.0048). See below for the output...
>
> Could this be due to the different methods of estimating the p-values
> between the original regression and margins? If so, how do I reconcile
> the two? Obviously, the covariate can't be both significant and
> non-significant at the same time.

I would say the difference is that they test different null
hypotheses: the p-values for the IRRs test whether the ratio of
means equals 1 regardless of the values of the other variables in
your model, while the p-value for your contrast tests whether the
average difference in means equals 0. In most cases I would expect
these different tests to lead to fairly similar conclusions, but I
can imagine distributions of the control variables where the two
lead to different conclusions.
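The two null hypotheses correspond to two different commands. A minimal sketch, assuming the outcome is y, the treatment indicator is treat, and x1 and x2 are controls (none of these names come from the thread's actual output):

```stata
* Hedged sketch: the same model, two different tests
zinb y i.treat x1 x2, inflate(x1) irr  // IRR p-value tests H0: ratio of means = 1
margins treat, post                    // average predicted counts by arm
contrast treat                         // tests H0: average difference in means = 0
```

The first p-value is about a multiplicative effect on the conditional mean; the second is about the average difference in predicted counts over the observed distribution of the controls, which is why the two can disagree.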

Sometimes these different conclusions are a sign of a problem in the
model or the data, and sometimes they are an accurate description of
the state of the world. To pin this down, I would look at the
distributions of the explanatory variables (I like Nick's -stripplot-
from SSC) in combination with graphs of predicted counts
(-marginsplot-) for the treated and non-treated against different
control variables, to see where the difference comes from and whether
it is plausible.
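The diagnostics suggested above might look roughly like this; the variable names and the at() range are illustrative, not taken from the thread:

```stata
* Hedged sketch of the suggested diagnostics (hypothetical names)
ssc install stripplot               // Nick Cox's -stripplot- from SSC
stripplot x1, over(treat)           // distribution of a control by treatment arm
margins treat, at(x1 = (0(10)100))  // predicted counts across values of x1
marginsplot                         // plot the two arms' predictions against x1
```

If the predicted-count curves for the two arms diverge only in a region of x1 where one arm has few observations, that would point to the kind of covariate distribution that makes the two tests disagree.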

Hope this helps,
Maarten

---------------------------------
Maarten L. Buis
WZB
Reichpietschufer 50
10785 Berlin
Germany

http://www.maartenbuis.nl



*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/

