Stata does margins. Does estimated marginal means. Does least-squares means. Does average and conditional marginal/partial effects, as derivatives or elasticities. Does average and conditional adjusted predictions. Does predictive margins. Does contrasts and pairwise comparisons of margins. Does more. Margins are statistics calculated from predictions of a previously fit model at fixed values of some covariates and averaging or otherwise integrating over the remaining covariates.
If that sounds overly technical, try this: margins answers the question, “What does my model have to say about such-and-such a group or such-and-such a person?”
It answers these questions either conditionally—based on fixed values of covariates—or averaged over the observations in a sample. Any sample.
It answers these questions about any prediction or any other response you can calculate as a function of your estimated parameters—linear responses, probabilities, hazards, survival times, odds ratios, risk differences, etc.
It answers these questions in terms of the response given covariate levels or in terms of the change in the response for a change in levels, a.k.a. marginal effects.
It answers these questions providing standard errors, test statistics, and confidence intervals; those statistics can take the covariates as given or adjust for sampling, a.k.a. predictive margins and survey statistics.
marginsplot can graph any of these margins or comparisons of margins.
Say that we are interested in the outcome y based on a person’s gender and packages of cigarettes smoked per day. Using Stata’s factor-variable notation, we can fit a logistic regression by typing
```
. logistic y sex##smokes age

Logistic regression                          Number of obs =    360
                                             LR chi2(6)    =  28.68
                                             Prob > chi2   = 0.0001
Log likelihood = -213.39423                  Pseudo R2     = 0.0630
```

| y | Odds ratio | Std. err. | z | P>\|z\| | [95% conf. interval] |
|---|---|---|---|---|---|
| Female | 2.325059 | .9008408 | 2.18 | 0.029 | 1.088021, 4.968563 |
| 1 to 2 | 2.323504 | .9175207 | 2.13 | 0.033 | 1.071557, 5.038155 |
| 2+ | 4.072109 | 1.618498 | 3.53 | 0.000 | 1.868535, 8.874368 |
| Female#1 to 2 | 1.192058 | .6978927 | 0.30 | 0.764 | .3784073, 3.755217 |
| Female#2+ | .273245 | .1520516 | -2.33 | 0.020 | .0918094, .8132368 |
| age | .9005478 | .0459045 | -2.06 | 0.040 | .814925, .9951667 |
| _cons | 7.294038 | 8.152778 | 1.78 | 0.075 | .8157424, 65.22034 |
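The `##` operator is Stata's shorthand for a full factorial specification; a sketch of the equivalent spelled-out command:

```stata
. logistic y i.sex i.smokes i.sex#i.smokes age   // same model as sex##smokes
```

That is, `sex##smokes` expands to the main effects of each variable plus their interaction, so we never have to create indicator or product variables by hand.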
The interaction between sex and smokes makes interpretation difficult. We can use margins to decipher their effects:
```
. margins sex smokes, post

Predictive margins                           Number of obs = 360
Model VCE: OIM
Expression: Pr(y), predict()
```

| | Margin | Std. err. | z | P>\|z\| | [95% conf. interval] |
|---|---|---|---|---|---|
| Male | .6203166 | .0354237 | 17.51 | 0.000 | .5508874, .6897459 |
| Female | .7156477 | .0328208 | 21.80 | 0.000 | .6513202, .7799752 |
| 0 | .5464742 | .0451283 | 12.11 | 0.000 | .4580243, .6349242 |
| 1 to 2 | .7406711 | .039336 | 18.83 | 0.000 | .6635739, .8177683 |
| 2+ | .7131303 | .0400748 | 17.79 | 0.000 | .6345851, .7916755 |
We obtain predictive margins. If the distribution of cigarettes smoked remained the same in the population, but everyone were male, we would expect about 62% to have a positive outcome for y; if everyone were female, about 72%. If instead the distribution of males and females were as observed but no one smoked, we would expect about 55% to have a positive outcome.
Is there a significant difference in the probability of a positive outcome between males and females? We can run tests after margins to find out:
```
. test 0.sex = 1.sex

 ( 1)  0bn.sex - 1.sex = 0

           chi2(  1) =    3.87
         Prob > chi2 =    0.0491
```
We find evidence at the 5% significance level that the predictive margins differ for males and females.
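An alternative route, had we not posted the margins, is margins' own contrast operators; a sketch, where `r.` requests contrasts against the reference category:

```stata
. quietly logistic y sex##smokes age
. margins r.sex        // difference in predictive margins, Female vs. Male
```

This reports the difference itself, with its standard error and confidence interval, rather than just a test statistic.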
How do sex and the number of cigarettes smoked interact? After fitting our logistic regression, we can obtain predictive margins for each of the levels in the interaction of these variables:
```
. margins smokes#sex

Predictive margins                           Number of obs = 360
Model VCE: OIM
Expression: Pr(y), predict()
```

| | Margin | Std. err. | z | P>\|z\| | [95% conf. interval] |
|---|---|---|---|---|---|
| 0#Male | .4415025 | .0654681 | 6.74 | 0.000 | .3131873, .5698176 |
| 0#Female | .6449343 | .0630512 | 10.23 | 0.000 | .5213562, .7685125 |
| 1 to 2#Male | .6447831 | .0652375 | 9.88 | 0.000 | .5169199, .7726463 |
| 1 to 2#Female | .8326533 | .0457744 | 18.19 | 0.000 | .7429373, .9223694 |
| 2+#Male | .7596062 | .0525805 | 14.45 | 0.000 | .6565504, .862662 |
| 2+#Female | .6686816 | .0599844 | 11.15 | 0.000 | .5511142, .7862489 |
An easy way to look at this interaction is to graph it using Stata's marginsplot.
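A sketch of the plotting step (note that the earlier `margins sex smokes, post` replaced the logistic results in memory, so the model must be refit first):

```stata
. quietly logistic y sex##smokes age
. margins smokes#sex
. marginsplot
```

marginsplot graphs the most recent margins results, with confidence intervals, and accepts the usual twoway graph options for customization.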
Let’s see an example of marginal effects. Because of Stata’s factor-variable features, we can get average partial and marginal effects for age even when age enters as a polynomial:
```
. webuse nlsw88, clear
(NLSW, 1988 extract)

. quietly probit union wage c.age c.age#c.age collgrad

. margins, dydx(age)

Average marginal effects                     Number of obs = 1,878
Model VCE: OIM
Expression: Pr(union), predict()
dy/dx wrt: age
```

| | dy/dx | Std. err. | z | P>\|z\| | [95% conf. interval] |
|---|---|---|---|---|---|
| age | .0015255 | .0032462 | 0.47 | 0.638 | -.0048369, .0078879 |
We are now using different data. On average, the probability that a person is in a union increases by 0.0015 as age increases by one year, although that effect is not statistically significant here. By default, margins reports average marginal (partial) effects: effects are calculated for each observation in the data and then averaged.
Alternatively, if we wanted effects at the average of the covariates, we could type
. margins, dydx(age) atmeans
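margins can also report effects on other scales or at other covariate values; a sketch, assuming the probit model above is still the active estimation:

```stata
. margins, eyex(age)                        // elasticity of Pr(union) w.r.t. age
. margins, dydx(age) at(age=(30 40 50))     // marginal effects at specific ages
```

The eyex(), dyex(), and eydx() options request elasticities and semielasticities, while at() evaluates the effects at whatever covariate values you specify.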
Stata’s margins includes options to control whether the standard errors reflect just the sampling variation of the estimated coefficients or whether they also reflect the sampling variation of the estimation sample. In the latter case, margins can account for complex survey sampling including weights, sampling units, pre- and poststratification, and subpopulations.
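A sketch of the survey case; the design variables (psu, stratid, finalwgt) and model here are hypothetical stand-ins:

```stata
. svyset psu [pweight = finalwgt], strata(stratid)   // declare the survey design
. svy: logit y x1 x2
. margins, vce(unconditional)    // SEs account for sampling of the covariates
```

With vce(unconditional), the standard errors treat the covariates as sampled rather than fixed, using linearization based on the declared survey design.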
margins works after EVERY Stata estimation command except exact logistic and exact Poisson regression; nested logit; structural vector autoregressive models; state-space models; unobserved-components models; and Bayesian estimation.