
Re: st: perform tests with marginal effects


From   Maarten Buis <[email protected]>
To   [email protected]
Subject   Re: st: perform tests with marginal effects
Date   Tue, 22 Jan 2013 10:50:55 +0100

On Mon, Jan 21, 2013 at 7:03 PM, Andreas Fagereng wrote:
> I run a probit of a dummy variable on the LHS and two dummy variables
> on the RHS.
> To get the marginal effects I use -mfx-.
> I then want to test whether the marginal effect of landsdel1 = e.g. 0.4
> When I do this, however, Stata seems to take the coefficients from the
> probit regression and not the marginal effects.
>
> How do I make the marginal effects testable?

You'll need to use the -margins- command and estimate your model with
factor variable notation. See -help margins- and -help
fvvarlist-. Below is an example.

*------------------ begin example ------------------
sysuse nlsw88, clear

gen byte marst = !never_married + married  ///
    if !missing(never_married, married)
label variable marst "marital status"
label define marst 0 "never married"       ///
                   1 "divorced or widowed" ///
                   2 "married"
label value marst marst

probit union i.marst
margins , dydx(*) post
test 2.marst = -.05
*------------------- end example -------------------
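If you instead want to test whether two marginal effects are equal to
one another, the same posted results can be used. Below is a minimal
sketch continuing the example above (so -marst- must already exist);
1.marst and 2.marst are the names -margins, dydx(*) post- gives the
average marginal effects.

*------------------ begin example ------------------
probit union i.marst
margins , dydx(*) post

// test equality of the two average marginal effects
test 1.marst = 2.marst

// or estimate their difference with a confidence interval
lincom 1.marst - 2.marst
*------------------- end example -------------------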
* (For more on examples I sent to the Statalist see:
* http://www.maartenbuis.nl/example_faq )

In general, if all you are going to do is report and test one
marginal effect per parameter, either the average marginal effect
(-margins-) or the marginal effect evaluated at the average of the
explanatory variables (-mfx-), then you are better off using a linear
probability model. In essence, such marginal effects estimate a linear
approximation of your non-linear model of the data. If you are only
interested in the results of that linear approximation, then it is
better to cut out the middle man and directly estimate a linear model
of your data. That means less modeling, so less opportunity for making
errors, and it is more honest about what your model is and what its
limitations are.
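
One way to see in what sense these marginal effects form a linear
model of your data is to compare the average marginal effects after
-probit- with the coefficients of a linear probability model. Below is
a minimal sketch using the same -nlsw88- setup as the other examples
in this message; in this saturated model the point estimates agree
exactly, while with continuous covariates they would typically only be
close.

*------------------ begin example ------------------
sysuse nlsw88, clear

gen byte marst = !never_married + married  ///
    if !missing(never_married, married)

// average marginal effects after a probit model
probit union i.marst
margins , dydx(*)

// coefficients of the linear probability model
reg union i.marst, vce(robust)
*------------------- end example -------------------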

Notice that I am not saying that a linear probability model is a good
model; all I am saying is that it is a better model than estimating a
non-linear (e.g. probit) model and then interpreting only one marginal
effect per variable. Personally, I prefer the logit model and
interpret the results as odds ratios, as some people on this list know
by now. However, there are situations where it does not matter which
model you choose, as they will all lead to exactly the same
predictions. For example, if your two indicator variables (I prefer
the term indicator variable over dummy variable) are mutually
exclusive, like the two indicator variables for -marst- in the example
below, then a linear probability model, a risk-ratio model
(-poisson-), a -logit- model, and a -probit- model will result in
exactly the same predictions. In this special situation there is thus
no reason to prefer one over the other, and you can safely choose
whichever model directly gives you the parameters you are looking for.

*------------------ begin example ------------------
sysuse nlsw88, clear

gen byte marst = !never_married + married  ///
    if !missing(never_married, married)
label variable marst "marital status"
label define marst 0 "never married"       ///
                   1 "divorced or widowed" ///
                   2 "married"
label value marst marst

// linear probability model
reg union i.marst, vce(robust)
predict pr_lpm

// The constant means: the never married have a 31%
// chance of being a union member
// 1.marst means: the divorced or widowed have a 4
// %-point lower chance of being a union member
// 2.marst means: the married have an 8 %-point lower
// chance of being a union member


// risk ratio model
poisson union i.marst, vce(robust) irr
predict pr_rr

// The constant means: the never married have a 31%
// chance of being a union member
// 1.marst means: the chance of being a union
// member is 14% lower when divorced or widowed,
// since (.86 - 1)*100% = -14%
// 2.marst means: the chance of being a union
// member is 25% lower when married
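
// An added note on the arithmetic: the % changes above
// follow from the incidence-rate ratios as (IRR - 1)*100%.
// They can be computed from the stored -poisson- results:
display "1.marst: " (exp(_b[1.marst])-1)*100 "%"
display "2.marst: " (exp(_b[2.marst])-1)*100 "%"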

// Notice the difference between %-point changes
// and % changes. If we start with a baseline
// value of 1% and change by 1 %-point, then the
// result will be 1 + 1 = 2%. If we change the
// baseline value by 1%, the result will be
// 1 * 1.01 = 1.01%.
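
// A small added check of that arithmetic in Stata itself:
display "1 %-point change: " 1+1 " percent"
display "1% change: " 1*1.01 " percent"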

// logit model
logit union i.marst, or
predict pr_logit

// The constant means: there are .44 union members
// for every non-union member among the never married
// 1.marst means: the odds of being a union member
// are 19% lower when divorced or widowed
// 2.marst means: the odds of being a union member
// are 33% lower when married
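
// An added check of the constant: the odds are p/(1-p),
// and we can recover them from the predicted probabilities
quietly summarize pr_logit if marst == 0
display "odds for the never married: " r(mean)/(1-r(mean))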

// probit model
probit union i.marst
predict pr_probit

// you could try to interpret the probit coefficients
// in terms of a latent propensity of being a union
// member, but I have never seen an application where
// this type of interpretation convinced me

// In this case all these models lead to exactly the
// same predicted probabilities
tab pr_lpm   pr_probit
tab pr_rr    pr_probit
tab pr_logit pr_probit
*------------------- end example -------------------
* (For more on examples I sent to the Statalist see:
* http://www.maartenbuis.nl/example_faq )

Hope this helps,
Maarten

---------------------------------
Maarten L. Buis
WZB
Reichpietschufer 50
10785 Berlin
Germany

http://www.maartenbuis.nl
---------------------------------