Statalist



st: -mfx-, -xtprobit- and Arulampalam (1999)


From   "Ken Clark" <[email protected]>
To   "[email protected]" <[email protected]>
Subject   st: -mfx-, -xtprobit- and Arulampalam (1999)
Date   Mon, 11 Aug 2008 14:10:08 +0100

Could anyone help with interpreting the marginal effects that Stata produces after -xtprobit-?  Arulampalam (1999) suggests that in order to compute the marginal effects correctly, the coefficients produced by standard software, including Stata, need to be adjusted to take account of the fact that the variance of the error term in the random effects model is not unity.  However, it seems to me that -mfx- does not do this.  More details below.

I'm estimating a random effects probit model:

	y*_it = x_it'b + u_i + v_it

using -xtprobit-.  After estimation I'd like to get the marginal effects of the independent variables using -mfx-.  Consider this example which is a simplified version of the model described in Wooldridge (2005).

. xtprobit union mar ,i(nr)

ITERATION LOG OMITTED

Random-effects probit regression                Number of obs      =      4360
Group variable: nr                              Number of groups   =       545

Random effects u_i ~ Gaussian                   Obs per group: min =         8
                                                               avg =       8.0
                                                               max =         8

                                                Wald chi2(1)       =      1.45
Log likelihood  = -1672.5134                    Prob > chi2        =    0.2293

------------------------------------------------------------------------------
       union |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mar |   .0975014   .0811062     1.20   0.229    -.0614638    .2564666
       _cons |   -1.43162   .1001505   -14.29   0.000    -1.627911   -1.235328
-------------+----------------------------------------------------------------
    /lnsig2u |   1.094767   .1137968                      .8717297    1.317805
-------------+----------------------------------------------------------------
     sigma_u |   1.728724   .0983616                        1.5463     1.93267
         rho |   .7492784   .0213779                      .7051055    .7888163
------------------------------------------------------------------------------
Likelihood-ratio test of rho=0: chibar2(01) =  1492.96 Prob >= chibar2 = 0.000
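
As an aside, the rho reported here is (if I have the definition right) just the share of the composite error variance due to u_i, i.e. e(sigma_u)^2/(1+e(sigma_u)^2), and checking this from the saved results gives roughly .7493, matching the output:

. di e(sigma_u)^2/(1+e(sigma_u)^2)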

. 
. mfx compute, predict(pu0)

Marginal effects after xtprobit
      y  = Pr(union=1 assuming u_i=0) (predict, pu0)
         =  .08244415
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
     mar*|   .0149562       .0127    1.18   0.239  -.009928   .03984   .438991
------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1


Looking at this output, it's fairly easy to figure out where the marginal effect (0.0149562) is coming from.  mar is a dummy variable so the marginal effect is the predicted value when mar==1 minus the predicted value when mar==0.  This is calculated as: 

. di normal(-1.43162+0.0975014)-normal(-1.43162)
.01495619
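
Equivalently, using the coefficients stored by -xtprobit- rather than retyping them (a quick sketch, which should reproduce the same number):

. di normal(_b[_cons]+_b[mar])-normal(_b[_cons])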

However, this would seem to assume that the variance of the error term in this model is unity.  That is true in a simple binary probit, but I think the composite nature of the error means it is not so in the random effects version of the model.  The variance of the error term is actually (in Stata speak) 1+e(sigma_u)^2, so the marginal effects calculation has to be adjusted appropriately:

. di normal((-1.43162+0.0975014)/((1+e(sigma_u)^2)^0.5))-normal((-1.43162)/((1+e(sigma_u)^2)^0.5))
.0153243

(Note that Arulampalam (1999) suggests multiplying the coefficients by sqrt(1-e(rho)), but this is equivalent to what I've done here - see also the discussion of average partial effects in Wooldridge (2005).)
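
As a quick numerical check of that equivalence, the two scale factors both come out at roughly .5007 here:

. di sqrt(1-e(rho))

. di 1/sqrt(1+e(sigma_u)^2)

and the corrected figure of .0153243 can be reproduced directly from the stored results:

. di normal((_b[_cons]+_b[mar])*sqrt(1-e(rho)))-normal(_b[_cons]*sqrt(1-e(rho)))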

The difference between what -mfx- produces and the "corrected" method is small in this example, but it is much larger (a 30% difference) in the real-world application I'm working on.  The potential for incorrect inferences is clear.

One further point is worth making. Look at the -mfx- output where the line 

y  = Pr(union=1 assuming u_i=0) (predict, pu0)
         =  .08244415

is supposed to tell us the predicted value of union at the sample mean of the independent variable(s).  This is just 

. di normal(-1.43162+0.0975014*.438991)
.08244409

but is miles off the mean of the dependent variable

. su union if e(sample)

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
       union |      4360    .2440367    .4295639          0          1


whereas making the correction gives something much closer

. predict index, xb

. gen phata=normal(index/((1+e(sigma_u)^2)^0.5))

. su phata

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
       phata |      4360     .243463    .0076058   .2367358   .2520601
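
For comparison, a rough sketch of the uncorrected prediction (phat0 is just an illustrative name): its mean is only around .08, essentially the -mfx- figure above and well below the sample mean of union.

. gen phat0=normal(index)   // uncorrected: ignores the 1+sigma_u^2 scaling

. su phat0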


Comments welcome.

Ken



___
References

Arulampalam, W. (1999), A note on estimated coefficients in the random effects probit model, Oxford Bulletin of Economics and Statistics, 61, 597-602.

Wooldridge, J. (2005), Simple solutions to the initial conditions problem in dynamic, nonlinear panel data models with unobserved heterogeneity, Journal of Applied Econometrics, 20, 39-54.




