
Re: st: Random effects probit


From   "Wiji Arulampalam" <[email protected]>
To   <[email protected]>, <[email protected]>
Subject   Re: st: Random effects probit
Date   Wed, 18 Sep 2002 16:02:16 +0100

I am really confused now. 

I am dealing with models that have many variables, estimated on very large datasets. This is the first time I have used xtprobit with a lagged dependent variable (LDV) as a regressor. When I didn't get any std. errors for rho or ln(sig2) but got proper std. errors for the other coefficient estimates, and when these coefficient estimates were the same as those from an ordinary probit, I automatically assumed that there was no unobserved heterogeneity! Note, this problem only occurs when I have an LDV. But my conclusions were obviously wrong - this I found out when I checked the model in Limdep. In the past I have always used Limdep for RE probit and have had no problems at all. The only reason I used xtprobit was the size of the dataset and model, and I thought it would be easier given that my data was stored as a Stata file.

It seems a bit much to ask users to try different starting values to check the sensitivity of the results when estimating a very large model on a lot of data. How is one supposed to decide whether the program gave the same answer as the ordinary probit because of unstable optimisation or because there genuinely is no unobserved heterogeneity? How would I discriminate between these two?

many thanks
best
wiji





=================================
Professor Wiji Arulampalam,
Department of Economics,
University of Warwick,
Coventry,
CV4 7AL,
UK.
Tel: +44 (24) 7652 3471
Sec. Tel: +44 (24) 7652 3202
Fax: +44 (24) 7652 3032
email:  [email protected]
http://www.warwick.ac.uk/Economics/arulampalam/
RES2003: http://www.warwick.ac.uk/res2003/

>>> [email protected] 09/17/02 08:44PM >>>
In the recent conversation about the results from -xtprobit-, Wiji
Arulampalam <[email protected]> and others queried how
-xtprobit- obtains its starting values.

As James Hardin <[email protected]> nicely explained, 
[...]

In this model, the starting value for "rho" can determine whether or not the
optimization concludes successfully.  Unfortunately, the grid-search method
used by -xtprobit- to choose starting values does not seem adequate for this
particular problem.  Since layering a nonlinear optimization method on top of
the quadrature approximation of the likelihood often brings difficulties in
the estimation process, the importance of starting values is not too
surprising.  Still, we have been looking into other methods of optimization
and quadrature techniques.

With difficult data, e.g., when the quadrature check indicates unstable
results, it may be possible to obtain better estimates by re-estimating the
model over a one-dimensional grid of starting values.  The starting values
are constructed by augmenting the converged cross-sectional -probit- results
with a grid of values for "rho".  The optimization is then performed in a
loop.  Afterwards, if the converged result with the largest log likelihood
also has a full-rank estimated VCE, we have a candidate solution that should
be checked with -quadchk-.
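
(For reference, -xtprobit- parameterizes the panel-level variance as
rho = sig2u/(1 + sig2u), where sig2u is the variance of the random effect;
inverting gives lnsig2u = ln(rho/(1 - rho)).  This is how the grid of rho
values is mapped into starting values for lnsig2u in the loop below.)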

Here we implement this approach via a -forvalues- loop:

----------------------------Begin do-file-----------------------------------

* get starting values from a pooled cross-sectional probit
qui probit sue lague
mat b = e(b)

* loop over a grid of starting values for rho
forvalues rho = 0.1(0.1)0.9 {
	* map rho into the lnsig2u metric used by -xtprobit-
	local lnsig2u = ln(`rho'/(1-`rho'))
	* append the lnsig2u starting value to the probit coefficients
	mat b1 = b, `lnsig2u'
	di 
	di as res "rho = " `rho' _c
	qui xtprobit sue lague, i(ind1) quad(20) from(b1, copy)
	* report the log likelihood, coefficients, and standard errors
	di as txt _skip(10) "Log likelihood = " as res %8.0g e(ll) 
	di as txt "coef." as res _col(15) %8.0g _b[lague] /*
		*/ _col(30) %8.0g [sue]_cons _col(45) %8.0g [lnsig2u]_cons /*
		*/ _col(60) %8.0g e(rho)
	di as txt "std. err." as res _col(15) %8.0g _se[lague] /*
		*/ _col(30) %8.0g [sue]_se[_cons] /*
		*/ _col(45) %8.0g [lnsig2u]_se[_cons]
}

----------------------------End do-file-------------------------------------

Since this is only an example, we shorten the output by displaying only the
estimated coefficients, their standard errors, and the log likelihood.  In
other implementations, it might be better to display all the output; this is
a matter of preference.  Note that an estimated standard error of 0 indicates
a less than full-rank estimated VCE.
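
One way to flag this programmatically rather than by eye (a minimal sketch,
not part of the original post; the matrix function diag0cnt() counts zeros on
the diagonal of a matrix and may not exist in older Statas) is

	mat V = e(V)
	if diag0cnt(V) > 0 {
		di as err "estimated VCE is not of full rank"
	}

after each call to -xtprobit-.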

The results from the above method are:

		lague		_cons	     lnsig2u	      rho

rho = .1          Log likelihood = -321.537
coef.          .978465       -2.62413        .094874        .523701
std. err.      .257994        .269102         .46741

rho = .2          Log likelihood = -324.476
coef.          1.50498        -2.1291       -1.44237        .191179
std. err.      .179505        .065135              0

rho = .3          Log likelihood = -323.981
coef.          1.44582       -2.16482       -1.24305        .223905
std. err.      .182259        .067259              0

rho = .4          Log likelihood = -323.855
coef.           1.4304       -2.17465        -1.1936        .232617
std. err.      .182962        .067849              0

rho = .5          Log likelihood = -321.537
coef.          .978465       -2.62413        .094874        .523701
std. err.      .257992        .269095        .467398

rho = .6          Log likelihood = -321.537
coef.          .978465       -2.62413        .094874        .523701
std. err.      .257994        .269102         .46741

rho = .7          Log likelihood = -326.897
coef.          1.76802       -1.99957       -2.78606        .058082
std. err.      .165957        .057566              0

rho = .8          Log likelihood = -325.316
coef.          1.60022        -2.0773       -1.80932        .140721
std. err.      .174846        .062075              0

rho = .9          Log likelihood = -338.513
coef.          .894369       -6.26237        3.17317        .959812
std. err.         .194              0        .042211


The output shows that when the starting value for rho is 0.1, 0.5, or 0.6,
the optimization converges with 20 quadrature points and the estimated VCE is
of full rank.  Since these three starting values lead to the same solution,
the results from -quadchk- will also be the same.
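
When the grid is long, picking the winner by eye gets tedious.  Below is a
minimal sketch of automating the choice; it is not part of the original post,
reuses the names above, and takes a zero standard error on lnsig2u:_cons as
the mark of a rank-deficient VCE:

----------------------------Begin do-file-----------------------------------

qui probit sue lague
mat b = e(b)

* track the best full-rank candidate over the grid
local bestll = -1e+300
local bestrho = .
forvalues rho = 0.1(0.1)0.9 {
	local lnsig2u = ln(`rho'/(1-`rho'))
	mat b1 = b, `lnsig2u'
	qui xtprobit sue lague, i(ind1) quad(20) from(b1, copy)
	* keep this candidate only if the VCE looks full rank and the
	* log likelihood improves on the best so far
	if [lnsig2u]_se[_cons] > 0 & e(ll) > `bestll' {
		local bestll = e(ll)
		local bestrho = `rho'
	}
}
di as txt "best starting rho = " as res `bestrho' as txt /*
	*/ ", log likelihood = " as res `bestll'

----------------------------End do-file-------------------------------------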

Choosing the solution produced by the starting value rho = 0.6, we can check
the stability of the quadrature approximation.

. qui probit sue lague

. mat b = e(b)

. local rho = .6

. local lnsig2u = ln(`rho'/(1-`rho'))

. mat b1 = b, `lnsig2u'

. xtprobit sue lague, i(ind1) quad(20) from(b1, copy) nolog


Random-effects probit                           Number of obs      =      2411
Group variable (i) : ind1                       Number of groups   =       500

Random effects u_i ~ Gaussian                   Obs per group: min =         1
                                                               avg =       4.8
                                                               max =         7

                                                Wald chi2(1)       =     14.38
Log likelihood  = -321.53746                    Prob > chi2        =    0.0001

------------------------------------------------------------------------------
         sue |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       lague |   .9784651   .2579944     3.79   0.000     .4728054    1.484125
       _cons |  -2.624126   .2691015    -9.75   0.000    -3.151556   -2.096697
-------------+----------------------------------------------------------------
    /lnsig2u |   .0948743   .4674101                     -.8212327    1.010981
-------------+----------------------------------------------------------------
     sigma_u |    1.04858   .2450585                      .6632413    1.657799
         rho |   .5237008     .11659                       .305502    .7332122
------------------------------------------------------------------------------

. quadchk, noout

Refitting model quad() = 16
Refitting model quad() = 24

                         Quadrature check

               Fitted       Comparison     Comparison
             quadrature     quadrature     quadrature
             20 points      16 points      24 points
-----------------------------------------------------
Log          -321.53746     -321.54191       -321.535
likelihood                  -.00444993      .00245667   Difference
                             .00001384     -7.640e-06   Relative difference
-----------------------------------------------------
sue:          .97846512       .9789156      .97752771
   lague                     .00045048     -.00093741   Difference
                             .00046039     -.00095804   Relative difference
-----------------------------------------------------
sue:         -2.6241262     -2.6226448     -2.6260355
   _cons                     .00148137     -.00190931   Difference
                            -.00056452       .0007276   Relative difference
-----------------------------------------------------
lnsig2u:       .0948743      .09345529      .09850014
   _cons                    -.00141901      .00362584   Difference
                            -.01495678      .03821729   Relative difference
-----------------------------------------------------

Unfortunately, the relative differences for lnsig2u:_cons appear to be too
large for the estimates to be safely interpreted.  At this point, we could
either repeat our method with a different number of quadrature points or
conclude that we cannot obtain interpretable results from this combination of
model and data.
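
A minimal sketch of the first option is below; it is not part of the original
post, reuses b1 from above, and simply refits over a small grid of quadrature
points before re-running -quadchk- (the admissible range for quad() depends
on the Stata version):

----------------------------Begin do-file-----------------------------------

* refit with different numbers of quadrature points and re-check stability
forvalues q = 12(4)24 {
	qui xtprobit sue lague, i(ind1) quad(`q') from(b1, copy)
	di as txt "quad(`q'): log likelihood = " as res %10.0g e(ll)
	quadchk, noout
}

----------------------------End do-file-------------------------------------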


Weihua Guan <[email protected]>
David M. Drukker <[email protected]>

*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html 
*   http://www.stata.com/support/statalist/faq 
*   http://www.ats.ucla.edu/stat/stata/
*


