
st: Question about Simulated MLE using d1 method

From   "Zhang, Sisi" <>
To   <>
Subject   st: Question about Simulated MLE using d1 method
Date   Fri, 9 Jul 2010 13:47:56 -0400

Hi Statalisters,

I have a question about how to calculate the gradient in simulated MLE using
the d1 method. Should we calculate the gradient in each simulation right
after calculating the likelihood, and then, after all simulations, sum the
gradients and take the average? I couldn't figure out how to make it work,
even for a very simple model.

The following is an ml program for a random-effects probit model that
estimates a single parameter b. The model is standard: Y* = xb + mu(i) +
v(it), with Y = 1 if Y* > 0. In this program, when I simulate more than once
(e.g. $S=20) and accumulate g1 across simulations with
"replace `g1'=`g1'+....", the program can't converge and stays at the same
values in every iteration. However, if I simulate just once ($S=1), the
program works fine and converges quickly (in that case I generate
`g1'=normalden(.)... rather than accumulate `g1'=`g1'+normalden(.)...). The
program also works fine when I simulate 20 times but use the d0 method and
ignore the gradient part. So I believe the problem is in the way `g1' is
accumulated.
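A language-neutral way to see the right averaging (a sketch with made-up values; S, x, b, and the mu draws are illustrative, not from the program below): for one y = 1 observation the simulated likelihood is L = (1/S) * sum_s Phi(xb + mu_s), so d lnL/db = [x * sum_s normalden(xb + mu_s)] / [sum_s Phi(xb + mu_s)] — the 1/S cancels, and the accumulated densities are divided once by the accumulated likelihood rather than averaged on their own. A quick finite-difference check in Python:

```python
import math

# Finite-difference check of the simulated-likelihood gradient.
# All values here (S, x, b, mu draws) are made up for illustration.

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

S = 20
x = 0.7
mu = [2.0 + 0.5 * math.sin(s) for s in range(S)]  # fixed stand-in "draws"

def loglik(b):
    # one y = 1 observation: ln[(1/S) * sum_s Phi(x*b + mu_s)]
    return math.log(sum(Phi(x * b + m) for m in mu) / S)

def grad(b):
    # analytic gradient: the 1/S cancels, leaving
    # x * sum_s phi(x*b + mu_s) / sum_s Phi(x*b + mu_s)
    num = x * sum(phi(x * b + m) for m in mu)
    den = sum(Phi(x * b + m) for m in mu)
    return num / den

b, h = 0.3, 1e-6
fd = (loglik(b + h) - loglik(b - h)) / (2 * h)
print(grad(b), fd)  # the two should agree to several decimal places
```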

program define relnf
        args todo b lnf g
        tempvar xb lj mu random_mu g1
        mleval `xb' = `b'
        local y $ML_y1
        set seed 1234567
        gen double `lj' = 0
        gen double `g1' = 0
        forv s = 1/$S {
                capture drop `random_mu' `mu'
                /* Simulate a normal mu with mean 2 and std dev 0.5 for
                   each pid. The draw expression was cut off in the
                   original post; rnormal(2, 0.5) matches the
                   description. */
                qui bysort pid: gen double `random_mu' = rnormal(2, 0.5)
                qui by pid: egen `mu' = mean(`random_mu')
                qui replace `mu' = `mu'*sqrt($T - 1)
                /* Accumulate the simulated likelihood; this line was
                   missing from the post as received but is implied by
                   the division below and by the mlsum line. */
                replace `lj' = `lj' + normal(`xb' + `mu')  if `y' == 1
                replace `lj' = `lj' + normal(-`xb' - `mu') if `y' == 0
                replace `g1' = `g1' + normalden(`xb' + `mu')/`lj' if `y' == 1
                replace `g1' = `g1' - normalden(`xb' + `mu')/`lj' if `y' == 0
        }
        mlsum `lnf' = ln(`lj'/${S})
        if (`todo' == 0 | `lnf' >= .) exit
        tempname d1
        mlvecsum `lnf' `d1' = `g1'/$S
        matrix `g' = (`d1')
end
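To make the suspicion concrete, here is a toy numeric contrast of the two accumulation orders (made-up values; the points below are arbitrary stand-ins for `xb'+`mu' across simulation draws): dividing each draw's density by the likelihood accumulated so far does not reproduce the single division of the summed densities by the final total, which is what the log-likelihood's derivative actually requires.

```python
import math

# Toy contrast: per-draw division by the running likelihood vs one
# division by the final accumulated likelihood. Values are illustrative.

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

zs = [0.1, 0.5, -0.3, 1.2]   # stand-ins for xb + mu_s across draws

# in-loop division by the partial sum, as in the posted accumulation
running_lj = 0.0
g_inloop = 0.0
for z in zs:
    running_lj += Phi(z)
    g_inloop += phi(z) / running_lj

# single division by the total, matching d ln[(1/S) sum_s Phi] / d(xb)
g_total = sum(phi(z) for z in zs) / sum(Phi(z) for z in zs)

print(g_inloop, g_total)  # the two differ
```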

If anyone has written code for any kind of simulated MLE using the d1 method
and wouldn't mind sharing it with me, I would greatly appreciate it!

Please help, thanks,

Sisi Zhang
