



st: RE: -ML- vs. -ARCH-


From   Martien Lamers <Martien.Lamers@UGent.be>
To   "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject   st: RE: -ML- vs. -ARCH-
Date   Wed, 18 Aug 2010 15:14:21 +0200

Dear Statalist, 

I have been able to solve the problem below; it was indeed caused by the [_n-1] operator, which produced 246 values of `lnf' while the parameters are evaluated over 247 observations. With the code below I can now estimate GARCH(1,1) models. However, I cannot achieve the same convergence as the -arch- command does.

****************************************************************************************
program drop _all

set more off
sysuse sp500, clear
gen return=change/open * 100
drop if _n==1
gen t=_n
tsset t

* Own maximum likelihood program * 
program garchtry
	args lnf mu omega alpha1 beta
	tempvar err ext h
	qui gen double `err'=$ML_y1-`mu'
	qui gen double `ext'=`err'[_n-1]
	qui gen double `h'=`omega'/(1-`alpha1'-`beta')
	qui replace `h'=`omega'+`alpha1'*`h'[_n-1]+`beta'*`ext'^2 if _n>1
	qui replace `lnf'=lnnormalden($ML_y1,`mu',`h')
end

ml model lf garchtry (mu: return=) /omega /alpha1 /beta
ml init /omega=0.1 /alpha1=0.8 /beta=0.05
ml search
ml max
****************************************************************************************

initial:       log likelihood =   -418.816
rescale:       log likelihood =   -418.816
rescale eq:    log likelihood =   -418.816
Iteration 0:   log likelihood =   -418.816  
Iteration 1:   log likelihood = -417.65022  
Iteration 2:   log likelihood = -417.20853  
Iteration 3:   log likelihood = -417.19897  
Iteration 4:   log likelihood = -417.19896  

                                                  Number of obs   =        247
                                                  Wald chi2(0)    =          .
Log likelihood = -417.19896                       Prob > chi2     =          .

------------------------------------------------------------------------------
      return |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
mu           |
       _cons |  -.0068106   .0800172    -0.09   0.932    -.1636414    .1500202
-------------+----------------------------------------------------------------
omega        |
       _cons |   .2086953    .092274     2.26   0.024     .0278416    .3895489
-------------+----------------------------------------------------------------
alpha1       |
       _cons |   .7939897   .0833277     9.53   0.000     .6306705    .9573089
-------------+----------------------------------------------------------------
beta         |
       _cons |   .0367916   .0168409     2.18   0.029     .0037841    .0697991
------------------------------------------------------------------------------
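For reference, the Gaussian GARCH(1,1) log likelihood that an evaluator like this targets can be sketched outside Stata. The sketch below (in Python, with hypothetical parameter values) uses the textbook convention in which alpha multiplies the lagged squared residual and beta the lagged conditional variance, and initialises h at the unconditional variance, as the program above does:

```python
import math

def garch11_loglik(returns, mu, omega, alpha, beta):
    """Gaussian GARCH(1,1) log likelihood.

    h_t = omega + alpha * eps_{t-1}^2 + beta * h_{t-1},
    with h_1 initialised at the unconditional variance
    omega / (1 - alpha - beta).
    """
    eps = [r - mu for r in returns]
    h = omega / (1.0 - alpha - beta)  # h_1: unconditional variance
    ll = 0.0
    for t, e in enumerate(eps):
        if t > 0:
            # recursion: previous h is still stored in `h`
            h = omega + alpha * eps[t - 1] ** 2 + beta * h
        # Gaussian log density with conditional variance h
        ll += -0.5 * (math.log(2.0 * math.pi) + math.log(h) + e * e / h)
    return ll
```

Note that the Stata program above attaches `alpha1' to the lagged variance and `beta' to the lagged squared residual, which is the opposite labelling of -arch-'s arch() and garch() coefficients.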


However, when I use the arch command:

arch return, arch(1) garch(1)

(setting optimization to BHHH)
Iteration 0:   log likelihood = -423.89884  
Iteration 1:   log likelihood = -422.63041  
Iteration 2:   log likelihood = -419.98124  
Iteration 3:   log likelihood = -418.07679  
Iteration 4:   log likelihood = -415.93058  
(switching optimization to BFGS)
Iteration 5:   log likelihood = -415.55547  
Iteration 6:   log likelihood = -415.54284  
Iteration 7:   log likelihood = -415.54221  
Iteration 8:   log likelihood = -415.54221  

ARCH family regression

Sample: 1 - 247                                    Number of obs   =       247
Distribution: Gaussian                             Wald chi2(.)    =         .
Log likelihood = -415.5422                         Prob > chi2     =         .

------------------------------------------------------------------------------
             |                 OPG
      return |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
return       |
       _cons |  -.0112784   .0804964    -0.14   0.889    -.1690485    .1464918
-------------+----------------------------------------------------------------
ARCH         |
        arch |
         L1. |   .1101663   .0446167     2.47   0.014     .0227192    .1976133
             |
       garch |
         L1. |   .7915719   .0874705     9.05   0.000     .6201329    .9630109
             |
       _cons |   .1637969   .1041167     1.57   0.116    -.0402682    .3678619
------------------------------------------------------------------------------

The likelihood converges at a different point (-417.19 vs. -415.54), and the parameters consequently differ quite a bit in size, standard error, and significance. When I use the values from -arch- as initial values, the likelihood is actually -431.11:

****************************************************************************************
ml model lf garchtry (mu: return=) /omega /alpha1 /beta
ml init mu:_cons=-.0112784 /omega=.1637969 /alpha1=.7915719 /beta=.1101663
ml report
****************************************************************************************

Current coefficient vector:
           mu:     omega:    alpha1:      beta:
        _cons      _cons      _cons      _cons
r1  -.0112784   .1637969   .7915719   .1101663

Value of log likelihood function = -431.10749


I calculated this manually, and the value -431.11 is correct. Both programs use a Gaussian distribution. -arch- uses technique(bhhh 5 bfgs 10), but adding that option to the -ml model- statement does not help: it still converges to -417.19 (and using technique(nr) in -arch- still makes it converge to -415.54). I also did not find any constraints that -arch- imposes, or any difference in tolerance levels.
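One way to diagnose such a discrepancy is to evaluate candidate likelihood definitions at the same parameter vector outside Stata. A minimal Python sketch with illustrative values: Stata's lnnormalden(y, m, s) expects a standard deviation as its third argument, so a Gaussian log density written in terms of a conditional variance h must pass sqrt(h), and the two expressions below agree only in that case:

```python
import math

def ll_variance_as_sd(e, s):
    """What lnnormalden(y, mu, s) computes: Gaussian log density
    of residual e with standard deviation s."""
    return -0.5 * math.log(2.0 * math.pi) - math.log(s) - e * e / (2.0 * s * s)

def ll_correct(e, h):
    """Gaussian log density of residual e with conditional VARIANCE h,
    i.e. standard deviation sqrt(h)."""
    return -0.5 * (math.log(2.0 * math.pi) + math.log(h) + e * e / h)
```

Passing the variance directly as the third argument of lnnormalden() gives a different objective function, which alone can move the optimum away from the one -arch- reports.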

My question is: which can I trust? I do not know the inner workings of -arch- (although I re-checked in EViews, which gives the same values as -arch-), but I do not see what else I can do to reach the same convergence. The differences in the coefficients are not huge, but this example already shows that some coefficients are significant under my own program and not under -arch-. When I move on to more difficult, not pre-programmed likelihood models, can I trust the outcomes?

Thanks.

Martien Lamers
Department of Financial Economics
Ghent University



-----Original Message-----
From: Martien Lamers
Sent: Monday, 16 August 2010 12:12
To: statalist@hsphsun2.harvard.edu
Subject: programming own arch ml

Dear Statalist,

I am currently trying to program my own ARCH/GARCH models using Stata's -ml- command, since I expect to write more complicated maximum likelihood programs in the future. So far, my attempts have not been very successful. I have checked the Stata help files, the Statalist archives, and multiple -ml- threads on problems like mine; I have gone through the book by Gould et al. on maximum likelihood estimation with Stata (although they do not really discuss time series) and have asked people at my department, all to no avail. Any comments would be appreciated, since I am quite new to this topic. My apologies for being a n00b.

This is the do-file I have written:

*******************************************
program drop _all

set more off
sysuse sp500, clear
gen return=(change/open) * 100
gen t=_n
tsset t

* Stata's ARCH command *
arch return volume, arch(1)

* Own maximum likelihood program * 
program archtry
	args lnf mu a0 a1
	tempvar err ext h
	qui gen double `err'=$ML_y1-`mu'
	qui gen double `ext'=`err'[_n-1]
	qui gen double `h'=`a0'+`a1'*`ext'^2
	qui replace `lnf'=-0.5*(ln(`h') + (`err')^2/`h')
end

ml model lf archtry (return = volume) /a0 /a1
ml max
********************************************

I used the arch command to see the results I want to end up with. 

As far as I can tell, the program uses the correct syntax, and -ml check- reports that it passes the tests. However, it seems the initial values are not feasible:

The initial values are not feasible.  This may be because the initial values have been chosen poorly or because there is an error in archtry and it always returns missing no matter what the parameter values.
Stata is going to search for a feasible set of initial values. If archtry is broken, this will not work and you will have to press Break to stop the search.

Searching...
initial:       log likelihood =     -<inf>  (could not be evaluated)
searching for feasible values ...........................................................................................

could not find feasible values
r(491);

Using -ml search- gives the same error: it cannot find feasible values. Using -ml init /a0=0.05 /a1=0.01- again gives no feasible values, as does extracting the parameter vector from -reg return volume- and using it as initial values.

I thought the program could not handle the [_n-1] operator, but for every set of trial values it does generate an `h' and an `lnf'. So now I am unsure whether the fault lies in the program or in the way I try to maximize the likelihood.
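The feasibility failure can be reproduced outside Stata: under -ml-'s lf method, every observation needs a nonmissing ln f, but `ext'[_n-1] is missing in the first observation, so `h' and `lnf' are missing there as well, and the whole evaluation is rejected for any trial values. A minimal Python sketch of the same ARCH(1) evaluator (hypothetical parameter values):

```python
import math

def arch1_loglik_per_obs(returns, mu, a0, a1):
    """Per-observation ARCH(1) log likelihoods.

    h_t = a0 + a1 * eps_{t-1}^2; the first observation has no
    lagged residual, so its contribution is missing (None), just
    as `ext'[_n-1] is missing in observation 1 in Stata.
    """
    eps = [r - mu for r in returns]
    out = []
    for t, e in enumerate(eps):
        if t == 0:
            out.append(None)  # no eps[t-1]: h_1 undefined
            continue
        h = a0 + a1 * eps[t - 1] ** 2
        out.append(-0.5 * (math.log(2.0 * math.pi) + math.log(h) + e * e / h))
    return out
```

One missing contribution is enough for lf to declare any parameter vector infeasible, which is consistent with the endless "searching for feasible values" output above.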

Any comments would be of use. Thank you in advance.

Martien Lamers
Department of Financial Economics
Ghent University

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

