Stata 15 help for ml_hold

Title

[P] ml hold -- Using ml recursively

Syntax

ml hold [, noisily ]

ml unhold [, noisily ]

Description

ml hold and ml unhold are commands for ml programmers. They provide the tools required to call ml model recursively.

ml hold renames all global macros, scalars, matrices, and variables created by the ml model command. This allows you to use ml to optimize one likelihood during the optimization of another likelihood.

ml unhold restores all global macros, scalars, matrices, and variables renamed by ml hold. This returns Stata to the state defined by the previous ml model command, so you can continue to optimize the previous likelihood, possibly using the results of the just-completed ml optimization.
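For example, a likelihood evaluator that itself must call ml model would bracket the inner optimization with ml hold and ml unhold. The following sketch is merely illustrative (myinner_lf, MY_y, and MY_x are hypothetical names); the Remarks below give a complete worked example.

        ml hold                                 // set aside the current ml model environment
        quietly ml model lf myinner_lf          ///
                (xb: $MY_y = $MY_x), maximize
        local ll_inner = e(ll)                  // keep whatever you need from the inner fit
        ml unhold                               // restore the previous ml model environment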

Option

noisily causes both ml hold and ml unhold to display messages related to each global macro, scalar, matrix, and variable created/modified by the ml model command. This option is available for debugging purposes.
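For instance, while debugging you might type (purely illustrative)

        . ml hold , noisily
          ...
        . ml unhold , noisily

to see exactly which objects are renamed and later restored.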

Remarks

To illustrate, we will fit the negative binomial distribution using the method of profile likelihoods instead of full maximum likelihood estimation. Thus we will optimize the beta coefficients while treating the alpha parameter as a nuisance parameter.
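As a brief sketch of the algebra (not part of the original entry), write mu_j = exp(x_j'beta) and m = 1/alpha. The per-observation log likelihood coded in the evaluator below, and the profile log likelihood that is maximized over ln(alpha), are

        \ln L_j(\beta \mid \alpha) = \ln\Gamma(m+y_j) - \ln\Gamma(y_j+1) - \ln\Gamma(m)
                - m \ln(1+\alpha\mu_j) - y_j \ln\!\Bigl(1+\tfrac{1}{\alpha\mu_j}\Bigr)

        \ell_p(\alpha) = \max_{\beta} \sum_j \ln L_j(\beta \mid \alpha)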

Treating alpha as a nuisance parameter (we will actually be optimizing ln(alpha)), suppose we have a candidate value for it. Our conditional likelihood evaluator is then

program mynbreg_lf
        version 15.1
        args lnf xb

        tempvar m p

        local y $ML_y1
        local lnalpha $MY_lnalpha
        gen double `m' = exp(-$MY_lnalpha)

        quietly replace `lnf' = lngamma(`m'+`y')                 ///
                                - lngamma(`y'+1)                 ///
                                - lngamma(`m')                   ///
                                - `m'*ln(1+exp(`xb'+`lnalpha'))  ///
                                - `y'*ln(1+exp(-`xb'-`lnalpha'))
end

Here we are assuming that the value of ln(alpha) is saved in the global macro MY_lnalpha (alternatively, $MY_lnalpha could contain the name of a variable or scalar); everything else in mynbreg_lf is standard ml programming.
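For instance, any of the following would do (the values and names here are purely illustrative):

. global MY_lnalpha = ln(.5)            // a numeric value
. scalar mylna = ln(.5)
. global MY_lnalpha mylna               // the name of a scalar
. generate double lna = ...
. global MY_lnalpha lna                 // the name of a variable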

Now if we have a candidate for the value of alpha, we could interactively estimate the beta coefficients (conditionally on this candidate alpha) by typing:

. use ...
. global MY_lnalpha = ...
. ml model lf mynbreg_lf (beta: yvar = xvars) , options
. ml max , options

This is essentially what the likelihood evaluator for alpha will do. The only detail we must remember is to ml hold the current ml model environment before we optimize the conditional likelihood. The likelihood evaluator for optimizing ln(alpha) is

program mynbreg_alpha_d0
        version 15.1
        args todo b lnf
        tempvar lnalpha
        mleval `lnalpha' = `b', eq(1)
        local y $ML_y1
        global MY_lnalpha `lnalpha'

        ml hold
        quietly ml model lf mynbreg_lf                  ///
                (xb: `y' = $MY_x, $MY_offset),          ///
                maximize
        ml unhold

        scalar `lnf' = e(ll)
end

Notice that we use global macros to pass the names of the predictors and the offset option associated with the beta coefficients. We can now fit the negative binomial model using the method of profile likelihoods by calling ml model twice: first to find the maximum likelihood estimate of alpha, and then to estimate the beta coefficients conditional on that MLE.

. global MY_x ...                               // x vars
. global MY_offset offset(...)                  // offset option
. ml model d0 mynbreg_alpha_d0 (y =), maximize ...
. tempname lnalpha
. matrix `lnalpha' = e(b)
. scalar `lnalpha' = `lnalpha'[1,1]
. global MY_lnalpha `lnalpha'
. ml model lf mynbreg_lf (y = $MY_x, $MY_offset), maximize ...
. ml display

After testing your likelihood evaluators, you could then easily create a new estimation command as described in Maximum Likelihood Estimation with Stata, 3rd Edition (Gould, Pitblado, and Sribney 2006). For example,

program mynbreg
        version 15.1
        if replay() {
                if (`"`e(cmd)'"' != "mynbreg") error 301
                Display `0'
        }
        else {
                Estimate `0'
        }
end

program Estimate, eclass
        syntax varlist [,               ///
                offset(passthru)        ///
                exposure(passthru)      ///
                noLOg                   ///
                *                       ///
        ]

        if `"`offset'"' != "" & `"`exposure'"' != "" {
                di as err ///
                        "options offset() and exposure() may not be combined"
                exit 198
        }
        mlopts mlopts diopts , `options'
        if "`log'" != "" {
                local star "*"
        }

        tempname lnalpha

        gettoken y xvars : varlist

        macro drop MY_*
        global MY_x `xvars'
        global MY_offset `offset' `exposure'

        `star' di
        `star' di as txt "Fitting profile likelihood:"
        ml model d0 mynbreg_alpha_d0 (`y' : `y' = ) ,   ///
                `mlopts' `log' maximize

        macro drop MY_*
        matrix `lnalpha' = e(b)
        scalar `lnalpha' = `lnalpha'[1,1]
        global MY_lnalpha `lnalpha'

        `star' di
        `star' di as txt "Fitting full likelihood:"
        ml model lf mynbreg_lf                          ///
                (                                       ///
                        `y' : `y' = `xvars',            ///
                        `offset' `exposure'             ///
                ) , `mlopts' `log' maximize
        macro drop MY_*

        ereturn local title "Negative binomial via profile likelihood"
        ereturn scalar lnalpha = `lnalpha'
        ereturn scalar alpha = exp(`lnalpha')
        ereturn local cmd mynbreg

        Display , `diopts'
end

program Display
        ml display
end

One final note on this example: statistically speaking, the covariance matrix (of the beta coefficients) obtained from this profile-likelihood method does not contain full-information maximum likelihood variance estimates; rather, the estimates are conditional on the observed MLE of alpha.
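If you want to gauge how much this matters for a given dataset, one simple (illustrative) check is to fit the same model with mynbreg and with official nbreg, which does report full-information maximum likelihood standard errors, and compare the two covariance matrices; the variable names below are hypothetical.

. mynbreg y x1 x2, offset(lnexposure)
. matrix V_profile = e(V)
. nbreg y x1 x2, offset(lnexposure)
. matrix V_full = e(V)
. matrix list V_profile
. matrix list V_full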

Acknowledgment

Mead Over, The World Bank, provided helpful suggestions for the implementation of ml hold and ml unhold.

