However, I am running into a problem with my exact model, probably
because I do not fully understand the mechanics of ML hold/unhold.
For example, is ml hold/unhold the "Stata way" of tackling a model with
an unidentified parameter, such as a panel error correction model:

model 1: y_it = theta_i*(y_i,t-1 - beta*x_i,t-1) + zeta_i*d.x_it

This is a restricted version of model 2 (with the nonlinear
restrictions gamma_i/theta_i = beta, i.e. gamma_i = theta_i*beta):

model 2: y_it = theta_i*y_i,t-1 - gamma_i*x_i,t-1 + zeta_i*d.x_it
Currently, the way I solve for the parameters of model 1 is with
iterated ML procedures: condition on beta, estimate the other
parameters, then update beta, and so on until convergence. It is easy
enough to program with nested procedures and while loops to check
convergence.
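The zigzag I have in mind looks roughly like this (a sketch only:
est_theta_zeta and est_beta are made-up names standing in for the two
conditional ml steps, and the convergence check is simplified):

    * Iterated conditional ML (sketch; the two programs are placeholders)
    local beta = 0                  // starting value for long-run coef
    local diff = 1
    while `diff' > 1e-6 {
        local beta_old = `beta'
        * step 1: condition on beta, estimate theta_i and zeta_i by ml
        est_theta_zeta, beta(`beta')
        * step 2: condition on theta_i, zeta_i, update beta by ml
        est_beta
        local beta = _b[beta]
        local diff = abs(`beta' - `beta_old')
    }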
However, isn't this the purpose of ml hold/unhold (a real,
non-rhetorical question ;) )?
When I change the example program referenced above, I cannot get any
sort of convergence. At first I thought this was because of my
panel-type model, so I changed the procedures (using auto.dta) to
estimate the (almost) equivalent of:

model 3: regress price mpg weight

using the conditional models in the example program, with the
likelihood replaced by the familiar `lnf' = ln(normd(...)) so that it
estimates OLS rather than the negative binomial.
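For concreteness, the unconditional version of that evaluator would
look something like this (a sketch, assuming the standard lf-method
setup; mynormal is a made-up program name):

    * OLS via ml: linear-form evaluator for the normal likelihood
    program define mynormal
        args lnf xb lnsigma
        * normd() is the standard normal density; subtracting `lnsigma'
        * rescales it to the density of y given xb and sigma
        quietly replace `lnf' = /*
            */ ln(normd(($ML_y1-`xb')/exp(`lnsigma'))) - `lnsigma'
    end

    ml model lf mynormal (xb: price = mpg weight) (lnsigma:)
    ml maximize

The conditional versions would then hold one equation fixed (via ml
hold) while maximizing over the other.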
The problem is that when I condition on anything other than the
constant, I cannot get the model to estimate anything near the correct
results (I'm talking orders of magnitude off, not significant digits).
I believe the problem is the way I am passing the estimated `xb' to
the conditional likelihood, but I really don't know at this point.
Any takers??? I'll clip and post some code if in fact the problem can
be solved elegantly with ml hold/unhold.