 Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.

# Re: st: error with optimize(), d1-d2 evaluators and bhhh technique

From: jpitblado@stata.com (Jeff Pitblado, StataCorp LP)
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: error with optimize(), d1-d2 evaluators and bhhh technique
Date: Thu, 05 Dec 2013 10:49:59 -0600

Christophe Kolodziejczyk <ck.statalist@gmail.com> is trying to use
-technique(bhhh)- with evaluator type d1:

> I've tried to estimate a linear regression model by MLE. I have used
> Mata and the optimize() routine. Furthermore, I have tried to use the
> BHHH algorithm, but the program failed with the following error
> message:
>
> type d1 evaluators are not allowed with technique bhhh
> r(111);
>
> I do not understand why I got this error message. The documentation
> mentions that you cannot use BHHH with a d0 evaluator, since the
> Hessian is based on the gradient of the likelihood function, which
> has to be computed analytically. Any idea what I am doing wrong or
> what I am missing?
>
> I have included the code below. The program works with technique nr
> (but strangely not with BFGS, since it does not find an optimum).

The BHHH technique needs the observation-level contributions to the gradient,
and thus cannot work with any of the d0, d1, or d2 evaluator types.
Christophe will need to change the evaluator to conform to a gf0, gf1, or
gf2 evaluator type.  Here is what I did to Christophe's code to make that
happen:
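To see why BHHH needs those observation-level contributions: it approximates
the negative Hessian by the outer product of the per-observation scores, so a
single summed gradient (all a d-type evaluator returns) is not enough.  A
minimal NumPy sketch of the idea (an illustration, not part of the original
post or of Mata's implementation):

```python
import numpy as np

def bhhh_step(G):
    """One BHHH ascent direction from G, the n x k matrix whose row i is
    the gradient of observation i's log-likelihood contribution -- exactly
    what a gf-type evaluator supplies and a d-type evaluator cannot."""
    g = G.sum(axis=0)              # total gradient (what d1 would return)
    H = G.T @ G                    # outer-product-of-gradients approximation
    return np.linalg.solve(H, g)   # direction d solving (G'G) d = g

# tiny check on simulated per-observation scores
rng = np.random.default_rng(0)
G = rng.normal(size=(100, 3))
d = bhhh_step(G)
```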

```
***** BEGIN:
clear all
set matastrict on

mata:

nobs = 100000
a = .25
b = 0.5
one = J(nobs, 1, 1)

rseed(10101)
x = invnormal(uniform(nobs, 1))
v = invnormal(uniform(nobs, 1))
y = a :+ x*b + v

z = one, x

void mylnf(
	real	scalar		todo,
	real	rowvector	p,
	real	matrix		x,
	real	colvector	y,
	real	colvector	lnf,
	real	matrix		g,
	real	matrix		H)
{
	real	scalar	nobs
	real	scalar	k
	real	vector	beta
	real	scalar	lsigma
	real	scalar	sigma
	real	vector	u

	nobs	= rows(x)
	k	= cols(x)
	beta	= p[., (1..k)]'
	lsigma	= p[1, k+1]
	sigma	= exp(lsigma)
	u	= (y - x*beta)/sigma

	// observation-level log-likelihood contributions
	lnf	= -lsigma :- 0.5*u:^2

	if (todo) {
		// observation-level gradient contributions, as required
		// by gf-type evaluators (and by -technique(bhhh)-)
		g		= J(nobs, k+1, 0)
		g[., 1..k]	= u:*x/sigma
		g[., k+1]	= -1 :+ u:*u
		if (todo > 1) {
			// overall Hessian; only used by gf2 evaluators
			H[1..k, 1..k]	= -cross(x, x)/(sigma^2)
			H[k+1, k+1]	= -2*cross(u, u)
			H[k+1, 1..k]	= cross(u, x)
			_makesymmetric(H)
		}
	}
}

S = optimize_init()
optimize_init_evaluator(S, &mylnf())
optimize_init_evaluatortype(S, "gf1")
optimize_init_params(S, runiform(1,3))
optimize_init_argument(S, 1, z)
optimize_init_argument(S, 2, y)
optimize_init_technique(S, "bhhh")
p = optimize(S)
optimize_result_params(S)
optimize_result_V(S)
end
***** END:
```
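The analytic score formulas in the evaluator (u:*x/sigma for the slope
parameters and u:^2 - 1 for lsigma) can be spot-checked against numerical
derivatives.  The following is an illustrative NumPy translation of those
formulas (an assumption for checking purposes, not part of the post), verified
by central finite differences:

```python
import numpy as np

def lnf_obs(p, X, y):
    """Per-observation log likelihood for y = X*beta + sigma*e, e ~ N(0,1),
    parameterized with lsigma = ln(sigma); matches lnf in the Mata code."""
    k = X.shape[1]
    beta, lsigma = p[:k], p[k]
    u = (y - X @ beta) / np.exp(lsigma)
    return -lsigma - 0.5 * u**2

def score_obs(p, X, y):
    """Per-observation gradient contributions; matches g in the Mata code."""
    k = X.shape[1]
    beta, lsigma = p[:k], p[k]
    u = (y - X @ beta) / np.exp(lsigma)
    return np.column_stack([u[:, None] * X / np.exp(lsigma), u*u - 1])

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = 0.25 + 0.5 * X[:, 1] + rng.normal(size=50)
p = np.array([0.2, 0.4, 0.1])

# central finite differences of the summed log likelihood
eps = 1e-6
num = np.array([(lnf_obs(p + eps*e, X, y).sum()
                 - lnf_obs(p - eps*e, X, y).sum()) / (2*eps)
                for e in np.eye(3)])
ana = score_obs(p, X, y).sum(axis=0)
```

If the analytic and numerical gradients agree, the gf1 evaluator is supplying
BHHH with correct score contributions.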

--Jeff