
Re: st: Imposing bounds on parameters estimated with -optimize-

From   "Lacy,Michael" <>
To   "" <>
Subject   Re: st: Imposing bounds on parameters estimated with -optimize-
Date   Sat, 17 Sep 2011 17:58:47 +0000

>On Thu, Sep 15, 2011 at 12:13 AM, Lacy,Michael wrote:
>> I'm using Mata's -optimize- to maximize a function. All items in the parameter vector must fall between 0 and 1; other values do not make sense. <snip> Is there some other, better way to impose bounds on parameters estimated with -optimize-?
>You can use the same tricks as in -ml-, that is, maximize with respect
>to a transformed version of the parameter. So, if you model a variance
>you can maximize with respect to ln(variance); if you model a
>probability, you can maximize with respect to logit(p); if you model a
>correlation, you maximize with respect to atanh(rho) (that is, the
>Fisher's z transformation of rho). In all these cases the transformed
>version can take any value while the original parameter respects the
>bounds. For more see:
>Hope this helps,
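[As an illustration for archive readers: the reparameterization trick described above can be sketched in a few lines. The sketch below uses Python with scipy rather than Mata's -optimize- so it runs standalone; the toy Bernoulli likelihood and all function names (invlogit, negloglik) are illustrative assumptions, not anything from this thread.]

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: 1,000 Bernoulli draws with true p = 0.3 (illustrative only).
rng = np.random.default_rng(42)
y = rng.binomial(1, 0.3, size=1000)

def invlogit(q):
    """Map the unbounded working parameter q back into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-q))

def negloglik(q, y):
    # The optimizer moves q freely over the whole real line;
    # p = invlogit(q) automatically respects the (0, 1) bounds.
    p = invlogit(q[0])
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).sum()

res = minimize(negloglik, x0=np.array([0.0]), args=(y,), method="BFGS")
p_hat = invlogit(res.x[0])   # back-transform to the bounded scale
```

[One mitigating note on the gradient concern raised below: if an analytic gradient dL/dp on the original scale is already in hand, the chain rule gives the working-scale gradient as dL/dq = dL/dp * p(1-p), so it need not be re-derived from scratch.]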

Good, thanks to you and Steve for your thoughts. I was contemplating this direction, and having it
confirmed helps. There is a bit of a catch here, though: parameterizing w.r.t. the logit leads to a
complicated and computationally intensive log likelihood and gradient for my problem, *much* more so
in the case of the gradient. The clumsy trap I had tried (check parameters at the top of the
evaluator function and reset as necessary) is probably less damaging to the efficiency of the
algorithm, so I'd like to try and compare both approaches.

So, thinking more about the trapping-bad-parameters approach: ideally, one would want to do this at
the point in the code at which -optimize- resets parameters for the next iteration. Since the
internals of -optimize- aren't accessible to users(?), one can only trap and reset parameters inside
the evaluator function. So -optimize- gets a changed parameter vector back from the evaluator
function, which might or might not prevent -optimize- from creating new out-of-range parameter
values, which it would then send back to the evaluator function. This does not seem good, and
puzzles me. I thought trapping out-of-bounds parameters was a pretty standard feature of
optimization algorithms (e.g., in R, where one can feed an optimization routine bounds for the
parameters). Is there some other way to do this in Mata than what I have tried, which for some
cases (possibly mine) might be a better approach than reparameterizing?
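[For comparison, the R-style bounds mentioned above correspond to box-constrained optimizers such as L-BFGS-B, where the optimizer itself keeps every trial point inside the box, so the evaluator never has to reset parameters. A minimal Python sketch, with a toy likelihood and names that are my assumptions rather than anything from this thread:]

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: 500 Bernoulli draws with true p = 0.85 (illustrative only).
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.85, size=500)

def negloglik(p, y):
    # Clip in case the optimizer probes the closed boundary, where the
    # log likelihood is undefined.
    pc = np.clip(p[0], 1e-10, 1.0 - 1e-10)
    return -(y * np.log(pc) + (1.0 - y) * np.log(1.0 - pc)).sum()

# bounds= keeps every trial value of p inside [0, 1]; the constraint is
# enforced by the algorithm, not by ad hoc resets in the evaluator.
res = minimize(negloglik, x0=np.array([0.5]), args=(y,),
               method="L-BFGS-B", bounds=[(0.0, 1.0)])
```

[In R the equivalent is optim(..., method = "L-BFGS-B", lower = 0, upper = 1), which appears to be the facility the paragraph above has in mind.]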


Mike Lacy
Dept. of Sociology
Colorado State University
Fort Collins CO, USA
