Statalist



Re: st: Stata's ml stucks


From   Austin Nichols <[email protected]>
To   [email protected]
Subject   Re: st: Stata's ml stucks
Date   Wed, 28 Jan 2009 19:57:57 -0500

Stas et al.--
I think Sergiy is solving for roots of F(x)-A only, and only in the
"nice" part of the function F(x) where x>0, F<0, and F looks
differentiable.  Of course, we have not actually seen the functional
form...  and plenty of functions that look smooth are nowhere
differentiable.

I think Mata's optimizer is probably the best option for this problem,
if one has Stata 10.  Roger Harbord says "Mata -optimize()- is also
designed as a maximizer (or minimizer) rather than a root-finder", but
note that Mata's optimize() can minimize (F(x)-A)^2 just fine, which is
the same as finding a root of F(x)-A.  But Sergiy seems to be limited
to Stata 9, so optimize() is not an option for him.  I have not
compared the performance of Ben Jann's -mm_root()- [findit moremata]
to -ridder- [findit ridder], and I would be interested to hear others'
experience...
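
For anyone on Stata 10, here is a minimal sketch of the optimize()
route.  The F() below is made up (we have not seen the actual
functional form), and A = -2 is an arbitrary target:

mata:
// made-up stand-in for the unknown F(): smooth, negative, and
// differentiable for x>0
real scalar F(real scalar x) return(exp(-x) - 1/x)

// d0 evaluator: the objective is the squared deviation (F(x)-A)^2
void sqdev(todo, x, A, v, g, H)
{
        v = (F(x) - A)^2
}

A = -2
S = optimize_init()
optimize_init_evaluator(S, &sqdev())
optimize_init_evaluatortype(S, "d0")
optimize_init_which(S, "min")
optimize_init_argument(S, 1, A)
optimize_init_params(S, .4)        // start in the "nice" region x>0
x = optimize(S)
x, F(x)                            // the minimizer and F() at that point
end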

Perhaps -moremata- needs a web page like -estout-...
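
Meanwhile, in Stata 9 the -mm_root()- route would look something like
the following.  I am writing the call from memory of the -moremata-
help file, so double-check the argument order there; F() and A are the
same made-up stand-ins as above:

mata:
// deviation function whose zero we want: F(x) - A
real scalar Fdev(real scalar x, real scalar A) return(exp(-x) - 1/x - A)

A  = -2
// the bracket [.01, 10] straddles the root: Fdev is negative at .01
// and positive at 10
rc = mm_root(x=., &Fdev(), .01, 10, 0, 1000, A)
rc, x, Fdev(x, A)                  // rc==0 should mean success
end

No derivatives are involved, so kinks that upset -ml- should not
matter, provided the interval actually brackets a sign change.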

On Wed, Jan 28, 2009 at 4:51 PM, Stas Kolenikov <[email protected]> wrote:
> One problem I see with your function is that it is not differentiable.
> Stata's -ml maximize- and -optimize- are really intended for nice,
> smooth functions with well-separated local maxima; when they are given
> something like yours, they get crazy estimates of the derivatives that
> either throw the algorithm off all the time or at least make it fail
> to recognize the convergence point if the convergence criteria involve
> gradients (the -gtolerance- and -nrtolerance- options of -maximize-).
> I thought Russian mathematical training was good enough to teach those
> issues ;).
>
> Also, -difficult- is next to useless with 1D optimization, as far as I
> understand it, since its main job is to break the parameter space down
> into (almost flat) ridges and nicely convex, (approximately) quadratic
> components, so as not to invert an ill-conditioned matrix in the
> former situation.
>
> I'd say that in a situation like yours simulated annealing is the
> thing to go with if you do minimization.  If you do root finding, you
> would need to work with a pretty rough algorithm that does not involve
> derivatives.  I don't know what -ridder- relies on, but if it
> converges, use the answers and don't touch anything :))
>
> And yes, at some point I fooled around with the $ML_b vector, but I
> would not recommend doing that, since you don't know exactly how -ml-
> uses it unless you have reverse-engineered everything it does.  Given
> that you've reverse-engineered some of the graphics and tabulated
> output, I wouldn't be surprised if you had :))
>
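
On Stas's point about rough, derivative-free root finding: even
without -ridder- or -moremata-, plain bisection is a dozen lines of
Mata (Stata 9 is enough).  A sketch, again with the made-up F() and
A = -2 from above:

mata:
// bisection for F(x) - A = 0 on a bracketing interval [lo, up];
// only signs are used, never derivatives
real scalar bisect(real scalar A, real scalar lo, real scalar up,
    real scalar tol)
{
        real scalar mid, flo, fmid
        while (up - lo > tol) {
                mid  = (lo + up)/2
                flo  = exp(-lo)  - 1/lo  - A    // F(lo)  - A
                fmid = exp(-mid) - 1/mid - A    // F(mid) - A
                if (flo*fmid <= 0) up = mid     // sign change in [lo, mid]
                else               lo = mid     // sign change in [mid, up]
        }
        return((lo + up)/2)
}

bisect(-2, .01, 10, 1e-8)
end

Slow but hard to break; if speed matters, that is the sort of thing
-mm_root()- does better.
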
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


