
From: [email protected] (Roberto G. Gutierrez, StataCorp)
To: [email protected]
Subject: Re: st: After non-convergence with xtmixed
Date: Wed, 02 Jan 2008 16:29:09 -0600

In response to Maarten Buis <[email protected]>, Clyde Schechter follows up
on his original question:

> I wasn't actually thinking about using the results of an uncompleted attempt
> at estimation as a final result to report in a paper. It was precisely for
> the purpose of diagnosing what is wrong with the model that I thought the
> results corresponding to the final state of the gradient-based estimation
> might be more helpful than the final EM-iteration results.
>
> In particular, in addition to the kinds of modeling problems Maarten pointed
> out, xtmixed can fail to converge because of a boundary problem when one of
> the random effects being estimated is close to zero. In some of my models
> this might be the case. But the final EM results may be fairly far from the
> correct values and could fail to display this problem, could they not?
>
> Of course, it is simple enough to re-estimate the model leaving out the
> suspected offending random effect. But the fact that the reduced model
> converges isn't really evidence that the omitted component is close to zero,
> is it? So I'm left not really knowing if the reduced model is adequate.
> That's why I thought that seeing the estimates based on the incomplete
> gradient estimation would be more helpful, because typically the log
> likelihood ratio is much bigger and I would imagine that the corresponding
> random-effect estimate would be a better way to judge if I'm up against a
> boundary problem.

While it is true that in well-conditioned problems Newton-Raphson iterations
converge faster to the optimum, in cases where you have problem likelihoods
resulting in many "not concave" or otherwise unproductive iterations, the
benefits of Newton-Raphson pretty much go out the window. When this occurs,
a Newton-Raphson (NR) iteration is not necessarily any better than one using
EM and, in fact, since NR depends on an inverse Hessian for stepping, it may
be even worse if the Hessian is near singular.
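[As a hedged illustration of Clyde's point about re-estimating without the
suspect component: one conventional check is a likelihood-ratio test of the
full model against the reduced one, bearing in mind that the usual
chi-squared reference distribution is conservative when the null value of a
variance component sits on the boundary of the parameter space. The dataset
and model below are purely illustrative, not from the original thread.]

```stata
* Illustrative sketch: is the random slope on week close to zero?
webuse pig, clear

* Full model: random intercept and random slope for week.
xtmixed weight week || id: week, covariance(unstructured)
estimates store full

* Reduced model: random intercept only.
xtmixed weight week || id:
estimates store reduced

* LR test of the variance components dropped in the reduced model.
* Stata notes that this test is conservative at the boundary.
lrtest full reduced
```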
The fact that you regularly get a greater log likelihood with NR may be
because NR benefits from using the terminal EM iteration as a starting point.
By default, -xtmixed- stops its EM iterations at 20 even if convergence has
not been achieved, but you can increase the number of EM iterations with
option -emiterate()-. Since EM iterations are much faster, using more of them
comes at little computational cost, with the benefit that you can achieve a
higher log likelihood by iterating more. Then you can have estimates that
better help you diagnose the problem with the model.

--Bobby
[email protected]

*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
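[Bobby's suggestion to raise the EM iteration cap can be sketched as follows.
This is a minimal, hypothetical example: the outcome and grouping variables
are placeholders, not from the original post; -emiterate()- and -emonly- are
the documented -xtmixed- options.]

```stata
* Sketch: give EM more iterations before the Newton-Raphson hand-off.
* y, x1, x2, region, and district are illustrative variable names.
xtmixed y x1 x2 || region: || district:, emiterate(300)

* To inspect where EM alone settles (no NR step at all), run EM only:
xtmixed y x1 x2 || region: || district:, emonly emiterate(1000)
```

A near-zero variance-component estimate in the EM-only run is one hint that
the earlier non-convergence reflects a boundary problem rather than a
numerical accident of the NR steps.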

