st: Endless iteration because of too many cases?


From: Lucas <[email protected]>
To: [email protected]
Subject: st: Endless iteration because of too many cases?
Date: Sun, 28 Oct 2012 22:18:28 -0700

Is it possible that with 3.8 million cases the EM algorithm (for
xtmixed) cannot converge?  And, if so, what is an appropriate
adjustment of the tolerance criterion, one that does not relax the
criterion so much that iteration stops before a maximum is actually
reached?

The gradient methods take so long per iteration that they are not
feasible to use, so I switched to the EM algorithm because I do not
need standard errors on the elements of the variance-covariance
matrix.  Alas, I have noticed that the log-likelihood statistic will
show -1234567.8 for 65-70 iterations or so, then switch to -1234567.7,
and so on.  The default tolerance is 1e-10, i.e., .0000000001 (if my
counting of zeros is correct).
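
For concreteness, the kind of call I mean (with made-up variable and
level names, and assuming I have the option names right) is something
like:

    . xtmixed y x1 x2 || schoolid: , emonly emiterate(2000) emtolerance(1e-10)

where emtolerance(1e-10) just makes the default I mentioned explicit.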

Two factors make me wonder whether my large-N sample prevents
convergence.  First, with 3.8 million cases, is it just numerically
impossible to have a change in the statistic as small as .0000000001?
That is, will even a tiny adjustment per case necessarily accumulate
to an overall change larger than .0000000001, rendering convergence
impossible by this criterion?
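
Here is the back-of-the-envelope arithmetic behind that worry (my own
rough guess about per-case rounding, not anything from the manuals):
if each of the 3.8 million per-case contributions is only good to
about 16 significant digits, the sum could wobble by something on the
order of

    . display %12.0g 3800000 * 1e-16

that is, roughly 3.8e-10, which is already larger than .0000000001.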

Second, even if convergence is possible in principle, does the
program carry enough precision for a .0000000001 difference to
register?  That is, if my statistic already has 7 digits to the left
of the decimal, is there enough precision left over to the right of
the decimal to pick up a .0000000001 change?
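
The same concern from the storage side: assuming the log likelihood
is held as a double, the gap between adjacent representable values
near -1234567.8 is roughly the value times c(epsdouble), e.g.

    . display %12.0g 1234567.8 * c(epsdouble)

which comes out around 2.7e-10, so an absolute change of .0000000001
could not even be represented at that magnitude (if I understand
double precision correctly).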

So, my first hope is to answer these questions.

If the answers indicate that convergence is impossible with this
.0000000001 criterion, then I would welcome suggestions for how to
calculate a fairer/more appropriate stopping tolerance for 3.8
million cases.

One final request.  I appreciate all suggestions, but pursuing
questions about the model specification would be a digression.
Before writing to Statalist I did a lot of checking to ensure that
the model is identified and otherwise well specified.  Given my
well-specified model, is convergence impossible?

Thanks a bunch for any insight anyone can supply.

Sam

