



Re: st: RE: Backed Up Iteration


From   Nick Cox <[email protected]>
To   [email protected]
Subject   Re: st: RE: Backed Up Iteration
Date   Thu, 26 Jan 2012 09:20:16 +0000

I can't really add to what I said previously, but the diagnosis that
the model is too much of a strain for the data remains in play. The
Statalist FAQ advises that you give precise code and precise results
for precise advice, and I can only repeat that here.

Nick

On Thu, Jan 26, 2012 at 2:54 AM, Anna-Leigh Stone
<[email protected]> wrote:

> I am led to believe the model is suited to the data. I was able to
> run exactly the same code on a similar dataset a month ago and
> everything ran well, so it is something within this dataset. It has
> the same number of variables but three times as many observations as
> the other dataset. Could that be the issue? I tried simplifying the
> model: it runs with 3 of the 15 variables included, but when I add a
> fourth variable, regardless of which variable I add, it gets hung up
> again.
>
>
> On Wed, Jan 25, 2012 at 2:57 PM, Nick Cox <[email protected]> wrote:
>> Often and often. It is very difficult to tell from this kind of report whether you are trying to fit a model that is a bad idea for your data or the fitting process is just a bit tricky (or both). As you have several predictors, fitting an overcomplicated model really is a possibility, whatever the scientific (or non-scientific, e.g. economic) grounds for wanting to use them all. You could try tuning the -ml- engine by e.g. changing -technique()-. Simplifying the model first and then introducing complications gradually can sometimes isolate problematic predictors.
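
[Editor's note: a minimal sketch of what this tuning and gradual model
building might look like. The outcome y and predictors x1-x15 are
illustrative assumptions, not names from the thread; gck is the cluster
variable mentioned in the original post.]

    * try a different -ml- algorithm in place of the default Newton-Raphson
    glm y x1-x15, family(binomial) link(probit) vce(cluster gck) technique(bfgs)

    * or alternate between algorithms every few iterations
    glm y x1-x15, family(binomial) link(probit) vce(cluster gck) technique(nr 5 bfgs 5)

    * start small and add predictors one at a time to isolate the troublemaker
    glm y x1 x2 x3, family(binomial) link(probit) vce(cluster gck)
    glm y x1 x2 x3 x4, family(binomial) link(probit) vce(cluster gck)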


Anna-Leigh Stone

>> I am using Stata 12.0 and I am attempting to run a fractional probit
>> regression with the command: glm dependent independent, fa(bin)
>> link(probit) cluster(gck). I have made sure that the dependent
>> variable values are not negative. Of the 69,900 dependent variable
>> observations, 1,500 fall at exactly 0 or 1. Regardless of whether I
>> leave the 0 and 1 values in or take them out, I get the same log
>> likelihood at each iteration, each marked "backed up". It continues
>> like this and does not converge until I break it.
>> Has anyone else had this problem and found a solution? I do have
>> several variables, but they are all necessary to my regression. I have
>> also tried the -difficult- option, but that does not work either.
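
[Editor's note: for context, a minimal sketch of the command described
above, written out in full. The names y and x1-x15 are illustrative
assumptions; gck and -difficult- come from the post. The extra options
simply make it easier to watch what the optimizer is doing.]

    * fractional probit via -glm-; -trace- prints the coefficient vector
    * at each iteration, and -iterate()- caps the number of iterations
    glm y x1-x15, family(binomial) link(probit) vce(cluster gck) ///
        difficult trace iterate(50)

    * -irls- switches from the default ml optimizer to iteratively
    * reweighted least squares, which can behave differently here
    glm y x1-x15, family(binomial) link(probit) vce(cluster gck) irls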


