Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


From: Michael Ochsner <mixi@sunrise.ch>

To: statalist@hsphsun2.harvard.edu

Subject: st: xtmixed backed up because of big sample size?

Date: Thu, 2 Feb 2012 23:21:10 +0100

Hi all,

I use a quite large data set comprising 103,491 persons clustered in 75 countries, with 417 to 3,025 observations per country (rho = 0.18 in the base model "xtreg depvar, mle"). Using -xtreg, mle- to fit a random-intercept model works fine. However, I use survey data and want to specify a probability weight, which -xtreg- does not support.

When I fit exactly the same model with -xtmixed-, I get only 2 iterations, of which the second has the same likelihood as the first, accompanied by the comment "(backed up)". This happens with or without probability weights. It even happens for the base model "xtmixed depvar || country:" (it does not matter which depvar I use; I always get the backed-up comment). If I specify random slopes, sometimes the problem is solved and sometimes the backed-up comment appears, depending on the variable.

A colleague told me that huge data sets can sometimes lead to convergence problems when using maximum likelihood, because the first iteration already produces perfect estimates. So I tried the same model with a random sample of my data set (roughly 100 observations per country), and the problem disappeared, which supports his argument.

Two questions remain:

1) Can I do something to get the -xtmixed- command to converge, other than reducing the sample considerably? Is my specification wrong?

2) Would an interpretation of the backed-up solution be fine? (The unweighted backed-up -xtmixed- solution yields exactly the same estimates as the converged -xtreg, mle- solution, so in this case it would do no harm to interpret those estimates.) If the backed-up comment indeed appears because the first iteration was already the perfect one, it should be safe. However, the comment could well have another cause.

Thanks for any help, and sorry for the long post.

Michael

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
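[For reference, a sketch of things sometimes tried in this situation: -xtmixed- accepts the standard maximize options (e.g. -difficult-, -technique()-, -emiterate()-), and Stata 12's -xtmixed- supports level-specific probability weights. The variable names (depvar, x1, x2, wt1, wt2) are placeholders, not from the original post:]

. * Try a different optimization technique and the -difficult- option:
. xtmixed depvar x1 x2 || country:, technique(bfgs) difficult

. * Allow more EM iterations before the gradient-based maximizer takes over:
. xtmixed depvar x1 x2 || country:, emiterate(100)

. * Survey pweights at both levels (wt1 = person-level, wt2 = country-level):
. xtmixed depvar x1 x2 [pw = wt1] || country:, pweight(wt2) pwscale(size)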
