


st: do variables not used in a process take up memory while a process runs?

From   Doug Hess <>
Subject   st: do variables not used in a process take up memory while a process runs?
Date   Wed, 4 May 2011 11:03:46 -0400

I'm running models with 30 predictors on 150,000 records using
-xtmelogit- to examine random intercepts. As you can imagine, this takes
a long time to run. I started a model last Thursday night (US east
coast) and it didn't produce results until Queen Elizabeth II stepped
into Westminster Abbey Friday morning (perhaps propitious for my
research, depending on your belief in the divine right of monarchs, but
it was nine hours after Stata started the process).

So, I'm looking for any tricks to speed things up (using Stata 11/IC
on a Windows 7 PC with a 2.66 GHz Intel processor and 2.96 GB of usable
RAM). I tried the Laplacian option, but it didn't seem to speed things
up, and I'm not sure whether the estimates are considered reliable if
you use that option.
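For reference, a minimal sketch of the kind of call described above. The variable names (y, x1-x30, region) are placeholders, since the actual model is not given in the post; -laplace- requests the Laplacian approximation, and -intpoints()- lowers the number of adaptive quadrature points from the default of 7, trading accuracy for speed:

```stata
* Hypothetical two-level random-intercept logit; y, x1-x30, and region
* stand in for the variables actually used.
xtmelogit y x1-x30 || region: , laplace

* Alternative: keep adaptive quadrature but use fewer integration points.
xtmelogit y x1-x30 || region: , intpoints(3)
```

A common workflow is to fit with -laplace- or a small -intpoints()- first, then refit the final model at the default precision to check that the estimates are stable.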

One question: if I first -keep- only the variables in the model, does
this speed things up? That is, does Stata work through all the
variables in the dataset as it runs, or only those that are in the
model?
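One way to try this empirically is to -keep- only the model variables before estimation and time both runs. Again, the variable names here are placeholders; -preserve- and -restore- bring the full dataset back afterward:

```stata
* A minimal sketch: drop unused variables so only the model variables
* occupy memory during estimation.
preserve
keep y x1-x30 region
xtmelogit y x1-x30 || region:
restore
```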

Second question: if I use a USB memory stick for Windows ReadyBoost,
does this help speed Stata up for such processes?

I'm open to other thoughts. Or am I better off exporting the data to
other specialized software for hierarchical modeling? (No offense
to Stata.)

Thank you.

