


Re: st: RE: RE: joinby command and memory issues


From   Austin Nichols <austinnichols@gmail.com>
To   statalist@hsphsun2.harvard.edu
Subject   Re: st: RE: RE: joinby command and memory issues
Date   Fri, 8 Oct 2010 17:41:03 -0400

Thomas <Thomas.Weichle@va.gov>:

Why are you asking for a merge variable when you are keeping only matching obs?

If my proposal about making both datasets as small as they can be before merging does not solve your problem, you can also save the larger file in a few chunks, -joinby- each chunk to the smaller file, then -append- the results together.

It would all be a lot easier if you moved to a computer that could access more memory, of course.
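The two suggestions above (slim the datasets down first, then chunk the larger file if needed) might be sketched as follows. All file names, variable names, and the chunk cutoff are hypothetical placeholders, not from the original thread:

```stata
* 1. Make both datasets as small as possible before joining:
*    read only the variables you need, then -compress- to the
*    smallest storage types before saving.
use study_id hgb_value using big.dta, clear
compress
save big_slim.dta, replace

* 2. If memory is still short, split the large file into chunks,
*    -joinby- each chunk against the small file, and -append-:
use big_slim.dta, clear
keep if study_id < 5000            // first chunk (hypothetical cutoff)
joinby study_id using small.dta, unmatched(none)
save chunk1.dta, replace

use big_slim.dta, clear
keep if study_id >= 5000           // second chunk
joinby study_id using small.dta, unmatched(none)
save chunk2.dta, replace

use chunk1.dta, clear
append using chunk2.dta
save joined.dta, replace
```

Note that with unmatched(none) the _merge() option is unnecessary, per the question above, so it is omitted here.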

On Fri, Oct 8, 2010 at 5:32 PM, Weichle, Thomas <Thomas.Weichle@va.gov> wrote:
> Does this demonstrate that using this method is limited by my system?
>
> The max memory appears to be right around 1050m.  I read in the original
> datasets, drop unnecessary variables, compress the data, and then save
> them.  After that, I perform the joinby and still see the error code.
>

> . joinby study_id using "G:\ESA_Cancer\ESA_DATA\ESA_USE\hgb0209.dta", unmatched(none) _merge(_merge)
> no room to add more variables because of width
>    An attempt was made to add a variable that would have increased the
> memory required to store an

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

