Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


From | GORLE Steven <steven.gorle@telenet.be>
To | statalist@hsphsun2.harvard.edu, aschaich@smail.uni-koeln.de
Subject | Re: AW: st: RE: AW: Re:
Date | Wed, 24 Feb 2010 14:27:42 +0100

Dear,

In Linux this is less problematic, e.g. -set memory 1500m-. See the FAQ http://www.stata.com/support/faqs/data/setmemory.html
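The advice above can be sketched as follows (a minimal illustration for Stata 11 and earlier, where -set memory- applies; later releases manage memory automatically):

```stata
* check the current memory allocation
query memory

* raise the data area to 1.5 GB for this session
set memory 1500m

* or make the setting persist across sessions
set memory 1500m, permanently
```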

Kind regards,
Steven Gorlé

On 24/02/2010 14:12, aschaich@smail.uni-koeln.de wrote:

Another reason for splitting up the dataset is that it's simply huge: it has about 250,000 observations, whereas the sub-datasets have only about 3,000. Would there also have been a way to process such a file, for example by assigning more memory to Stata? I suppose I have to learn all that the hard way... And thanks for the other hints; I've implemented a double loop now and thus simplified my do-file considerably!

Best, Andi

Quoting Martin Weiss <martin.weiss1@gmx.de>:

> "The reason why I have so many different datasets is that I read somewhere that Stata cannot filter observations properly, i.e. without deleting the remaining dataset."
>
> Not so, as you correctly guessed. Use the -if- qualifier!
>
> "With regard to the centile(5 50 95) command, it has only been a bit of laziness that I didn't try to omit the 5 and 50 centiles - as soon as it worked, I was happy :-)"
>
> Omit 5 and 50, and then use -post internal (r(lb_1)) (r(c_1)) (r(ub_1))- for the -postfile-.
>
> HTH
> Martin

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of aschaich@smail.uni-koeln.de
Sent: Wednesday, 24 February 2010 01:07
To: statalist@hsphsun2.harvard.edu
Subject: RE: st: RE: AW: Re:

As I said in my first post, I'm an absolute Stata rookie. The reason I have so many different datasets is that I read somewhere that Stata cannot filter observations properly, i.e. without deleting the remaining dataset. That's probably nonsense... So in order to derive conditional statements, I split the original (huge) dataset into numerous sub-datasets. With regard to the centile(5 50 95) command, it was only a bit of laziness that I didn't try to omit the 5 and 50 centiles - as soon as it worked, I was happy :-) And by the way: thanks a lot for the hint with the loop - I'm going to try it right away! Here too, someone told me that looping over a filename wasn't possible...
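Martin's point about the -if- qualifier can be illustrated with a short sketch (using the shipped auto dataset and hypothetical subset conditions, not data from this thread):

```stata
* no need to split the data into sub-datasets:
* restrict commands to a subset with -if-
sysuse auto, clear

* summarize only foreign cars, leaving the full dataset intact
summarize price if foreign == 1

* -centile- accepts the same qualifier
centile price if foreign == 1, centile(95)
```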
Best, Andi

Quoting Martin Weiss <martin.weiss1@gmx.de>:

> It is good to hear you are satisfied with the solution, but I am not quite sure what your data structure looks like. You request three centiles from Stata with your call -centile PF_norm, centile(5 50 95)-, but you store only one of them, the 95% one, via -postfile-. This is your prerogative, but it seems inefficient to me. What are the 5% and 50% quantile requests good for in your call?
>
> You said in an earlier post that you have to repeat this process frequently, so you may want to know that you can employ a loop to call your datasets:
>
> *******
> forv i=1/2 {
>     use "C:\...\ALL_complete_1.`i'.dta"
>     //... your other commands
> }
> *******
>
> The endpoint for the index here is 2; you can enter the appropriate number yourself. Does each of those datasets really contain only one interesting variable? Why are they dispersed across many datasets (just being curious)?
>
> HTH
> Martin

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of aschaich@smail.uni-koeln.de
Sent: Tuesday, 23 February 2010 19:03
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: AW: Re:

Hey guys, sorry for having bothered you again - I found the solution in the meantime! In case someone is interested, here's the syntax:

log using test, replace
use "C:\...\ALL_complete_1.1.dta"
centile PF_norm, centile(5 50 95)
postfile internal lower_ci95 centile_95 upper_ci95 using "filename", replace
post internal (r(lb_3)) (r(c_3)) (r(ub_3))
clear
use "C:\...\ALL_complete_1.2.dta"
centile PF_norm, centile(5 50 95)
post internal (r(lb_3)) (r(c_3)) (r(ub_3))
postclose internal
clear
log close

Still thanks for the help you gave me so far!
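Combining Martin's loop suggestion with the -postfile- solution above gives a pattern like the following sketch (the truncated path and the endpoint 2 follow the thread's example; "results" is a hypothetical filename; -centile(95)- means the stored results are r(lb_1), r(c_1), and r(ub_1)):

```stata
* collect the 95th centile and its confidence bounds from each sub-dataset
postfile internal lower_ci95 centile_95 upper_ci95 using "results", replace
forvalues i = 1/2 {
    use "C:\...\ALL_complete_1.`i'.dta", clear
    centile PF_norm, centile(95)
    post internal (r(lb_1)) (r(c_1)) (r(ub_1))
}
postclose internal
```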
Andreas

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/


**References**:

- **[no subject]** *From:* aschaich@smail.uni-koeln.de
- **st: Re:** *From:* "Karen Wright" <Karen.Wright@icr.ac.uk>
- **st: AW: Re:** *From:* "Martin Weiss" <martin.weiss1@gmx.de>
- **st: RE: AW: Re:** *From:* "Abrams, Judith" <abramsj@karmanos.org>
- **Re: st: RE: AW: Re:** *From:* aschaich@smail.uni-koeln.de
- **Re: st: RE: AW: Re:** *From:* aschaich@smail.uni-koeln.de
- **RE: st: RE: AW: Re:** *From:* "Martin Weiss" <martin.weiss1@gmx.de>
- **RE: st: RE: AW: Re:** *From:* aschaich@smail.uni-koeln.de
- **AW: st: RE: AW: Re:** *From:* "Martin Weiss" <martin.weiss1@gmx.de>
- **Re: AW: st: RE: AW: Re:** *From:* aschaich@smail.uni-koeln.de
