Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.


From: "Forshee, Richard" <Richard.Forshee@fda.hhs.gov>

To: <statalist@hsphsun2.harvard.edu>

Subject: RE: AW: st: RE: AW: Re:

Date: Wed, 24 Feb 2010 14:54:53 -0500

Andi,

Rather than "learning it the hard way," you might consider taking some of the Stata NetCourses. I've taken several, and they are a great way to learn Stata.

Good luck,
Rich

Richard A. Forshee, Ph.D.
Senior Risk Assessment Expert
Office of Biostatistics & Epidemiology
FDA - Center for Biologics Evaluation and Research

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss
Sent: Wednesday, February 24, 2010 8:21 AM
To: statalist@hsphsun2.harvard.edu
Subject: AW: AW: st: RE: AW: Re:

<>

"Would there also have been a possibility for processing such a file, for example by assigning more memory to Stata?"

That depends on your machine, of course, but 250,000 observations should be within reach. Type -des, short-, and Stata gives you feedback on how much of its memory you are currently using.

HTH
Martin

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of aschaich@smail.uni-koeln.de
Sent: Wednesday, February 24, 2010 2:12 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: AW: st: RE: AW: Re:

Another reason for splitting up the dataset is that it is simply huge: it has about 250,000 observations, whereas the sub-datasets have only about 3,000. Would it also have been possible to process such a file, for example by assigning more memory to Stata? I suppose I will have to learn all that the hard way...

And thanks for the other hints; I've implemented a double loop now and thus simplified my do-file considerably!

Best,
Andi

Quoting Martin Weiss <martin.weiss1@gmx.de>:

> <>
>
> "The reason why I have so many different datasets is that I read
> somewhere that Stata cannot filter observations properly, i.e. without
> deleting the remaining dataset."
>
> Not so, as you correctly guessed. Use the -if- qualifier!
> > > " With regard to the centile(5 50 95) command, it has only been a bit of > laziness > that I didn´t try to omit the 5 and 50-centiles - as soon as it worked, I > was > happy :-)" > > Omit 5 and 50, and then use - post internal (r(lb_1)) (r(c_1)) (r(ub_1))- > for the -postfile- > > > > HTH > Martin > > > -----Ursprüngliche Nachricht----- > Von: owner-statalist@hsphsun2.harvard.edu > [mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von > aschaich@smail.uni-koeln.de > Gesendet: Mittwoch, 24. Februar 2010 01:07 > An: statalist@hsphsun2.harvard.edu > Betreff: RE: st: RE: AW: Re: > > As I told in my first post, I´m an absolute Stata rookie. > The reason why I have so many different datasets is that I read somewhere > that > Stata cannot filter observations properly, i.e. without deleting the > remaining > dataset. That´s probably nonsense... So in order to deduce conditional > statements, I split the original (huge) dataset into numerous sub-datasets. > > With regard to the centile(5 50 95) command, it has only been a bit of > laziness > that I didn´t try to omit the 5 and 50-centiles - as soon as it worked, I > was > happy :-) > > And by the way: Thanks a lot for the hint with the loop - I´m gonna try it > right > away! Here too, someone told me that looping within a filename wasn´t > possible... > > > Best, > Andi > > > Zitat von Martin Weiss <martin.weiss1@gmx.de>: > > > > > <> > > > > It is good to hear you are satisfied with the solution, but I am not quite > > sure what your data structure looks like. You request three centiles from > > Stata with your call -centile PF_norm, centile(5 50 95)-, but you store > only > > one of them, the 95% one, via -postfile-. This is your prerogative, but it > > seems inefficient to me. What are the 5% and 50% quantile requests good > for > > in your call? 
> > You said in an earlier post that you have to repeat this process
> > frequently, so you may want to know that you can employ a loop to
> > call your datasets:
> >
> > *******
> > forv i=1/2{
> > use "C:\...\ALL_complete_1.`i'.dta"
> > //... your other commands
> > }
> > *******
> >
> > The endpoint for the index here is 2; you can enter the appropriate
> > number yourself.
> >
> > Does each of those datasets really contain only one interesting
> > variable? Why are they dispersed across many datasets (just being
> > curious)?
> >
> > HTH
> > Martin
> >
> > -----Original Message-----
> > From: owner-statalist@hsphsun2.harvard.edu
> > [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of
> > aschaich@smail.uni-koeln.de
> > Sent: Tuesday, February 23, 2010 7:03 PM
> > To: statalist@hsphsun2.harvard.edu
> > Subject: Re: st: RE: AW: Re:
> >
> > Hey guys,
> >
> > sorry for having bothered you again; I found the solution in the
> > meantime! In case someone is interested, here's the syntax:
> >
> > log using test, replace
> > use "C:\...\ALL_complete_1.1.dta"
> > centile PF_norm, centile(5 50 95)
> > postfile internal lower_ci95 centile_95 upper_ci95 using "filename", replace
> > post internal (r(lb_3)) (r(c_3)) (r(ub_3))
> > clear
> > use "C:\...\ALL_complete_1.2.dta"
> > centile PF_norm, centile(5 50 95)
> > post internal (r(lb_3)) (r(c_3)) (r(ub_3))
> > postclose internal
> > clear
> > log close
> >
> > Still, thanks for the help you gave me so far!
> > Andreas

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
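Martin's point about the -if- qualifier is worth sketching out, since it removes the original reason for splitting the data at all. In this minimal sketch, the subgroup variable `group` is hypothetical (the thread never names the variable that distinguishes the sub-datasets); only `PF_norm` and the truncated path come from the posts:

```stata
* Subgroup statistics without splitting the dataset: -if- restricts a
* command to matching observations and leaves the rest of the data in
* memory. "group" is a hypothetical subgroup marker.
use "C:\...\ALL_complete_1.1.dta", clear
centile PF_norm if group == 1, centile(95)

* Or run the command once per subgroup found in the data:
levelsof group, local(levels)
foreach g of local levels {
    centile PF_norm if group == `g', centile(95)
}
```

-levelsof- collects the distinct values of `group` into a local macro, so no subgroup has to be listed by hand.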
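Combining Martin's two hints (the -forvalues- loop, and requesting only the 95th centile so that its results land in r(lb_1), r(c_1), and r(ub_1)), Andreas's two-dataset script condenses to a sketch like the following; the paths and the posted file name are the placeholders from the thread, not real locations:

```stata
* One loop over the sub-datasets, posting only the 95th centile.
* With a single requested centile, r(lb_1), r(c_1), and r(ub_1)
* hold its lower confidence bound, the centile, and the upper bound.
log using test, replace
postfile internal lower_ci95 centile_95 upper_ci95 using "filename", replace
forvalues i = 1/2 {
    use "C:\...\ALL_complete_1.`i'.dta", clear
    centile PF_norm, centile(95)
    post internal (r(lb_1)) (r(c_1)) (r(ub_1))
}
postclose internal
log close
```

Opening the -postfile- once before the loop replaces the repeated use/post/clear blocks; raise the loop endpoint from 2 to however many sub-datasets exist.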

**References**:

- **[no subject]**, from aschaich@smail.uni-koeln.de
- **st: Re:**, from "Karen Wright" <Karen.Wright@icr.ac.uk>
- **st: AW: Re:**, from "Martin Weiss" <martin.weiss1@gmx.de>
- **st: RE: AW: Re:**, from "Abrams, Judith" <abramsj@karmanos.org>
- **Re: st: RE: AW: Re:**, from aschaich@smail.uni-koeln.de
- **Re: st: RE: AW: Re:**, from aschaich@smail.uni-koeln.de
- **RE: st: RE: AW: Re:**, from "Martin Weiss" <martin.weiss1@gmx.de>
- **RE: st: RE: AW: Re:**, from aschaich@smail.uni-koeln.de
- **AW: st: RE: AW: Re:**, from "Martin Weiss" <martin.weiss1@gmx.de>
- **Re: AW: st: RE: AW: Re:**, from aschaich@smail.uni-koeln.de
- **AW: AW: st: RE: AW: Re:**, from "Martin Weiss" <martin.weiss1@gmx.de>
