Statalist



Re: st: Re: Memory


From   "Victor M. Zammit" <[email protected]>
To   <[email protected]>
Subject   Re: st: Re: Memory
Date   Thu, 13 Nov 2008 18:21:35 +0100

t-table. I understand that you need a number of random tries, much
bigger than forty thousand, to come to convergence. So I was wondering
whether that constraint with memory could be handled.
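A quick sketch of the arithmetic suggests the stored results themselves
are not the problem; with Stata 10's -set memory- the workspace can be
raised well past what even a million t statistics need:

* 1,000,000 stored doubles come to roughly 8 MB
di 1000000 * 8 / 2^20 " MB"
* raise the workspace if needed (Stata 10 and earlier syntax)
set memory 100m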


----- Original Message ----- 
From: "Austin Nichols" <[email protected]>
To: <[email protected]>
Sent: Thursday, November 13, 2008 4:09 PM
Subject: Re: st: Re: Memory


> Victor M. Zammit:
> I also don't understand the point of this exercise, but you should
> read -help simulate- and try, e.g.,
>
> prog makez, rclass
>         syntax [, obs(integer 31)]
>         clear
>         set obs `obs'
>         gen a = invnorm(uniform())
>         ttest a = 0
>         return scalar t = r(t)
> end
> simulate t=r(t), reps(100000) seed(123): makez
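>
> e.g., to check the empirical tail against the exact t(30) value
> afterward (a sketch, assuming the -simulate- call above has left the
> 100,000 values of t in memory; -ttail()- is Stata's upper-tail t
> probability):
>
> count if t <= 1.697
> di "empirical P(t <= 1.697) = " r(N)/_N
> di "exact     P(t <= 1.697) = " 1 - ttail(30, 1.697)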
>
>
> On Thu, Nov 13, 2008 at 9:56 AM, Nick Cox <[email protected]> wrote:
> > I still don't understand what you are trying to do. But I can comment on
> > your code.
> >
> > You are looping round 40,000 times and writing a single result to 40,000
> > data files. Then you are looping round to put all those 40,000 data
> > files in one.
> >
> > I'd do that directly, using just one extra file:
> >
> > clear
> > set obs 31
> > gen a = .
> > tempname out
> > postfile `out' t using myresults.dta, replace
> > qui forval i = 1/40000 {
> >         replace a = invnorm(uniform())   // fresh N(0,1) sample each pass
> >         ttest a = 0
> >         post `out' (r(t))                // keep only the t statistic
> > }
> > postclose `out'
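> >
> > The simulated critical values can then be read straight off the
> > results file, e.g. (a sketch using -_pctile-):
> >
> > use myresults.dta, clear
> > _pctile t, p(90 95 97.5 99 99.5)
> > di r(r1) " " r(r2) " " r(r3) " " r(r4) " " r(r5)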
> >
> > I still doubt 40,000 is anywhere near big enough to get an answer.
> >
> > Nick
> > [email protected]
> >
> > Victor M. Zammit
> >
> > * a) The data that I have come from generating random samples of
> > whatever size, in this case of size 31, from a normally distributed,
> > infinitely large population; i.e.
> >
> > local i = 1
> > while `i' <= 40000 {
> >         drop _all
> >         set obs 31
> >         gen a = invnorm(uniform())
> >         qui ttest a = 0
> >         replace a = r(t) in 1
> >         keep in 1
> >         save a`i', replace
> >         local i = `i' + 1
> > }
> >
> > * I use 40000 due to a memory constraint. Appending the a`i' files
> > together gives me a variable of 40000 observations, i.e.
> >
> > use a1, clear
> > local i = 2
> > while `i' <= 40000 {
> >         append using a`i'.dta
> >         local i = `i' + 1
> > }
> > save ais40000, replace
> >
> > * b) From ais40000.dta I get the density <= 1.31, which should come
> > to 90%, <= 1.697 to 95%, etc., according to the official t-table,
> > i.e.
> >
> > capture program drop density
> > program define density
> >         use ais40000, clear
> >         count if a <= `1'
> >         di " density <= " "`1'" " = " r(N)/40000
> > end
> >
> > density 1.31
> > density 1.697
> > density 2.042
> > density 2.457
> > density 2.75
> >
> > * For smaller degrees of freedom, the discrepancy is much higher. I
> > would like to know if it is at all possible to resolve the memory
> > constraint.
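> >
> > * As a cross-check (a sketch, not in the original post): the official
> > * table entries for 30 degrees of freedom can be reproduced directly
> > * with -invttail()-, so no printed table is needed:
> > foreach p in .10 .05 .025 .01 .005 {
> >         di %6.3f invttail(30, `p')
> > }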
> >

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


