
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.



RE: st: RE: RE: bootstrap reject()


From   Nick Cox <[email protected]>
To   "'[email protected]'" <[email protected]>
Subject   RE: st: RE: RE: bootstrap reject()
Date   Tue, 1 Nov 2011 14:25:18 +0000

As the help tells you, the argument must be an expression, and in particular the expression should evaluate to 1 for rejection and 0 for acceptance. -if- cannot be part of an expression; it always precedes an expression when used correctly. 

Here is a dopey example:

sysuse auto, clear
bootstrap r(mean), reject(r(mean) > 22.5) : summarize mpg

Note that my second thoughts imply that I don't think this is best practice, but as always it's your problem, or in this case your colleague's. 
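
Translated to the question below — assuming the colleague's program (called bootstrap1 in the question) leaves the difference in r(costdiff), and taking the 1000 cutoff from the question as the example restriction — the same idea is:

```stata
* sketch only: bootstrap1 and the 1000 cutoff are taken from the question below
bootstrap costdiff = r(costdiff), reps(1000) reject(r(costdiff) > 1000) ///
    saving(bootrep1, replace) seed(1234) : bootstrap1
```

Rejected replications are recorded as missing, so fewer than reps() usable results may remain.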

Nick 
[email protected] 

Shehzad Ali

Thanks, Nick. This is very helpful. 


I wasn't sure, however, about the syntax of the -reject()- option. It does not seem to accept -if- qualifiers. If we want to 'reject if costdiff > 1000', what would be the best way to do this using the -reject()- option? 

>From: Nick Cox <[email protected]>
>
>However, if this were being talked about in a paper I was reviewing, I would recommend a two-stage approach:
>
>1. Save the bootstrap results to a new dataset. 
>
>2. Analyse the bootstrap results with and without the restriction of interest. 
>
>That way, anyone can see what difference it makes. If you just throw away results you don't want, and throw away the bin too, people can't even rummage in the bin to see what it was that you discarded. 
>
>In work I see, there is overwhelming emphasis on just using bootstrapping to get data-based confidence intervals and too little use made of the scope for bootstrapping to tell you something about the entire sampling distribution. Add all the appropriate caveats you like. 
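
The two steps above can be sketched as follows — the program name bootstrap1 and the variable costdiff come from the question below; the 1000 cutoff is the questioner's example restriction:

```stata
* step 1: save every replicate, with no reject()
bootstrap costdiff = r(costdiff), reps(1000) saving(bootrep1, replace) ///
    seed(1234) : bootstrap1

* step 2: analyse the saved replicates with and without the restriction
use bootrep1, clear
summarize costdiff, detail
summarize costdiff if costdiff <= 1000, detail
```

This way the discarded replicates stay on disk, so anyone can inspect what the restriction removed.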
>
>Nick 
>[email protected] 
>
>
>-----Original Message-----
>From: [email protected] [mailto:[email protected]] On Behalf Of Nick Cox
>Sent: 01 November 2011 13:33
>To: '[email protected]'
>Subject: st: RE: bootstrap reject()
>
>Looking at the help for -bootstrap- I see an option -reject()- which appears to do exactly what you ask. 
>
>I don't think we can comment on correctness. If this makes sense in context, it makes sense in context. What people you report to might say is difficult to second guess, unless this is a matter of following their instructions. 
>
>Nick 
>[email protected] 
>
>Shehzad Ali
>
>A colleague asked this question. Is there a way to specify a range (or rejection region) for the parameter of interest when using the -bootstrap- program in Stata? She is using GLM for her cost data, followed by predicting costs for the treatment and control arms, then taking the difference between mean costs in treatment and control arms. The standard error of the difference in mean costs is then estimated by bootstrapping the process. In order to execute this, she is using the following general approach:
>
>
>
>capture program drop bootstrap1
>
>glm model followed by predicting costs for treatment=1 and treatment=0. Then the difference between mean predicted costs for the two groups is saved as a scalar 'costdiff'.
>
>Subsequently, Stata's -bootstrap- is used to repeat this process 1,000 times.
>
>
>bootstrap costdiff=r(costdiff), reps(1000) saving(bootrep1, replace) seed(1234): bootstrap1
>
>However, some of the predicted cost differences are very large (>5% of the bootstrap samples) and the SE is also huge (several times the SE in the observed sample). Is there a way the predicted cost differences could be restricted to a range (for instance, one close to the observed sample), or would this approach be incorrect?
>
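
For concreteness, here is a minimal sketch of the kind of program described in the question — the variable names (cost, treat, x1, x2) and the gamma/log GLM are assumptions for illustration, not the colleague's actual model:

```stata
* hedged sketch of the program the question describes; model terms,
* variable names, and the gamma/log family are placeholders
capture program drop bootstrap1
program define bootstrap1, rclass
    glm cost treat x1 x2, family(gamma) link(log)
    preserve
    * predicted mean cost with everyone set to treated, then untreated
    replace treat = 1
    predict double mu1, mu
    replace treat = 0
    predict double mu0, mu
    summarize mu1, meanonly
    scalar m1 = r(mean)
    summarize mu0, meanonly
    scalar m0 = r(mean)
    restore
    return scalar costdiff = m1 - m0
end
```

The returned scalar r(costdiff) is what the -bootstrap- prefix above collects on each replication.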

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

