
Re: st: Re: Bootstrapping to get Standard Errors for Regression Discontinuity Estimators


From   Austin Nichols <[email protected]>
To   [email protected]
Subject   Re: st: Re: Bootstrapping to get Standard Errors for Regression Discontinuity Estimators
Date   Thu, 23 Sep 2010 14:20:40 -0400

Jen Zhen:

One drawback of RD is shared with many RCTs: you may simply not have
the sample size to measure an effect precisely.  Remember, in RD you
are estimating the discontinuity in the expectation of y in a narrow
window around a cutoff, so even if your full sample is a million
observations, the effective sample size in that window might be 100.
You can try increasing the bandwidth, which
in the limit becomes a simple linear regression on a treatment dummy,
the assignment variable, and their interaction.  Assessing how the
estimate depends on the bandwidth is a crucial step in any RD
analysis. Ideally, the estimate is not too sensitive to bandwidth,
since there is an inherent bias/variance tradeoff that cannot be
uniquely solved.  See also http://ftp.iza.org/dp3995.pdf for recent
advances in this area.
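
For illustration, here is a minimal sketch of both checks, using your
variable names and assuming the cutoff is at zero (the default for
-rd-); mbw() takes a list of percentages of the default bandwidth:

. rd outcome assignment, mbw(25 50 75 100 125 150 200)

. * the wide-bandwidth limit: a global regression on a treatment
. * dummy, the assignment variable, and their interaction
. * (treat is a hypothetical name for the dummy created here)
. gen byte treat = (assignment >= 0)
. regress outcome i.treat##c.assignment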

On Thu, Sep 23, 2010 at 7:21 AM, nshephard <[email protected]> wrote:
>
> Jen Zhen wrote:
>>
>> Dear listers,
>>
>> When bootstrapping Austin Nichol's rd command:
>>
>> bs, reps(100): rd outcome assignment, mbw(100)
>>
>> I find that often the resulting p-value tells me the estimate is not
>> statistically significant at conventional levels, even when visual
>> inspection and more basic methods, like a simple OLS regression on a
>> treatment dummy, assignment, and assignment squared, suggest huge
>> statistical significance.
>>
>> That makes me wonder whether this bootstrapping method might
>> somehow understate the true statistical significance of the effect in
>> question. Or can and should I fully trust these results and conclude
>> that the estimate is not statistically significant at conventional
>> levels?
>
> What do you mean by "conventional levels [of significance]"?
>
> You should set your threshold for declaring statistical significance in the
> context of your study.  Using p < 0.05 to declare something statistically
> significant is often inappropriate.
>
> Often of greater interest is an estimate of the effect size (and its
> associated CIs); what do these tell you?
>
> See, e.g., Gardner & Altman (1986):
> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1339793/pdf/bmjcred00225-0036.pdf
>
>
> Try more replications for your bootstrapping too: 100 isn't that many,
> really; try at least 1,000 (see the sketch below).
>
> Neil
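
For example, a minimal sketch of such a re-run with more replications,
adding a seed for reproducibility (reps() and seed() are standard
-bootstrap- options; variable names are the poster's):

. bs, reps(1000) seed(12345): rd outcome assignment, mbw(100)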

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

