



Re: st: Re: Bootstrapping to get Standard Errors for Regression Discontinuity Estimators


From   Jen Zhen <[email protected]>
To   [email protected]
Subject   Re: st: Re: Bootstrapping to get Standard Errors for Regression Discontinuity Estimators
Date   Sat, 25 Sep 2010 18:59:07 +0200

Many thanks for your replies!

Comparing the bandwidths I had chosen when running the regression simply as
- reg outcome eligibility_dummy assignment assignment^2 assignment^3
if (assignment > lowerbound & assignment < upperbound) -
to those chosen by default by the -rd- command, I realize that the
latter are much narrower, so the trade-off Austin pointed me to does
indeed seem to explain the different results.
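
In runnable Stata syntax that shorthand corresponds to something along
these lines (the variable names are just placeholders, and the
polynomial terms need to be generated explicitly or written with
factor-variable notation):

    * cubic in the assignment variable, estimated only within the
    * chosen window around the cutoff
    gen assignment2 = assignment^2
    gen assignment3 = assignment^3
    regress outcome eligibility_dummy assignment assignment2 assignment3 ///
        if assignment > lowerbound & assignment < upperbound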

I have to admit that I had not given sufficient thought to that
choice, so the reference to Imbens' paper on a possible method for
choosing the bandwidth should be a helpful one.
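
Concretely, I plan to re-estimate at several bandwidths and with more
bootstrap replications, along these lines (assuming mbw() accepts a
list of bandwidths expressed as percentages of the default, so the
numbers below are only illustrative):

    * estimate at 50%, 100%, and 200% of the default bandwidth and
    * bootstrap the results with more replications than before
    bs, reps(1000): rd outcome assignment, mbw(50 100 200)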

After that, I will of course also need to think about what the
results can tell me beyond statistical significance.

So thanks and all best,
JZ


On Thu, Sep 23, 2010 at 8:20 PM, Austin Nichols <[email protected]> wrote:
> Jen Zhen:
>
> One drawback of RD is the same as a common drawback in RCTs, that you
> simply do not have the sample size to precisely measure an effect.
> Remember, in RD, you are estimating the discontinuity in the
> expectation of y in a narrow window around a cutoff, so even if your
> sample size is a million, the sample size in a narrow window around
> the cutoff might be 100.  You can try increasing the bandwidth, which
> in the limit becomes a simple linear regression on a treatment dummy,
> the assignment variable, and their interaction.  Assessing how the
> estimate depends on the bandwidth is a crucial step in any RD
> analysis. Ideally, the estimate is not too sensitive to bandwidth,
> since there is an inherent bias/variance tradeoff that cannot be
> uniquely solved.  See also http://ftp.iza.org/dp3995.pdf for recent
> advances in this area.
>
> On Thu, Sep 23, 2010 at 7:21 AM, nshephard <[email protected]> wrote:
>>
>> Jen Zhen wrote:
>>>
>>> Dear listers,
>>>
>>> When bootstrapping Austin Nichol's rd command:
>>>
>>> bs, reps(100): rd outcome assignment, mbw(100)
>>>
>>> I find that often the resulting P value tells me the estimate is not
>>> statistically significant at the conventional levels, even when visual
>>> inspection and more basic methods like simple OLS regressions on a
>>> treatment dummy, assignment and assignment squared suggest huge
>>> statistical significance.
>>>
>>> That makes me wonder whether this bootstrapping method might
>>> somehow understate the true statistical significance of the effect in
>>> question. Or can and should I fully trust these results and conclude
>>> that the estimate is not statistically significant at the conventional
>>> levels?
>>
>> What do you mean by "conventional levels [of significance]"?
>>
>> You should set your threshold for declaring statistical significance in the
>> context of your study.  Using p < 0.05 to declare something statistically
>> significant is often inappropriate.
>>
>> Often of greater interest is an estimate of the effect size (and the
>> associated CIs); what do these tell you?
>>
>> see e.g. Gardner & Altman (1986)
>> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1339793/pdf/bmjcred00225-0036.pdf
>>
>>
>> Also try more replications for your bootstrap: 100 isn't that many,
>> really; try at least 1000.
>>
>> Neil
>


