
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.



Re: st: Re: psmatch2 question


From   Austin Nichols <[email protected]>
To   [email protected]
Subject   Re: st: Re: psmatch2 question
Date   Wed, 25 Aug 2010 10:07:22 -0400

Steve, Anna, et al.--
The bootstrap is not a priori a good idea: http://www.nber.org/papers/t0325

But if you use nonparametric propensity scores, or equivalently a
logit with only mutually exclusive, exhaustive dummies as explanatory
variables, and reweight instead of matching 1:1 or the like, you will
be better off in many ways; see e.g.
Hirano, K., G. Imbens, and G. Ridder. (2003). “Efficient Estimation of
Average Treatment Effects Using the Estimated Propensity Score,”
Econometrica, 71(4): 1161–1189.
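To make the reweighting idea concrete, here is a minimal Python sketch
(toy data of my own, not code from this thread): the "nonparametric"
propensity score is just the treated share within each cell of the
exhaustive dummy classification; treated units get weight 1 and controls
get weight p/(1-p), and the ATT is the difference of (weighted) means.

```python
import numpy as np

def ipw_att(y, t, cell):
    """ATT by propensity-score reweighting with cell-based (nonparametric)
    propensity scores: p(x) = treated share within each cell.
    Treated units get weight 1; controls get weight p/(1-p)."""
    y, t, cell = map(np.asarray, (y, t, cell))
    p = {c: t[cell == c].mean() for c in np.unique(cell)}  # per-cell score
    p_i = np.array([p[c] for c in cell])
    w_control = p_i[t == 0] / (1.0 - p_i[t == 0])
    return y[t == 1].mean() - np.average(y[t == 0], weights=w_control)

# Toy data: two cells with different treatment rates and effect sizes.
y = [3, 1, 1, 1, 5, 5, 2]
t = [1, 0, 0, 0, 1, 1, 0]
cell = ['A', 'A', 'A', 'A', 'B', 'B', 'B']
print(ipw_att(y, t, cell))  # 8/3, the true ATT for these toy data
```

Within each cell the weights reproduce the treated covariate distribution
among the controls, which is exactly what matching on the score attempts,
but using all control observations.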

The t-stat produced by -psmatch2- is not particularly reliable,
compared to one produced by a doubly robust regression, say, where you
regress the outcome on treatment and other explanatory variables using
weights based on propensity scores.  But the t-stat on the ATT is
intended to guide you to reject or fail to reject the hypothesis that
the effect of treatment on those who received treatment is zero.  If
you decide to bootstrap, save each estimated ATT and its SE and see
how the matching estimator's SE compares to the observed standard
deviation of estimates; then do the same with the nonparametric
propensity score reweighting estimator and you will probably decide
not to match but to reweight.
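A sketch of the weighted-regression version of this (hypothetical
simulated data, not from the thread): weight controls by p-hat/(1 - p-hat),
then run weighted least squares of the outcome on treatment and the
covariates; the coefficient on treatment estimates the ATT and comes with
a conventional standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.integers(0, 2, n)                    # binary covariate
p = 0.3 + 0.4 * x                            # true propensity depends on x
t = (rng.random(n) < p).astype(float)
y = 1.0 * x + 2.0 * t + rng.normal(size=n)   # constant treatment effect = 2

# Estimated cell propensities and ATT weights (1 for treated, p/(1-p) for controls)
p_hat = np.array([t[x == v].mean() for v in (0, 1)])[x]
w = np.where(t == 1, 1.0, p_hat / (1 - p_hat))

# Weighted least squares of y on [1, t, x]: scale rows by sqrt(w)
X = np.column_stack([np.ones(n), t, x])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print(beta[1])   # coefficient on treatment: close to 2, the ATT
```

Because the regression also controls for x directly, the estimate stays
consistent if either the propensity model or the outcome model is right,
which is the "doubly robust" property.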

A minor point: estimated propensity scores are never "more accurate"
than the unknown true scores, but even if you knew the true propensity
scores, you could get more efficient estimates in many cases by
throwing that information away and using estimated propensity scores
instead.  This is why computing an SE as if the propensity scores are
fixed and known is reasonable.
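This efficiency point is easy to check by simulation (a hypothetical
setup of my own, not from the thread): even when the true propensity is a
known constant, the reweighting estimator that plugs in estimated cell
propensities typically has smaller variance, because the estimated scores
absorb chance imbalance in a covariate that predicts the outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, n, reps = 1.0, 500, 300
est_true, est_hat = [], []
for _ in range(reps):
    x = rng.integers(0, 2, n)
    t = (rng.random(n) < 0.5).astype(float)   # true propensity is known: 0.5
    y = 2.0 * x + tau * t + rng.normal(size=n)
    # Horvitz-Thompson estimate using the known score p = 0.5
    est_true.append(np.mean(t * y / 0.5 - (1 - t) * y / 0.5))
    # Same estimator using estimated cell scores p_hat(x)
    p_hat = np.array([t[x == v].mean() for v in (0, 1)])[x]
    est_hat.append(np.mean(t * y / p_hat - (1 - t) * y / (1 - p_hat)))
print(np.std(est_true), np.std(est_hat))  # estimated-score version is tighter
```

With estimated scores the estimator collapses to a stratified
difference in means, so the variance contributed by x drops out.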

Instead of the presentation, you may want the papers:
http://www.stata-journal.com/article.html?article=st0136
http://www.stata-journal.com/article.html?article=st0136_1
http://www-personal.umich.edu/~nicholsa/ciwod.pdf
http://www-personal.umich.edu/~nicholsa/erratum.pdf

On Wed, Aug 25, 2010 at 5:51 AM, Steve Samuels <[email protected]> wrote:
> --
>
> Anna,
>
> First, I must apologize. I showed a method for estimating the ATT
> (weighting by propensity scores) which is different from the one that
> -psmatch2- uses (matching on propensity scores). So my program does
> not apply to your analysis. The -help- for -psmatch2- illustrates a
> bootstrap approach to estimating the standard error, although it
> states that it is "unclear whether the bootstrap is valid in this
> context."
>
> Also, there is literature (see page 26 of Austin Nichols's presentation
> http://repec.org/dcon09/dc09_nichols.pdf) which suggests that
> estimated propensity scores might be more accurate than the unknown
> true scores; if so, then standard errors which consider the estimated
> scores as known might be better than the bootstrapped standard errors!
> So there is apparently no right answer. I think that a conservative
> approach would be to use the -bootstrap- technique shown in the
> -psmatch2- help, followed by "estat bootstrap, all" to get confidence
> intervals for ATT.
>
> ATT is one of several methods for describing the causal effect of a
> treatment: To quote Austin's presentation (p 15): "For evaluating the
> effect of a treatment/intervention/program, we may want to estimate the
> ATE for participants (the average treatment effect on the treated, or
> ATT) or for potential participants who are currently not treated (the
> average treatment effect on controls, or ATC), or the ATE across the
> whole population (or even for just the sample under study)."
>
> Best wishes
>
> Steve
>

> On Tue, Aug 24, 2010 at 7:23 PM, anna bargagliotti <[email protected]> wrote:
>> Thank you for your insights about bootstrapping.  I will try adjusting your
>> code to my situation to reproduce the T-stat and compute the p-value.
>>
>> I am, however, still confused about two very simple things:
>> 1.  What is the T-stat for the ATT actually telling us?  Is this the T-stat for
>> the comparison of treatment vs control matched groups?
>> 2.  How do we determine if there is a treatment effect?

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

