Re: st: Re: psmatch2 question


From:    Austin Nichols <[email protected]>
To:      [email protected]
Subject: Re: st: Re: psmatch2 question
Date:    Wed, 25 Aug 2010 17:01:10 -0400

anna bargagliotti <[email protected]>:
No, you do not use the pscore as a pweight; read the article I linked to:
"The pair of commands generating weights [for the ATT estimate] can be
replaced by the single command
g w=cond(_tr,1,_ps/(1-_ps))
[where _tr is a treatment dummy and _ps the propensity score]..."
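
In context, a minimal sketch of that reweighting approach (gpa, treat, x1,
and x2 are placeholders for your outcome, treatment dummy, and covariates):

  probit treat x1 x2                          // model of treatment assignment
  predict double ps, pr                       // estimated propensity score
  gen double w = cond(treat, 1, ps/(1-ps))    // ATT weights: 1 for treated, odds for controls
  regress gpa treat x1 x2 [pw=w]              // weighted regression; t-stat on treat tests the ATT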

On Wed, Aug 25, 2010 at 3:53 PM, anna bargagliotti <[email protected]> wrote:
> Thanks Austin and Steve!  Let me restate in my own words to see if I
> understand...
>
> My goal is to compare the gpas of the treated students vs. a matched control
> group. The psmatch2 command computed a propensity score for each student.  In
> order to produce a more accurate t-stat than the ATT t-stat given by the
> psmatch2 command, I can regress gpa on the treatment dummy and a set of X
> variables using the pscore as a pweight.  I do this for the whole sample
> (i.e., those students that were treated, those that were matched, and those
> that were unmatched).
> This will produce a t-stat on the treatment dummy which will in turn give me the
> wanted comparison.
>
> Thank you very much for your help!  This has been great!
>
>
>
> ----- Original Message ----
> From: Austin Nichols <[email protected]>
> To: [email protected]
> Sent: Wed, August 25, 2010 9:07:22 AM
> Subject: Re: st: Re: psmatch2 question
>
> Steve, Anna, et al.--
> The bootstrap is not a priori a good idea: http://www.nber.org/papers/t0325
>
> But if you use nonparametric propensity scores, or equivalently a
> logit with only mutually exclusive exhaustive dummies as explanatory
> variables, and reweight instead of matching 1:1 or somesuch, you will
> be better off in many ways; see e.g.
> Hirano, K., G. Imbens, and G. Ridder (2003). “Efficient Estimation of
> Average Treatment Effects Using the Estimated Propensity Score,”
> Econometrica, 71(4): 1161–1189.
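>
> As a rough sketch of what that saturated, reweighted approach might look
> like (sex and urban stand in for your categorical covariates; all names
> are placeholders):
>
>   egen cell = group(sex urban)                       // mutually exclusive, exhaustive cells
>   logit treat i.cell                                 // saturated logit: one dummy per cell
>   predict double ps_np, pr                           // "nonparametric" propensity score
>   gen double w_np = cond(treat, 1, ps_np/(1-ps_np))  // reweight controls rather than match 1:1
>   regress gpa treat i.cell [pw=w_np]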
>
> The t-stat produced by -psmatch2- is not particularly reliable,
> compared to one produced by a double-robust regression, say, where you
> regress the outcome on treatment and other explanatory variables using
> weights based on propensity scores.  But the t-stat on the ATT is
> intended to guide you to reject or fail to reject the hypothesis that
> the effect of treatment on those who received treatment is zero.  If
> you decide to bootstrap, save each estimated ATT and its SE and see
> how the matching estimator's SE compares to the observed standard
> deviation of estimates; then do the same with the nonparametric
> propensity score reweighting estimator and you will probably decide
> not to match but to reweight.
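>
> A rough sketch of that comparison for the reweighting estimator, using a
> small r-class wrapper (all names are placeholders; a matching version
> would call -psmatch2- inside the program instead):
>
>   program define att_rw, rclass
>       tempvar ps w
>       probit treat x1 x2
>       predict double `ps', pr
>       gen double `w' = cond(treat, 1, `ps'/(1-`ps'))
>       regress gpa treat x1 x2 [pw=`w']
>       return scalar att = _b[treat]
>       return scalar se  = _se[treat]
>   end
>   bootstrap att=r(att) se=r(se), reps(500): att_rw
>   // the bootstrap std. err. of att is the SD of the replicated estimates;
>   // compare it with the model-based SE reported as the observed value of se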
>
> A minor point: estimated propensity scores are never "more accurate"
> than the unknown true scores, but even if you knew the true propensity
> scores, you could get more efficient estimates in many cases by
> throwing that information away and estimating propensity scores.  This
> is why computing an SE as if the propensity scores are fixed and known
> is reasonable.
>
> Instead of the presentation, you may want the papers:
> http://www.stata-journal.com/article.html?article=st0136
> http://www.stata-journal.com/article.html?article=st0136_1
> http://www-personal.umich.edu/~nicholsa/ciwod.pdf
> http://www-personal.umich.edu/~nicholsa/erratum.pdf
>
> On Wed, Aug 25, 2010 at 5:51 AM, Steve Samuels <[email protected]> wrote:
>> --
>>
>> Anna,
>>
>> First, I must apologize. I showed a method for estimating the ATT
>> (weighting by propensity scores) which is different from the one that
>> -psmatch2- uses (matching on propensity scores). So my program does
>> not apply to your analysis. The -help- for -psmatch2- illustrates a
>> bootstrap approach to estimating the standard error, although it
>> states that it is "unclear whether the bootstrap is valid in this
>> context."
>>
>> Also, there is literature (see page 26 of Austin Nichols's presentation
>> http://repec.org/dcon09/dc09_nichols.pdf) which suggests that
>> estimated propensity scores might be more accurate than the unknown
>> true scores; if so, then standard errors which consider the estimated
>> scores as known might be better than the bootstrapped standard errors!
>> So there is apparently no right answer. I think that a conservative
>> approach would be to use the -bootstrap- technique shown in the
>> -psmatch2- help, followed by "estat bootstrap, all" to get confidence
>> intervals for ATT.
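>>
>> Roughly along the lines of the help-file example (variable names are
>> placeholders, and I believe -psmatch2- leaves the ATT behind in r(att)):
>>
>>   bootstrap r(att), reps(500): psmatch2 treat x1 x2, outcome(gpa)
>>   estat bootstrap, all     // normal, percentile, and bias-corrected CIs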
>>
>> ATT is one of several methods for describing the causal effect of a
>> treatment: To quote Austin's presentation (p 15): "For evaluating the
>> effect of a treatment/intervention/program, we may want to estimate the
>> ATE for participants (the average treatment effect on the treated, or
>> ATT) or for potential participants who are currently not treated (the
>> average treatment effect on controls, or ATC), or the ATE across the
>> whole population (or even for just the sample under study)."
>>
>> Best wishes
>>
>> Steve
>>
>
>> On Tue, Aug 24, 2010 at 7:23 PM, anna bargagliotti <[email protected]> wrote:
>>> Thank you for your insights about bootstrapping.  I will try adjusting your
>>> code to my situation to reproduce the T-stat and compute the p-value.
>>>
>>> I am, however, still confused about two very simple things:
>>> 1.  What is the T-stat for the ATT actually telling us?  Is this the T-stat
>>> for the comparison of treatment vs control matched groups?
>>> 2.  How do we determine if there is a treatment effect?
>


