

# Re: st: deriving a bootstrap estimate of a difference between two weighted regressions

From: "Ariel Linden, DrPH"
Subject: Re: st: deriving a bootstrap estimate of a difference between two weighted regressions
Date: Tue, 3 Aug 2010 11:20:04 -0700

Thank you Stas and Steve for your comments!

When I stated that the first model's weight would be ATT and the second ATC,
the propensity score model had already been run and the weights established
for each subject:
ATT = cond(treatvar, 1, propvar/(1- propvar)), and
ATC = cond(treatvar, (1-propvar)/propvar, 1)

Under these conditions, there should be no negative weights, so that is not
a concern.
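In Stata, that weight construction might look like the following sketch (the
variable names att and atc are placeholders; treatvar and propvar are as in
the formulas above):

```stata
* propensity-score weights; treatvar is the 0/1 treatment indicator
* and propvar the fitted propensity score (names as in the formulas above)
gen double att = cond(treatvar, 1, propvar/(1 - propvar))
gen double atc = cond(treatvar, (1 - propvar)/propvar, 1)
```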

I am thinking that the code would look something like this, but I would
appreciate your input:

1. bootstrap _b[treatvar] from the first regression with [pw=ATT]
2. save the 10,000 replicates to a file (or tempfile)
3. bootstrap _b[treatvar] from the second regression with [pw=ATC]
4. save the 10,000 replicates to a file (or tempfile)
5. gen difference = treatvar1 - treatvar2
6. bootstrap r(mean): summarize difference, to get bootstrapped CIs

Does this make sense?

Can you provide some simple code to output the bootstrapped samples from
model 1 and 2 into a file (either stored or virtual) where I can then
generate the difference variable?
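For the record, one way the numbered steps might be coded uses -bootstrap-'s
saving() option to store the replicates. This is only a sketch under stated
assumptions: the names outcome, x1, att, and atc are placeholders, and the
same seed() makes both commands draw identical resamples, so the replicates
pair up row by row:

```stata
* steps 1-2: replicates of _b[treatvar] under the ATT weight
bootstrap b_att=_b[treatvar], reps(10000) seed(12345) ///
    saving(reps_att, replace): regress outcome treatvar x1 [pw=att]

* steps 3-4: same resamples (same seed) under the ATC weight
bootstrap b_atc=_b[treatvar], reps(10000) seed(12345) ///
    saving(reps_atc, replace): regress outcome treatvar x1 [pw=atc]

* step 5: pair the replicates and take the difference
use reps_att, clear
merge 1:1 _n using reps_atc
gen double difference = b_att - b_atc

* step 6: percentile CI from the replicate differences
_pctile difference, p(2.5 97.5)
display "95% percentile CI: " r(r1) ", " r(r2)
```

Note, though, that this takes the weights as fixed; as Stas points out below,
re-estimating the propensity model within each bootstrap sample is the more
defensible procedure.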

Thanks

Ariel

Date: Mon, 2 Aug 2010 09:11:50 -0500
From: Stas Kolenikov <skolenik@gmail.com>
Subject: Re: st: deriving a bootstrap estimate of a difference between two weighted regressions

In what you describe below, the weights are not part of your data, but
rather are derived variables used as means to get the estimates (see
Steve's comments: aweights is not the right Stata concept to use here;
I completely agree with him). Hence, if you insist on the bootstrap,
an appropriate procedure that would replicate the analysis process on
the original sample would be:

1. take the bootstrap sample
2. run your propensity/matching/covariate adjustment model
3. compute the weights
4. compute the treatment effect estimate(s) using these weights
5. run 1-4 a large number of times.
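As a rough sketch of steps 2-4 wrapped in a program (the logit specification
and the variable names outcome, x1, and x2 are placeholders; -bootstrap-
itself handles steps 1 and 5 and restores the data between replicates):

```stata
capture program drop psboot
program define psboot, rclass
    * step 2: re-fit the propensity model on the current (re)sample
    logit treatvar x1 x2
    tempvar ps att
    predict double `ps', pr
    * step 3: compute the ATT weight from the fresh propensity score
    gen double `att' = cond(treatvar, 1, `ps'/(1 - `ps'))
    * step 4: treatment effect estimate using that weight
    regress outcome treatvar x1 x2 [pw=`att']
    return scalar b_att = _b[treatvar]
end

* steps 1 and 5: resample and repeat
bootstrap r(b_att), reps(1000) seed(12345): psboot
```

The same program could return the ATT estimate, the ATC estimate, and their
difference as three scalars, which would sidestep the file-merging step
entirely.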

As always with the bootstrap, I won't buy this procedure until I see
the proof of consistency published in Biometrika or J of Econometrics.
If you are just manipulating the means and other moments of the data
in the re-weighting procedure, you are probably OK; if you are doing
matching, you are certainly not OK, as matching is not a smooth
operation. If you have a complex sampling procedure, you can probably
just forget about getting the standard errors right as even the first
step, getting a bootstrap sample that would resemble the complex
sample at hand, is far from trivial. (In sum: the bootstrap is a great
method when you are conducting inference for the mean; everything else
is complicated.)

I would say that using the difference in weights that Steve suggested
is certainly an easier thing to do, although who knows how each
particular command will interpret the negative weights. It might also
be possible to get a non-positive-definite covariance matrix of the
coefficient estimates if the weights are not all positive.

Also, the more sensitivity analyses you run, the further off your overall
type I error rate is going to be.

On Sun, Aug 1, 2010 at 12:39 PM, Ariel Linden, DrPH
<ariel.linden@gmail.com> wrote:
> There are at least two conceptual reasons why this process makes sense.
>
> First, assume a causal inference model which uses a weight (let's say an
> "average treatment on the treated" (ATT) weight) to create balance on
> observed pre-intervention covariates (by setting the covariates to equal
> those of the treated group). Let's say the second model is identical but
> uses an "average treatment on controls" (ATC) weight. Assuming no
> unmeasured confounding, the treatment variable(s) from both models will
> provide the treatment effect estimate given the respective weighting
> purposes (holding covariates to represent treatment or control group
> characteristics). Thus, measuring the difference between the treatment
> effects in both models (which will need either a bootstrapped or other
> adjustment to the SEs) can serve as a sensitivity analysis (one of many
> approaches).
>
> Second, and in a similar manner, one can test the effect of a mediator
> using a weighting method for the original X-Y model, and a second weight
> for the X-M-Y model. In both cases, different weights must be applied to
> two different regression models, and in both cases, the SEs will need to
> be adjusted. Weights are used in these models in a similar context to
> those in the first example: to control for confounding.
>
> By the way, a user-written program called -sgmediation- (search
> sgmediation) does something similar to this but without the weights, so it
> may be possible to replicate many of the steps (or add weights?).

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
