
Re: Re: st: Bootstrap error message

From   Alistair Windsor <>
Subject   Re: Re: st: Bootstrap error message
Date   Wed, 11 May 2011 11:34:01 -0500

The capture block should not be active. I rewrote the code to avoid the case where the capture was necessary. I removed my comments to keep the code small. I will try removing the capture block to see if it makes a difference. Probably poor coding the first time round.

/* capture is necessary since some projects have no students at
      some semester standings, and this causes a crash unless capture
      is present */

The e(b) message is generated somewhere within the bootstrap procedure (more precisely, in bootstrap.BootStrap). The only reference to e(b) that I can find is

		- capture confirm matrix e(b) e(V)
		- if !_rc {
		-         tempname fullmat
		-         _check_omit `fullmat', get
		=         _check_omit __000001, get
		-         local checkmat "checkmat(`fullmat')"
		=         local checkmat "checkmat(__000001)"
		- }

which looks to me like it should do nothing if e(b) is not present. Increasing the tracedepth, it seems the error is generated in

		 = _check_omit __000001, check result(omit)

though I have no idea what this is doing. Thanks for pointing me at tracedepth. I had tried trace-ing the error but indeed was drowning in the bootstrap code.

Given that this message is obscure, is there some way of restructuring my code to avoid it?

I share your reservations about bootstrapping matching results. The Imbens paper I alluded to gives no justification for why you should, but rather shows why you shouldn't.

Authors: Abadie, A. and Imbens, G. W. (2006).

I am also going to try bootstrapping a reweighting scheme. The Abadie and Imbens paper addresses matching on covariates, which is slightly different from matching on estimated propensity scores. Nonetheless, the current literature on propensity score matching either uses bootstrapping to compute standard errors or just reports two-sample t-test results. I am doing both, with caveats on both. I am not aware of any proven, statistically valid method for computing standard errors for propensity score matching estimators. The situation with reweighting is slightly better with regard to the validity of the bootstrap, but then the estimated propensity score enters as a nuisance parameter, and in my case the result seems to be very sensitive to overlap and trimming.
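For concreteness, the reweighting estimator I have in mind is the standard inverse-probability-weighting form (textbook notation, not taken from my code), with the estimated propensity score entering as a nuisance parameter:

```latex
\hat{\tau}_{\mathrm{IPW}}
  = \frac{1}{N}\sum_{i=1}^{N} \frac{T_i Y_i}{\hat{e}(X_i)}
  - \frac{1}{N}\sum_{i=1}^{N} \frac{(1-T_i)\,Y_i}{1-\hat{e}(X_i)},
\qquad
\hat{e}(X_i) = \widehat{\Pr}(T_i = 1 \mid X_i).
```

Observations with estimated propensity near 0 or 1 receive enormous weights, which is exactly why the result is so sensitive to overlap and trimming.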

Elizabeth Stuart wrote a paper, "Matching Methods for Causal Inference: A Review and a Look Forward" (Statistical Science, 2010, Vol. 25, No. 1, 1-21), in which she said:

When researchers want to account for the uncertainty in the matching, a bootstrap procedure has been found to outperform other methods (Lechner, 2002; Hill and Reiter, 2006). There are also some empirical formulas for variance estimation for particular matching scenarios (e.g., Abadie and Imbens, 2006, 2009b; Schafer and Kang, 2008), but this is an area for future research.



From: Stas Kolenikov <>
Subject: Re: st: Bootstrap error message
Date: Wed, 11 May 2011 10:21:25 -0500

Alistair Windsor is bootstrapping some complicated results of a
propensity score matching procedure, and reports an obscure "e(b) not
found" message.

My reactions:

If you have a -capture-, you should have an -if _rc { }- following it that treats the exception. Otherwise you just sweep the errors (which you obviously are expecting to occur) under the carpet. (In my own code, I might leave an empty -else- and explain in a comment that no treatment is needed, and why.)
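A minimal sketch of that pattern in Stata; the command, variable names, and the handling inside the branches are placeholders, not Alistair's actual code:

```stata
capture regress y x if standing == `s'
local rc = _rc                    // save the return code before the next command resets it
if `rc' {
    * expected failure (no students at this standing): skip it on purpose
    display as txt "standing `s' skipped, rc = `rc'"
}
else {
    * estimation succeeded: e(b) and e(V) are defined here
    matrix b_`s' = e(b)
}
```

The point is that the failure case is named and handled explicitly, so an unexpected error cannot hide behind the -capture-.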

Which part of the code produces the message about the missing e(b)? Apparently you don't refer to it directly, but some of the code (in the -bootstrap-? in -psmatch2-?) wants to get it. You would want to -set trace on- and maybe -set tracedepth 3- or so to see who produces the message (increasing the tracedepth as needed if you can't find the culprit; without tracedepth, you can get quite deep into, say, the bootstrap code, with several nested levels of subroutines, and it will be difficult to tell where the problem really is).
I have reservations about the applicability of bootstrap procedures to matching. I can buy a bootstrap procedure when the statistic is smooth. Matching, on the other hand, is intrinsically non-differentiable: when you take a new subsample, you will probably jump to a different neighbor. And jumps are bad. If somebody has a reference to a paper by, say, Imbens in, say, Journal of Econometrics where consistency of the bootstrap is established, I would be so-o-o grateful.
