st: confidence intervals for ratio of predictions -- bootstrap vs. parametric methods?


From: "Daniel Waxman" <[email protected]>
To: <[email protected]>
Subject: st: confidence intervals for ratio of predictions -- bootstrap vs. parametric methods?
Date: Wed, 10 Oct 2007 20:05:03 -0400

Statalist,

A couple of months ago, Maarten Buis was kind enough to answer my question about how to correctly use -predictnl- to calculate confidence intervals for a ratio of two adjusted predictions (i.e., a relative risk) after logistic regression.  To paraphrase his solution in my situation:

. global mbpos_predictors _b[_cons] + _b[int_zlog_pos]*zlog + /*
        */ _b[int_zero_pos]*zero + /*
        */ _b[zlog]*zlog + _b[zero]*zero + _b[mbpos]

. global mbneg_predictors _b[_cons] + _b[zlog]*zlog + _b[zero]*zero

. predictnl rr = invlogit($mbpos_predictors)/invlogit($mbneg_predictors), se(se_rr)

. gen ub = rr + 1.96*se_rr
. gen lb = rr - 1.96*se_rr


The problem is that these confidence intervals appear unreasonably wide, and the lower bound can be negative, which is nonsensical for a relative risk.
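
Presumably this is because the delta-method interval from -predictnl- is symmetric on the risk-ratio scale.  One alternative would be to compute the standard error on the log scale and exponentiate, which at least keeps the bounds positive; here is a minimal sketch under the same model and macros (the names lnrr, se_lnrr, lb_log, and ub_log are just illustrative):

. predictnl lnrr = ln(invlogit($mbpos_predictors)/invlogit($mbneg_predictors)), /*
        */ se(se_lnrr)
. gen lb_log = exp(lnrr - 1.96*se_lnrr)
. gen ub_log = exp(lnrr + 1.96*se_lnrr)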

So I bootstrapped the following program, and the bias-corrected bootstrap gives much happier results.
(Note that I am bootstrapping the relative risk at a specific value of the covariates: zero=1 and zlog=`1'.)

. program newboot, rclass
        local median_zlog = log10(`1')
        global mbpos_predictors _b[_cons] + _b[int_zlog_pos]*`median_zlog' + /*
                */ _b[int_zero_pos] + _b[zlog]*`median_zlog' + _b[zero] + _b[mbpos]
        global mbneg_predictors _b[_cons] + _b[zlog]*`median_zlog' + _b[zero]
        logistic outcome zlog zero mbpos int_zlog_pos int_zero_pos
        assert e(sample)==1
        return scalar rr_posneg = invlogit($mbpos_predictors)/invlogit($mbneg_predictors)
  end
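
For concreteness, the bootstrap call looks something like this (the argument 2.5, the rep count, and the seed are placeholders rather than my actual values):

. bootstrap rr=r(rr_posneg), reps(1000) seed(123): newboot 2.5
. estat bootstrap, bc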

The sample sizes range from 1,500 to 6,000, with an event rate of roughly 1.5% to 3.5% depending on the population.  Most subjects are at low risk (i.e., the distribution of the predictors is highly skewed).
An example of the differing results: parametric 3.1 (0.7, 5.4); bc-bootstrap 3.7 (2.2, 7.4).

My questions:
1.  Can anybody explain why the results are so different, and whether the bias-corrected bootstrap can reasonably be thought to be much closer to the truth?
2.  If reporting parametric CIs, what should I do when the lower bound goes negative?
3.  In one of my subpopulations, the bootstrap returned a few red 'x's instead of dots, meaning, I think, that in some samples one of the covariate patterns didn't exist or the regression couldn't be performed; no bc-CI was calculated.  Any thoughts on the real meaning of this?  (Are the CIs truly infinite?)
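
If it helps with question 3, my understanding is that rerunning a few replications with bootstrap's -noisily- option shows the actual error from the failing samples:

. bootstrap rr=r(rr_posneg), reps(10) noisily: newboot 2.5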

Thanks.

Daniel Waxman
 
