The Stata listserver

st: [sampling in Cox model]

From   "FEIVESON, ALAN H. (AL) (JSC-SK) (NASA)" <>
To   "''" <>
Subject   st: [sampling in Cox model]
Date   Thu, 19 Jun 2003 08:35:04 -0500

This raises an interesting question. Clearly, one could take an "upstream"
sample, that is, a purely blinded (random) sample, and run it. But is there a
more efficient way? For example, if you used just the failures, you would bias
your estimates of the coefficients, but in some sense you would gain precision.
So I'm wondering if there is a way of sampling informatively (that is,
purposely choosing a preponderance of failures) and somehow correcting for the
bias. If you did this, would the estimates be any more accurate than if you
had just taken a noninformative sample to begin with?

Al Feiveson
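
[The informative sampling described above is close to a nested case-control
design, which Stata supports: -sttocc- keeps every failure and draws controls
from each case's risk set, and conditional logistic regression on the matched
sets then estimates the same log hazard ratios as a Cox model on the full
cohort. A minimal sketch, with the dataset and variable names made up for
illustration:

* Hypothetical example: nested case-control sampling from a stset cohort.
* sttocc keeps every failure and samples 5 controls per case from the
* matching risk set; clogit on the matched sets (_set) estimates the
* same log hazard ratios as stcox would on the full cohort.
use cohort, clear
stset exittime, failure(died) id(subjid)
set seed 20030619
sttocc, number(5)
clogit _case age smoker, group(_set) or

The loss of precision relative to the full cohort is usually modest once
several controls per case are sampled, which is the sense in which
oversampling failures "gains precision" per observation used.]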

-----Original Message-----
From: Nick Cox []
Sent: Wednesday, June 18, 2003 6:00 PM
Subject: st: RE: [Cox model] 

roger webb
> I need to run a Cox model on a very large cohort (of approximately
> 1.5 million subjects). Has anyone implemented a memory efficient
> routine that uses a sample from (as opposed to all) the individuals
> at risk?

Nothing to do with me, but I doubt that there 
is any special procedure needed here. That is, 
I guess you should just take a sample upstream 
and then fit a Cox model on the sample data. 
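
[In Stata terms, the upstream-sample approach might look like this; a sketch
only, with the dataset and variable names made up for illustration:

* Hypothetical example: draw a 10% random sample of subjects before
* stsetting, then fit the Cox model on the reduced data (one record
* per subject assumed, so sampling observations samples subjects).
use cohort, clear
set seed 12345
sample 10
stset exittime, failure(died)
stcox age smoker

With roughly 1.5 million subjects, a 10% sample still leaves 150,000
observations, so the coefficient estimates should remain quite precise
provided failures are not too rare.]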


*   For searches and help try:

© Copyright 1996–2019 StataCorp LLC