
From: "Moran, John (NWAHS)" <[email protected]>
To: "'[email protected]'" <[email protected]>
Subject: RE: st: comparing survival models: Cox vs AFT
Date: Fri, 23 Aug 2002 17:26:48 +0930

```
Thanks to Ronan Conroy for his extended reply.
Although over a "short" period logistic and Cox regressions perform
comparably (a number of studies have shown this), there is an advantage in
using survival analysis: logistic regression loses the time-to-event
information.
Depending upon the data set, the ability of, say, the Cox model with TVC to
track covariate effects over time seems worthwhile (although with large data
sets with repeated observations per patient, set-up can be a challenge, to
say the least). With the data set to which I referred, it was also useful to
be able to compute the peak hazard: the patients were those with Adult
Respiratory Distress Syndrome (ARDS), and the peak hazard appears to occur
at about Day 8 post diagnosis, a novel observation.
Common advice has it that if the Cox model shows non-proportionality, one
should use stratified or TVC Cox but, as the latter are too complex for a
clinical audience, go straight to parametric models. However, as I
originally noted, there is something of a bias FOR the Cox model in the
medical literature.
Still, there remains the (or rather, my) original problem: having set up
two quite reasonable models, a Cox with TVC and a log-normal AFT (also with
TVC), the question posed to me (by a referee) was: which is the better
model?
Hence my query about the mechanics of doing just this.
john moran
-----Original Message-----
From: Ronan Conroy [mailto:[email protected]]
Sent: Thursday, August 22, 2002 7:22 PM
To: [email protected]
Subject: Re: st: comparing survival models: Cox vs AFT
on 22/8/02 7:48 AM, Moran, John (NWAHS) at [email protected] wrote:
> I am not quite sure as to direction here; any advice would be most
> welcome.
>
> I have a multi-record per patient survival data set with 28 day (from
> acute diagnosis) mortality as the outcome. A Cox model with (significant)
> time-varying covariates gives a "good" fit, by conventional means
> (residual analysis etc.).
>
> A log normal AFT model (parameterized in the time-ratio sense) seems to
> do a "good" job as well (again, by conventional diagnostics). The shape
> of the baseline Cox model hazard (using stkerhaz, recently posted)
> certainly has a log-normal profile.
With 28-day survival, you will have complete data (or you will have in less
than a month...). For this reason, you might consider logistic regression
or, indeed, -binreg- as the first options. With 28-day survival, the shape
of the survival distribution is generally of little interest (I am guessing
that this is something like acute coronary syndrome, where there is a
significant hazard in the first 28 days). Logistic regression allows you to
estimate the effects of risk factors as odds ratios. -binreg-, on the other
hand, will try to estimate risk ratios, which are easier to interpret, since
a risk ratio is simply the ratio of two probabilities, but you aren't
guaranteed that any model will converge.
Both Cox regression and AFT models can give you hazard ratios, which are
also useful measures of the effect of risk factors, though harder to explain
properly than risk ratios.
The advantage of AFT models, and other parametric approaches such as
fractional polynomials, is that you can characterise the shape of the hazard
function. Cox regression, on the other hand, treats the shape as a high
dimensional nuisance parameter - something that just has to be got out of
the way before we do the interesting work parametrising the risk factors.
In general, if you are interested in factors which predict outcome, I would
go for simple binary models using -logistic- or -binreg- and the hell with
the shape of the survival function.
If the survival function's shape is actually interesting, then parametric
approaches allow you to characterise it, while Cox regression simply takes
it as a given, so I would opt first for simple parametric methods, and then
investigate the gain from using something like fractional polynomials. I
would beware of making a model that is more complex than the underlying
theory!
Ronan M Conroy ([email protected])
Lecturer in Biostatistics
Royal College of Surgeons
Dublin 2, Ireland
+353 1 402 2431 (fax 2329)
--------------------
Too busy fighting terror to worry about the planet? Gosh! It's hard being
President...
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*
```
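The mechanics John asks about are left open in the thread. For non-nested *parametric* survival models (say, log-normal versus Weibull AFT), one common approach is Akaike's information criterion, AIC = 2k - 2 ln L, with the smaller value preferred; note that the Cox model maximises a partial rather than a full likelihood, so its "AIC" is not directly comparable with a parametric model's. A minimal sketch in Python, with made-up log-likelihood values for illustration only (not from the ARDS data):

```python
def aic(log_likelihood: float, n_params: float) -> float:
    """Akaike information criterion: 2k - 2*lnL; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted values, for illustration only (not from the ARDS data).
ll_lognormal_aft, k_lognormal_aft = -512.4, 7   # log-normal AFT with TVC
ll_weibull_aft, k_weibull_aft = -519.8, 7       # a rival parametric model

aic_ln = aic(ll_lognormal_aft, k_lognormal_aft)
aic_wb = aic(ll_weibull_aft, k_weibull_aft)
print(f"log-normal AIC = {aic_ln:.1f}, Weibull AIC = {aic_wb:.1f}")
# The model with the smaller AIC is preferred on this criterion.
```

In Stata, -streg- reports the maximised log likelihood directly, from which AIC can be computed by hand; a likelihood-ratio test is an alternative, but only for nested models.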

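Ronan's point that parametric models let you characterise the hazard shape is concrete for the log-normal: its hazard rises to a single peak and then declines, which is exactly the kind of shape needed to capture a peak hazard partway through follow-up, as in the Day-8 ARDS observation. A sketch with illustrative parameters (chosen for the example, not estimated from any data):

```python
import math

def lognormal_hazard(t: float, mu: float, sigma: float) -> float:
    """Hazard h(t) = f(t) / S(t) of a log-normal survival time."""
    z = (math.log(t) - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2 * math.pi))
    surv = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))  # S(t) = 1 - Phi(z)
    return pdf / surv

# Evaluate the hazard over a 28-day follow-up window (illustrative mu, sigma).
hazards = [lognormal_hazard(t, mu=2.5, sigma=0.8) for t in range(1, 29)]
peak_day = max(range(1, 29), key=lambda d: hazards[d - 1])
# The hazard rises from Day 1, peaks in mid follow-up, then declines.
```

By contrast, a Weibull hazard is monotone, so it could never place a peak inside the follow-up window; the choice of parametric family is therefore substantive, not cosmetic.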