Statalist The Stata Listserver



st: RE: Validity of hazard predictions in frailty models


From   "Maarten Buis" <M.Buis@fsw.vu.nl>
To   <statalist@hsphsun2.harvard.edu>
Subject   st: RE: Validity of hazard predictions in frailty models
Date   Tue, 29 May 2007 15:14:52 +0200

---- Steinar Fossedal wrote:
> I have fitted an exponential survival model with gamma frailty, and now
> find myself in a pickle trying to interpret and apply the predictive
> results. Specifically I'm getting predictions of individual hazard that
> exceeds one. Such estimates are, of course, problematic to use in the
> next step of my analysis.

Actually you have no problem. Hazards are not the same as probabilities: 
they can range from 0 to +infinity. A hazard is interpreted as the expected 
number of times per unit of time that you experience the event. Say the 
event is catching a cold. My hazard of catching a cold is probably less 
than 1 when time is measured in months, but it is clearly larger than 1 
when time is measured in centuries (until someone invents a cure for the 
common cold).
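The arithmetic behind this can be sketched as follows (the numbers are made up purely to illustrate the change of time unit; this is not output from any Stata model):

```python
import math

# Exponential model: a constant hazard h is a *rate*, so its numeric
# value depends on the time unit chosen.
h_per_month = 0.08  # hypothetical hazard of catching a cold, per month

# Rescaling the time unit rescales the hazard proportionally:
# 12 months * 100 years = 1200 months per century.
h_per_century = h_per_month * 12 * 100  # = 96.0, far above 1

# A probability derived from the hazard is always in [0, 1]:
# for a constant hazard, P(event within time t) = 1 - exp(-h * t).
p_within_one_month = 1 - math.exp(-h_per_month * 1)
p_within_one_century = 1 - math.exp(-h_per_century * 1)  # close to 1, never above

print(h_per_century)       # a hazard well above 1 is perfectly valid
print(p_within_one_month)
print(p_within_one_century)
```

The same event process gives a hazard of 0.08 or 96 depending only on the unit of time, while any probability computed from it stays bounded by 1.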

> I reason that this is caused by the multiplicative effect of the frailty
> parameter on the hazard, and that the model only ensures the validity of
> the population hazard values - not the unobserved individual's. With
> validity I mean restrictions that ensure the hazard stays between zero
> and one. To me, this seems like an inherent weakness in frailty models.
> This may not matter much when investigating hazard ratios and
> differences between populations, but it does pose a problem when making
> individual predictions.

Actually it is the other way around: the model without a frailty component 
(possibly with robust standard errors) is the model that correctly looks at 
population values of the hazard, while the model with a frailty component 
captures the hazards at the individual level. That is, provided you believe 
your model: that the unobserved component is gamma distributed, that it is 
uncorrelated with your observed variables, that the effects of the observed 
variables are correctly specified, etc. etc. It is no coincidence that 
robust standard errors are called robust, implying that other models are 
less robust. (Though I still think the name "robust" suggests more 
robustness than it can deliver; but that is another issue.)
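The individual-versus-population point can be illustrated with a small simulation (this is a sketch with made-up parameter values, not Stata's -streg- internals): frailties are drawn from a gamma distribution with mean 1 and variance theta, and each individual's hazard is the baseline hazard multiplied by that frailty.

```python
import math
import random

random.seed(1)
theta = 0.5        # frailty variance (hypothetical value)
base_hazard = 0.9  # conditional hazard for a reference individual (hypothetical)

# Gamma frailty with mean 1 and variance theta:
# shape = 1/theta, scale = theta  =>  mean = 1, variance = theta.
frailties = [random.gammavariate(1 / theta, theta) for _ in range(100_000)]

# Individual hazards v_i * h exceed 1 for frail individuals...
max_individual_hazard = max(v * base_hazard for v in frailties)

# ...but the implied individual *probability* of an event within one
# time unit, 1 - exp(-v * h), is still bounded by 1.
max_individual_prob = max(1 - math.exp(-v * base_hazard) for v in frailties)

# At t = 0 the population (marginal) hazard equals the baseline,
# because the frailty distribution has mean 1.
pop_hazard_t0 = base_hazard * sum(frailties) / len(frailties)

print(max_individual_hazard)  # well above 1: valid, it is a rate
print(max_individual_prob)    # below 1: probabilities stay bounded
print(pop_hazard_t0)          # close to base_hazard
```

So individual hazards above 1 are not a defect of the frailty model; they are rates for frail individuals, and any probability you derive from them remains in [0, 1].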

Hope this helps,
Maarten 

-----------------------------------------
Maarten L. Buis
Department of Social Research Methodology 
Vrije Universiteit Amsterdam 
Boelelaan 1081 
1081 HV Amsterdam 
The Netherlands

visiting address:
Buitenveldertselaan 3 (Metropolitan), room Z434 

+31 20 5986715

http://home.fsw.vu.nl/m.buis/
-----------------------------------------



*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/



© Copyright 1996–2014 StataCorp LP   |   Terms of use   |   Privacy   |   Contact us   |   What's new   |   Site index