
From: Shige Song <shigesong@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: Frequency weight in xtlogit and xtcloglog
Date: Thu, 11 Aug 2005 13:08:57 +0800

Dear All,

A very nice feature of the -logit- and -cloglog- estimation commands is their support for frequency weights. When estimating a discrete-time survival model, one can first generate "person-year" data by expanding (or -stsplit-ting) the original data set, collapse the expanded data using -strate-, and then estimate the models on a greatly reduced data set (essentially an N-way contingency table). Frequency weights make this possible for both -logit- and -cloglog-. Last night I realized that this is not the case for -xtlogit- and -xtcloglog-: both support only importance weights.

Here is what I want to do. I have a number of (say, a hundred) randomly selected communities; within each community I have time to an event (say, marriage) for a number of individuals, along with a number of other categorical covariates. Within each community, I collapse the individual-level data into a contingency-table data set using -stsplit- and -strate-. I can then estimate a piecewise-constant model using -poisson- with the exposure() option, a discrete-time proportional-odds model using -logit- with frequency weights, or a discrete-time proportional-hazards model using -cloglog- with frequency weights. Now I put all 100 communities together (by appending them) and try to estimate random-effects versions of the piecewise-constant, discrete-time proportional-odds, and discrete-time proportional-hazards models. It appears that only the random-effects piecewise-constant model is possible in Stata, because -xtpoisson- does not need frequency weights.

Here are my questions:

1) Is the above understanding correct?

2) Are there ways to get around this? I am especially interested in the offset() option of -xtlogit- and -xtcloglog-, and wonder what would happen if I put the frequency variable (or its log) there. I have never quite understood what offset variables do in a binary-regression context.

3) Does GLLAMM have the same problem (i.e., not allowing frequency weights in binary regression)?

4) Are there other approaches that achieve the same goal? (A discrete-time hazard model on the individual-level records is not possible in my case because the data set is too big.)

Thank you very much!

Shige

*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
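[For concreteness, the single-level versions of the three models on the collapsed (contingency-table) data might look like the sketch below. The variable names -event-, -n-, -deaths-, -pyears-, -interval-, and -x1- are illustrative only, not from the original post.]

```stata
* Sketch: single-level discrete-time models on collapsed data.
* event  = 0/1 outcome for each table cell, n = cell frequency
* deaths = events per cell, pyears = exposure per cell
logit   event  i.interval x1 [fweight = n]        // prop.-odds model
cloglog event  i.interval x1 [fweight = n]        // prop.-hazards model
poisson deaths i.interval x1, exposure(pyears)    // piecewise-constant model
```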
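[For reference on question 2: an offset is a covariate that enters the linear predictor with its coefficient constrained to 1, whereas a frequency weight multiplies each observation's log-likelihood contribution by its count, so the two are not interchangeable. A sketch, with the variable -log_n- hypothetical:]

```stata
* offset() shifts the linear predictor by log_n with coefficient fixed at 1:
*   Pr(y=1|x) = invlogit(x*b + log_n)
* A frequency weight [fweight = n] instead multiplies each observation's
* log-likelihood contribution by n, so offset(log_n) would not reproduce it.
logit event x1, offset(log_n)
```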
