On Tue, Apr 9, 2013 at 12:06 AM, Ching Wong
<ching.y.wong@student.adelaide.edu.au> wrote:
>
<snip>
In general, I'm dubious of screening by univariate statistics before
running a model, but if you need to do that, David Hoaglin's advice is
spot-on: don't set the criterion for inclusion too high.
> And I have got the following output.
>
> Iteration 1: deviance = 113.0721
> Iteration 2: deviance = 92.10798
> Iteration 3: deviance = 87.45499
> Iteration 4: deviance = 86.88055
> Iteration 5: deviance = 86.86395
> Iteration 6: deviance = 86.86393
> Iteration 7: deviance = 86.86393
> Generalized linear models                    No. of obs       =        297
> Optimization     : MQL Fisher scoring        Residual df      =        294
>                    (IRLS EIM)                Scale parameter  =          1
> Deviance         = 86.86392755               (1/df) Deviance  =   .2954555
> Pearson          = 311.8670508               (1/df) Pearson   =   1.060772
Note that it's unusual to see such a substantial discrepancy between
the Pearson chi-square and the deviance, which makes me think there's
something up with this model.
> In this case, I can tell var1 is significant in the logistic
> regression model, since it has a p-value = 0.006. However, how can I
> find out the odds ratio or the relative risk for this model? Did I use
> the wrong command?
-help binreg- lists the options, of which there are several, including
one for odds ratios.
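For instance (just a sketch with placeholder names y and var1, not your
actual variables):

. binreg y var1, or
. binreg y var1, rr

The -or- option fits a logit link and reports odds ratios; -rr- fits a
log link and reports risk ratios (relative risk). There are also -hr-
and -rd- options for health ratios and risk differences.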
I think one could make the case for using -glm- pretty much all the
time. Next time I teach categorical data analysis I intend to push
-glm- as the "one-stop shop". That's not quite true, but it's pretty
close, and the fact that the post-estimation is consistent is a big
plus.
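In -glm- terms the same fits look something like this (again a sketch
with placeholder names, not a prescription for this data set):

. glm y var1, family(binomial) link(logit) eform
. glm y var1, family(binomial) link(log) eform
. predict phat, mu

With -eform-, the logit-link fit reports odds ratios and the log-link
fit reports risk ratios, and -predict ..., mu- gives fitted
probabilities after either model.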