
RE: st: RE: identifying perfect outcome predictor

From   "Feiveson, Alan H. (JSC-SK311)" <>
To   <>
Subject   RE: st: RE: identifying perfect outcome predictor
Date   Tue, 6 May 2008 08:46:47 -0500

I think there is some value in having the incomplete estimation results
available after some number of iterations because they can still be used
to form a linear classifier that completely separates the groups. Even
though the magnitude of the coefficients grows arbitrarily and standard
errors are meaningless in such cases, the relative values of the
coefficients appear stable if more than one is nonzero. I wonder if it's
possible to control the estimation when there is perfect separation by
imposing some sort of restriction such as b'b = 1?
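The behavior described above is easy to reproduce outside Stata. Below is a minimal NumPy sketch (not Stata code; the data and function names are illustrative) of unregularized logistic regression on perfectly separated data, fit by plain gradient ascent with no intercept: the coefficient magnitudes grow without bound as iterations continue, but the normalized direction b/||b|| — equivalently, the fit under a b'b = 1 restriction — is essentially unchanged and still separates the groups.

```python
import numpy as np

# Perfectly separated data: y is determined exactly by the sign of a
# linear combination of the covariates (no intercept, for simplicity).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def fit_logit(X, y, iters, lr=0.5):
    """Unregularized logistic regression via plain gradient ascent."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ b)))
        b += lr * X.T @ (y - p) / len(y)
    return b

b_early = fit_logit(X, y, 200)    # stopped "after some number of iterations"
b_late = fit_logit(X, y, 5000)    # allowed to run much longer

# The magnitude keeps growing without bound under separation ...
print(np.linalg.norm(b_early), np.linalg.norm(b_late))

# ... but the direction (the relative values of the coefficients)
# is stable, which is what makes the incomplete results usable
# as a linear classifier.
d_early = b_early / np.linalg.norm(b_early)
d_late = b_late / np.linalg.norm(b_late)
print(d_early, d_late)
```

This is only an illustration of the phenomenon, not a claim about how Stata's maximizer behaves internally.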

Al Feiveson

-----Original Message-----
[] On Behalf Of Richard
Sent: Monday, May 05, 2008 6:14 PM
Subject: Re: st: RE: identifying perfect outcome predictor

At 01:13 PM 5/5/2008, Newson, Roger B wrote:
>I personally use -glm- (with the options -link(logit) family(bin)-) 
>instead of -logit-. That way, the offending parameters are allowed to 
>"converge" to plus or minus infinity without an error message. And the 
>guilty parameters are then displayed for all to read.
>I hope this helps.

Interesting.  Now, can that be considered a "bug" or a "feature" of glm?
My own oglm and gologit2 programs give these phenomenally high standard
errors when you have perfect outcome predictors, mostly because I've
never figured out how to trap such situations and issue a warning or
error message.
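For the single-predictor case, one pre-estimation check is straightforward: a categorical covariate level perfectly predicts the outcome exactly when the corresponding cell of the level-by-outcome crosstab has all observations on one side (a zero cell). The sketch below is NumPy rather than Stata, with illustrative names; it only flags separation by one categorical covariate at a time, not general multivariate separation.

```python
import numpy as np

def perfect_predictor_levels(X, y, names):
    """Return (name, level, predicted outcome) for each covariate level
    whose observations all share the same binary outcome, i.e. a zero
    cell in the level-by-outcome crosstab. Intended for categorical
    covariates; a continuous covariate would trigger false positives."""
    flagged = []
    for j, name in enumerate(names):
        for level in np.unique(X[:, j]):
            y_cell = y[X[:, j] == level]
            # Require more than one observation so singleton cells
            # are not reported as perfect predictors.
            if len(y_cell) > 1 and y_cell.min() == y_cell.max():
                flagged.append((name, level, int(y_cell[0])))
    return flagged

# Level 2 of x1 always occurs with y = 1, so it predicts y perfectly.
X = np.array([[0], [0], [1], [1], [2], [2]])
y = np.array([0, 1, 0, 1, 1, 1])
flagged = perfect_predictor_levels(X, y, ["x1"])
print(flagged)
```

A check along these lines could be run before estimation to issue a warning, though it would not catch separation that only arises from a combination of several covariates.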

Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
HOME:   (574)289-5227
EMAIL:  Richard.A.Williams.5@ND.Edu

*   For searches and help try:

© Copyright 1996–2017 StataCorp LLC