Re: st: MIXLOGIT: marginal effects
Steve Samuels <email@example.com>
Thu, 9 Feb 2012 17:28:45 -0500
Richard Williams said:
"I was wondering where this support for the LPM was coming from! However, when you compare LPM with logit/AMEs, in my experience the AMEs are often similar to the LPM estimates, at least if the model isn't too complicated. Perhaps that is the basis of the argument for those who say that if you are going to use marginal effects after logit, you might as well use LPM?"
Actually, support for LPMs has a much longer pedigree. I first saw them mentioned in D. R. Cox's book Analysis of Binary Data (1970). I no longer have that edition, but the second edition, by Cox and Snell (1989), repeats the relevant section and states (p. 22) that the linear and arcsine transformations of probabilities agree with the probit and logit "reasonably well when the probability of success is between 0.1 and 0.9." There is a reference to Naylor (1963; the book misprints the year as "1964"). When the circumstances have been right, I've looked for well-fitting linear probability models and, occasionally, have found them. They've served well in some difference-in-differences studies where, otherwise, the target logistic parameter would be a ratio of odds ratios. Nowadays AMEs allow one to interpret findings on the probability scale, so I look to them first.
Reference: Naylor, A. F. (1963). Comparisons of regression constants fitted by maximum likelihood to four common transformations of binomial data. Annals of Human Genetics, 27(3), 241-246.
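Williams's observation above -- that logit AMEs often track LPM slopes when the predicted probabilities stay in the interior of (0, 1) -- is easy to check on simulated data. The sketch below is plain Python/NumPy rather than Stata, and the data-generating process, coefficients, and sample size are my own illustrative choices, not anything from the thread. It fits a logit by Newton-Raphson, computes the average marginal effect of x, and compares it with the OLS slope from the corresponding LPM.

```python
import numpy as np

# Illustrative setup (my choices, not from the thread): a single standard
# normal covariate and a true logit with intercept 0 and slope 0.5, so the
# success probabilities stay mostly between 0.1 and 0.9.
rng = np.random.default_rng(42)
n = 5000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-0.5 * x))
y = (rng.random(n) < p_true).astype(float)

X = np.column_stack([np.ones(n), x])  # design matrix with intercept

# Fit the logit by Newton-Raphson on the log-likelihood.
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                     # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

# Average marginal effect of x: mean over the sample of b * p_i * (1 - p_i).
p = 1 / (1 + np.exp(-X @ beta))
ame = np.mean(p * (1 - p)) * beta[1]

# LPM: ordinary least squares of y on x; the slope is the marginal effect.
lpm = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"logit AME:  {ame:.4f}")
print(f"LPM slope:  {lpm[1]:.4f}")
```

With probabilities mostly between 0.1 and 0.9, the two numbers agree closely; shift the intercept so probabilities pile up near 0 or 1 and the agreement degrades, which matches the Cox-Snell caveat quoted above.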
On Feb 8, 2012, at 9:08 PM, Nick Cox wrote:
I can readily believe in Kit's colleague's counterexample without even
seeing it. But if sometimes being quite the wrong model to fit is a
fatal indictment, then nothing goes.
I was responding to Clive's statement "There is no justification for
the use of this model
_at all_ when regressing a binary dependent variable on a set of
regressors." I think that is too extreme. I can't readily imagine many
situations in which I would prefer a linear probability model to a
logit model, but I still think it's too extreme.
On Wed, Feb 8, 2012 at 8:22 PM, Christopher Baum <firstname.lastname@example.org> wrote:
> Clive said
> However, both of you, IMVHO, are wrong, wrong, wrong about the linear
> probability model. There is no justification for the use of this model
> _at all_ when regressing a binary dependent variable on a set of
> regressors. Pampel's (2000) excellent introduction on logistic
> regression spent the first nine or so pages carefully explaining just
> why it is inappropriate (imposing linearity on a nonlinear
> relationship; predicting values out of range; nonadditivity; etc).
> Since when was it in vogue to advocate its usage? I'm afraid that I
> don't really understand this.
> I don't understand it either, and I agree wholeheartedly with the sentiment. The undergrad textbook from which I teach Econometrics,
> Jeff Wooldridge's excellent book, has a section on the LPM; I skip it and tell students to stay away from it. Unfortunately, much of the
> buzz about the usefulness of the LPM has arisen from the otherwise-excellent book by Angrist and Pischke, Mostly Harmless
> Econometrics, in which they make strong arguments for the use of the LPM as an alternative to logistic regression.
> One of my econometrician colleagues has come up with a nifty example of how, in a very simple context involving an LPM with
> a binary treatment indicator, the LPM gets the sign wrong! A logistic regression, even though it fails to deal with any further issues
> regarding the treatment variable, gets the right sign.