The Stata listserver

Re: st: maximum likelihood procedures for oprobit


From   Richard Williams <[email protected]>
To   [email protected]
Subject   Re: st: maximum likelihood procedures for oprobit
Date   Thu, 08 Dec 2005 11:22:46 -0500

At 01:29 AM 12/8/2005, Sunhwa Lee wrote:
Thanks...I see your point. The reason I used auto.dta is just to save some
time, as it takes a while to converge with my dataset. I know my program
can produce the same estimates as oprobit as far as the non-dummies go.
However, if my program were written in exactly the same way as
the "oprobit" code, shouldn't the estimates be identical for the dummy
variables (which are of main interest) as well?
First off, why do you want to write your own program when Stata already has one? Even if you get your program to work, Stata's will likely be quicker, more efficient, and less likely to run into problems.

Second, even if you have the formulas right, that doesn't mean your code is the same as oprobit's. Both programs go through an iterative procedure, and some procedures work better than others.


It looks like my sample dataset will illustrate the problem better. Below
is test code with my sample data, and the data is attached. This time,
Luckily, the dataset was not attached!  Attachments are frowned on at Statalist.

the difference in the dummies across the three models (oprobit, an ml program
with ml's default tolerance levels, and an ml program with Stata's internal
tolerance levels) is more drastic. Furthermore, the iteration processes are a
bit different. While oprobit converges nicely, the other two models
encounter "not concave" messages during iteration.
Try adding the -difficult- option to your -ml maximize- commands, e.g.

ml maximize, difficult

According to the help, "difficult specifies that a different stepping algorithm be used in nonconcave regions. There is no guarantee that difficult will work better than the default; sometimes it is better, and sometimes it is worse. You should use the difficult option only when the default stepper declares convergence and the last iteration is "not concave", or when the default stepper is repeatedly issuing "not concave" messages and only producing tiny improvements in the log likelihood."

You could also try playing around with the technique option. Type -help maximize- for more details.
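For instance (a sketch only, assuming your likelihood evaluator is an lf-method program called myoprobit_lf and that y, x1, and x2 stand in for your own variables), you could request the BFGS algorithm when setting up the model:

ml model lf myoprobit_lf (xb: y = x1 x2, nocons) /cut1 /cut2, technique(bfgs)
ml maximize

Note that -nocons- is used because, as in -oprobit-, the constant is absorbed into the cutpoints. technique() also accepts combinations such as technique(nr bhhh), which switches between algorithms across iterations.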

In the case of the auto data, I think the main problem was extreme multicollinearity. You had 74 cases and almost 20 dummies, with some dummies having only 1 or 2 cases coded 1. I don't know what your current data is like, but you may need to combine some categories and create fewer dummies if you want things to work well.
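For example (with hypothetical variable names d1-d20), you could tabulate the dummies to see how sparse they are and then collapse the rare ones into a single indicator:

tab1 d1-d20
gen d_rare = (d17==1 | d18==1 | d19==1)
drop d17 d18 d19

Here d17-d19 are meant to be the dummies with only 1 or 2 cases coded 1; with so few positive cases, their coefficients are barely identified and can send the maximizer into nonconcave regions.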

-------------------------------------------
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
FAX: (574)288-4373
HOME: (574)289-5227
EMAIL: [email protected]
WWW (personal): http://www.nd.edu/~rwilliam
WWW (department): http://www.nd.edu/~soc
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/



