Statalist



RE: st: Re: Brant test


From   Gao LIU <[email protected]>
To   <[email protected]>
Subject   RE: st: Re: Brant test
Date   Sat, 18 Oct 2008 13:09:28 -0400

Thank you, all. It has been very helpful. Best Gao


-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Maarten buis
Sent: Saturday, October 18, 2008 12:05 PM
To: [email protected]
Subject: RE: st: Re: Brant test

In the series of posts starting here, Richard and I explored the
properties of the Brant test and -omodel-:
http://www.stata.com/statalist/archive/2008-04/msg00660.html

I think the main conclusion was that these tests do not perform well
in the presence of small categories (I initially attributed this to the
number of categories, but Rich later pointed out that it had more to
do with the fact that my simulation created several very sparse
categories). 

The easiest way of handling small categories is to merge them with
neighboring categories. If that is not an option, you might try to
bootstrap -omodel- to get an empirical estimate of the sampling
distribution of the test statistic and use that to compute the p-value,
as in the example below:

*------------------ begin example ----------------------
sysuse auto, clear
recode rep78 (1/2=3)                  // merge the sparse categories 1 and 2 into 3
omodel logit rep78 foreign mpg        // test statistic in the observed data

capture program drop bootomodel
program define bootomodel, rclass
	omodel logit rep78 foreign mpg
	return scalar chi2 = $S_1         // -omodel- leaves its chi2 in the global S_1
end

keep if !missing(rep78, foreign, mpg) // fix the estimation sample before resampling
tempfile res
bootstrap chi2=r(chi2), saving(`res') reps(10000): bootomodel
use `res', clear
local chi2 = _b[chi2]                 // the statistic in the original sample
hist chi2
count if chi2 > `chi2' & chi2 < .     // replicates at least as extreme as observed
local rej = r(N)
count if chi2 < .                     // number of successful replications
di "p is " `rej'/r(N)
*-------------------------- end example -----------------------
(For more on how to use the examples/simulations I send to 
Statalist, see http://home.fsw.vu.nl/m.buis/stata/exampleFAQ.html )
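
The p-value computed at the end of the example is simply the share of
bootstrap replicates at least as large as the observed statistic. For
anyone who finds the Stata locals hard to follow, here is a minimal
sketch of that same computation in Python; the numbers are made up for
illustration and are not -omodel- output:

```python
# Empirical bootstrap p-value: the proportion of bootstrap
# statistics strictly larger than the observed statistic,
# counting only replications that succeeded (non-missing).
# The values below are invented for illustration.
def bootstrap_pvalue(observed, boot_stats):
    valid = [s for s in boot_stats if s is not None]  # drop failed replications
    exceed = sum(1 for s in valid if s > observed)
    return exceed / len(valid)

boot = [1.2, 3.4, 0.8, 5.1, 2.2, 4.7, 0.5, 6.3, 1.9, 2.8]
print(bootstrap_pvalue(2.5, boot))  # 0.5: 5 of 10 replicates exceed 2.5
```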

Hope this helps,
Maarten

--- Richard Williams <[email protected]> wrote:

> At 08:24 AM 10/18/2008, Gao LIU wrote:
> >The results are similar, but not the same. What might cause the
> difference?
> >
> >Gao
> 
> Brant and gologit2 take different approaches.  They generally are 
> pretty close on the global test but might differ on tests of 
> individual variables.  You can also try out -omodel- from SSC.
> 
> The way to do a global test in gologit2 is something like
> 
> gologit2 y x1 x2 x3, pl store(m1)
> gologit2 y x1 x2 x3, npl store(m2)
> lrtest m1 m2, stats
> 
> For more on gologit2, see
> 
> http://www.nd.edu/~rwilliam/gologit2/index.html
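
[As an aside on the quoted -lrtest- example: the statistic it reports
is just twice the gap between the two log likelihoods, chi-squared
distributed with df equal to the difference in the number of estimated
parameters. A minimal sketch in Python, with invented log likelihoods:]

```python
# Likelihood-ratio test statistic comparing a constrained model
# (here, the parallel-lines gologit2 fit) with an unconstrained one:
# LR = 2 * (ll_unconstrained - ll_constrained).
# The log likelihoods below are hypothetical, for illustration only.
def lr_statistic(ll_constrained, ll_unconstrained):
    return 2.0 * (ll_unconstrained - ll_constrained)

ll_pl = -203.4   # hypothetical log likelihood, parallel-lines model (m1)
ll_npl = -198.1  # hypothetical log likelihood, unconstrained model (m2)
print(round(lr_statistic(ll_pl, ll_npl), 1))  # 10.6
```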


-----------------------------------------
Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands

visiting address:
Buitenveldertselaan 3 (Metropolitan), room N515

+31 20 5986715

http://home.fsw.vu.nl/m.buis/
-----------------------------------------

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/



