
Re: st: too good to be true : lr test in mlogit?


From   Maarten Buis <maartenlbuis@gmail.com>
To   statalist@hsphsun2.harvard.edu
Subject   Re: st: too good to be true : lr test in mlogit?
Date   Fri, 13 May 2011 12:04:31 +0200

On Fri, May 13, 2011 at 10:45 AM, John Litfiba wrote:
> I would be definitely interested if by chance you have in mind a
> paper that discusses the large-sample side effects on p-values that
> you mention

The argument is straightforward: with larger sample sizes we are able
to detect smaller and smaller effects. If we included a variable in
our model, then it is extremely implausible that the effect of that
variable is exactly zero (i.e. that the null hypothesis is true). If
we fail to reject the null hypothesis, that just means the effect was
so small that the dataset was not large enough to detect it. So by
getting ever larger samples we will start to find ever more effects,
but they will be so small that they are substantively irrelevant
(even though they are statistically "significant").
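
A rough illustration (the true coefficient of 0.02 and the sample
sizes below are arbitrary, chosen purely for illustration):

* simulate a binary outcome with a tiny but nonzero effect of x
clear
set seed 12345
set obs 1000000
generate x = rnormal()
generate y = rbinomial(1, invlogit(0.02*x))
logit y x
* with one million observations the z-test on x will typically be
* highly "significant", even though a log-odds effect of 0.02 is
* substantively trivial; rerun with -set obs 1000- and the same
* effect will usually go undetected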

One discussion of this is:
Raftery, Adrian E. 1995. "Bayesian Model Selection in Social
Research." Sociological Methodology, 25: 111-163.
Also see the responses to this article that appeared in the same issue.

Hope this helps,
Maarten

--------------------------
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Germany


http://www.maartenbuis.nl
--------------------------

