



RE: st: too good to be true : lr test in mlogit?


From   Nick Cox <n.j.cox@durham.ac.uk>
To   "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject   RE: st: too good to be true : lr test in mlogit?
Date   Fri, 13 May 2011 12:50:28 +0100

An equivalent argument is that, exactly or to a good approximation, most kinds of sampling error scale inversely with the square root of the sample size. At some large sample size, therefore, biases arising from selection, non-response, or other measurement issues, which typically do not depend on sample size, will swamp sampling error and become what you should worry about most. In fact, in many fields large surveys may have bigger measurement problems than tightly controlled small surveys, so the difference is intensified.
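The scaling argument above can be sketched numerically. This is a minimal illustration (in Python rather than Stata), assuming a hypothetical fixed bias of 0.05 and a population standard deviation of 1; both numbers are made up purely to show where the crossover occurs:

```python
import math

def standard_error(sigma, n):
    # Sampling error of the mean scales as sigma / sqrt(n).
    return sigma / math.sqrt(n)

bias = 0.05   # hypothetical fixed bias from selection or non-response
sigma = 1.0   # hypothetical population standard deviation

for n in (100, 10_000, 1_000_000):
    se = standard_error(sigma, n)
    dominant = "bias" if bias > se else "sampling error"
    print(f"n = {n:>9}: SE = {se:.4f}, {dominant} dominates")
```

At n = 100 the standard error (0.1) exceeds the assumed bias, but by n = 10,000 the standard error has shrunk to 0.01 and the fixed bias dominates, which is the point of the argument: the bias does not shrink as n grows.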

A broader issue is that many statisticians, and not just Bayesians, manage without doing significance tests. See e.g. 

Nelder, John A. 1999. 

