Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
From | "Mak, Timothy" <timothy.mak07@imperial.ac.uk> |
To | "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu> |
Subject | st: RE: two-tailed tests |
Date | Fri, 9 Jul 2010 16:17:04 +0100 |
I can't remember who said this, but a test is not just a test of the null hypothesis as often written, e.g. mu == 0. Rather, it is a test of the entire model as well. So in a two-tailed test, an extreme result in either direction suggests that either mu == 0 is wrong, or the model is wrong (or both). But in a one-tailed test, you are saying that if you get an extreme result in the direction opposite to the one you expect, you completely ignore it, as it is still "consistent" with the null hypothesis and the model. I think it is in this sense that users of the one-tailed test are in danger (if they take the assumptions of the one-tailed test seriously - which of course no one does).

Tim

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Eric Uslaner
Sent: 09 July 2010 03:10
To: statalist@hsphsun2.harvard.edu
Subject: st: two-tailed tests

I am off to Laos for almost two weeks Friday morning, but thought I would make a quick comment on the "fun" piece on one-tailed tests cited by Roger Newson. The authors write:

"It is our belief, however, that one-tailed tests should not be used in place of two-tailed tests of significance. One reason for this position is that the user of one-tailed tests is placed in an embarrassing position if extreme results are obtained in the direction opposite to the one expected. . . . [I]n almost all situations in the behavioural sciences, extreme results in either direction are of interest. Even if results in one direction are inconsistent with a given theory, they may suggest new lines of thought. Also, theories change over time, but published results remain immutable on the printed page. Those perusing the professional journals (perhaps years after the article was written) should not be bound by what theory the original researcher happened to believe . . .
[E]xtreme results in the *opposite* direction invariably have important implications for the state of scientific knowledge. . . . If a one-tailed test is used and results are obtained in the *opposite* direction which would have been significant, the experiment should be repeated before conclusions are drawn."

Any graduate student who wrote this for a research design or statistics course in political science or economics might be asked to leave the program (quite justifiably). If you found an extreme result in the wrong direction, you would be better advised to check your data for errors or your model for very high levels of multicollinearity. If someone found that strong Republican party identifiers are much more likely than strong Democrats to vote for the Democratic candidate, no one would give that finding any credibility no matter what a two-tailed test showed. The same would hold for a model in economics that showed a strong negative relationship between investment in education and economic growth. Of course, those who put such faith in two-tailed tests would say: you never know. Well, you do. That's the role of theory.

Now I don't know what goes on substantively (or methodologically) in the biological sciences, but it seems as if many people are very much concerned with the null hypothesis. In the social sciences, we learn that the null hypothesis is generally uninteresting. When it is interesting, as in my own work on democracy and corruption, it is to debunk the argument that democracy leads to less corruption (with the notion that democracy might lead to more corruption seen as not worth entertaining seriously). So again, one would use a one-tailed test and expect that there would be no positive relation between democratization and lack of corruption.

Of course, Nick is right that graphics often tell a much better story. But that is not the issue here. Two-tailed tests are largely an admission that you are going fishing.
They are the statistical equivalent of stepwise regression (http://www.rand.org/pubs/papers/P4260/).

Ric Uslaner

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
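Tim's point about opposite-direction results can be made concrete with a small numerical sketch (in Python rather than Stata, purely for illustration; the z value is a hypothetical example, not from either post). With an observed z statistic of -3, the one-tailed p-value for the expected positive direction is near 1 - the extreme result is treated as "consistent" with the null - while the two-tailed p-value flags it as highly significant.

```python
from statistics import NormalDist

# Hypothetical example: we expected a positive effect (H1: mu > 0),
# but the observed z statistic came out strongly negative.
z = -3.0
norm = NormalDist()  # standard normal distribution

# One-tailed p-value for the expected direction, P(Z >= z):
# an extreme result in the "wrong" direction looks perfectly
# "consistent" with the null hypothesis.
p_one_tailed = 1 - norm.cdf(z)

# Two-tailed p-value, P(|Z| >= |z|):
# an extreme result in either direction counts as evidence.
p_two_tailed = 2 * (1 - norm.cdf(abs(z)))

print(round(p_one_tailed, 4))  # ~0.9987
print(round(p_two_tailed, 4))  # ~0.0027
```

The same asymmetry is what the quoted authors call the "embarrassing position": the one-tailed procedure, taken literally, says to ignore a result the two-tailed test would call decisive.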