Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


From: "Eric Uslaner" <euslaner@gvpt.umd.edu>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: two-tailed tests
Date: Thu, 08 Jul 2010 22:10:03 -0400

I am off to Laos for almost two weeks Friday morning, but thought I would make a quick comment on the "fun" piece on one-tailed tests cited by Roger Newson. The authors write: "It is our belief, however, that one-tailed tests should not be used in place of two-tailed tests of significance. One reason for this position is that the user of one-tailed tests is placed in an embarrassing position if extreme results are obtained in the direction opposite to the one expected. . . . [I]n almost all situations in the behavioural sciences, extreme results in either direction are of interest. Even if results in one direction are inconsistent with a given theory, they may suggest new lines of thought. Also, theories change over time, but published results remain immutable on the printed page. Those perusing the professional journals (perhaps years after the article was written) should not be bound by what theory the original researcher happened to believe . . . [E]xtreme results in the *opposite* direction invariably have important implications for the state of scientific knowledge. . . . If a one-tailed test is used and results are obtained in the *opposite* direction which would have been significant, the experiment should be repeated before conclusions are drawn."

Any graduate student who wrote this for a research design or statistics course in political science or economics might be asked to leave the program (quite justifiably). If you found an extreme result in the wrong direction, you would be better advised to check your data for errors or your model for very high levels of multicollinearity. If someone found that strong Republican party identifiers were much more likely than strong Democrats to vote for the Democratic candidate, no one would give that finding any credibility no matter what a two-tailed test showed. The same would hold for a model in economics that showed a strong negative relationship between investment in education and economic growth.
Of course, those who put such faith in two-tailed tests would say: You never know. Well, you do. That's the role of theory. Now I don't know what goes on substantively (or methodologically) in, e.g., the biological sciences. It seems as if many people are very much concerned with the null hypothesis. In the social sciences, we learn that the null hypothesis is generally uninteresting. When it is interesting, as in my own work on democracy and corruption, it is to debunk the argument that democracy leads to less corruption (with the notion that democracy might lead to more corruption seen as not worth entertaining seriously). So again, one would use a one-tailed test and expect that there would be no positive relation between democratization and lack of corruption.

Of course, Nick is right that graphics often tell a much better story. But that is not the issue here. Two-tailed tests are largely an admission that you are going fishing. They are the statistical equivalent of stepwise regression (http://www.rand.org/pubs/papers/P4260/).

Ric Uslaner

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
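The arithmetic behind the debate can be sketched numerically. The following Python snippet (an editorial illustration, not part of the original post; the z statistic of 2.1 is hypothetical) shows the two facts both sides rely on: when the estimate falls in the theoretically predicted direction, the one-tailed p-value is exactly half the two-tailed one, and when it falls in the opposite direction, the one-tailed test cannot come close to significance:

```python
import math

def norm_sf(z):
    """Upper-tail probability P(Z > z) of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical example: a z statistic of 2.1, where theory predicts a
# positive effect (H1: coefficient > 0).
z = 2.1
p_two = 2 * norm_sf(abs(z))   # two-tailed p-value
p_one = norm_sf(z)            # one-tailed p-value in the predicted direction

# In the hypothesized direction, the one-tailed p is half the two-tailed p,
# which is why a directional theory buys a more powerful test.
assert abs(p_one - p_two / 2) < 1e-12

# Had the estimate landed on the wrong side (z = -2.1), the one-tailed
# p-value would be near 1 -- the "embarrassing position" the quoted
# authors worry about, and the result Uslaner says should instead send
# you back to check the data and the model.
p_wrong_side = norm_sf(-2.1)
print(p_two, p_one, p_wrong_side)
```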

**Follow-Ups**:
- **Re: st: two-tailed tests**, *From:* David Bell <dcbell@iupui.edu>
- **st: RE: two-tailed tests**, *From:* "Mak, Timothy" <timothy.mak07@imperial.ac.uk>
