st: RE: two-tailed tests

From   "Mak, Timothy" <>
To   "" <>
Subject   st: RE: two-tailed tests
Date   Fri, 9 Jul 2010 16:17:04 +0100

I can't remember who said this, but a test is not just a test of the null hypothesis as it is often written, e.g. mu == 0; it is also a test of the entire model. So in a two-tailed test, an extreme result in either direction suggests that either mu == 0 is wrong, or the model is wrong (or both). But in a one-tailed test, you are saying that if you get an extreme result in the direction opposite to the one you expect, you ignore it completely, since it is still "consistent" with the null hypothesis and the model. I think it is in this sense that users of the one-tailed test are in danger (if they take the assumptions of the one-tailed test seriously - which of course no one does).
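To put a number on that point, here is a minimal sketch (the z statistic is hypothetical, and a standard normal reference distribution is assumed): an extreme result in the direction opposite to the one expected gets a one-tailed p-value near 1, while the two-tailed test flags the same result as highly significant.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib math.erf)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical z statistic: an extreme result in the direction OPPOSITE
# to the one the one-tailed test expected (the researcher expected mu > 0).
z = -3.5

# Upper one-tailed p-value, P(Z >= z): the extreme negative result is
# treated as "consistent" with the null, so p is close to 1.
p_one_tailed = 1.0 - normal_cdf(z)

# Two-tailed p-value, P(|Z| >= |z|): the same result is flagged as highly
# significant, prompting a second look at the data or the model.
p_two_tailed = 2.0 * (1.0 - normal_cdf(abs(z)))

print(f"one-tailed p = {p_one_tailed:.4f}")
print(f"two-tailed p = {p_two_tailed:.4f}")
```

Under the one-tailed framing the result is simply ignored; under the two-tailed framing it is evidence against the null or the model, which is the sense of the remark above.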


-----Original Message-----
From: [] On Behalf Of Eric Uslaner
Sent: 09 July 2010 03:10
Subject: st: two-tailed tests

I am off to Laos for almost two weeks Friday morning, but thought I
would make a quick comment on the "fun" piece on one-tailed tests cited
by Roger Newson.  The authors write: 
"It is our belief, however, that one-tailed tests should
not be used in place of two-tailed tests of
significance. One reason for this position is that the
user of one-tailed tests is placed in an embarrassing
position if extreme results are obtained in the
direction opposite to the one expected. . . . [I]n almost
all situations in the behavioural sciences, extreme
results in either direction are of interest. Even if
results in one direction are inconsistent with a given
theory, they may suggest new lines of thought. Also,
theories change over time, but published results
remain immutable on the printed page. Those
perusing the professional journals (perhaps years
after the article was written) should not be bound by
what theory the original researcher happened to
believe . . . [E]xtreme results in the *opposite*
direction invariably have important implications for
the state of scientific knowledge. . . . If a one-tailed
test is used and results are obtained in the *opposite*
direction which would have been significant, the
experiment should be repeated before conclusions
are drawn."

Any graduate student who wrote this for a research design or statistics
course in political science or economics might be asked to leave the
program (quite justifiably).  If you found an extreme result in the
wrong direction, you would be better advised to check your data for
errors or your model for very high levels of multicollinearity.  If
someone found that strong Republican party identifiers are much more
likely than strong Democrats to vote for the Democratic candidate, no
one would give that finding any credibility no matter what a two-tailed
test showed.  The same would hold for a model in economics that showed a
strong negative relationship between investment in education and
economic growth.  Of course, those who put such faith in two-tailed
tests would say: You never know.  Well, you do.  That's the role of theory.

Now I don't know what goes on substantively (or methodologically) in
the biological sciences, for example; it seems as if many people are
very much concerned with the null hypothesis.  In the social sciences,
we learn that the null hypothesis is generally uninteresting.  When it
is interesting, as in my own work on democracy and corruption, it is to
debunk the argument that democracy leads to less corruption (with the
notion that democracy might lead to more corruption seen as not worth
entertaining seriously).  So again, one would use a one-tailed test and
expect that there would be no positive relation between democratization
and lack of corruption.
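A small sketch of the practical difference when a theory is directional, as in the democracy-and-corruption example above (the z statistic and significance threshold are illustrative, with a standard normal reference distribution assumed): a moderately extreme result in the predicted direction can be significant one-tailed but not two-tailed.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib math.erf)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_values(z, expected_sign=+1):
    """One-tailed and two-tailed p-values for a directional hypothesis.

    expected_sign is +1 when theory predicts a positive effect and -1
    when it predicts a negative one.  All inputs here are illustrative.
    """
    p_one = 1.0 - normal_cdf(expected_sign * z)   # tail in the predicted direction
    p_two = 2.0 * (1.0 - normal_cdf(abs(z)))      # both tails
    return p_one, p_two

# A z statistic of 1.80 in the theoretically expected direction:
# significant at the 0.05 level one-tailed, but not two-tailed.
p_one, p_two = p_values(1.80, expected_sign=+1)
print(f"one-tailed p = {p_one:.4f}, two-tailed p = {p_two:.4f}")
```

When the result lands in the predicted direction the one-tailed p-value is half the two-tailed one, which is exactly why the choice between them carries the theoretical commitment discussed in this thread.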

Of course, Nick is right that graphics often tell a much better story. 
But that is not the issue here.  Two-tailed tests are largely an
admission that you are going fishing. They are the statistical
equivalent of stepwise regression.

Ric Uslaner

*   For searches and help try:
