



Re: st: two-tailed tests


From   David Bell <[email protected]>
To   <[email protected]>
Subject   Re: st: two-tailed tests
Date   Fri, 9 Jul 2010 11:52:27 -0400

Statistical tests perform many functions.

When one-tailed is theoretically/philosophically justified: In a Popperian theory-testing world, any result other than positive significance means the theory is disconfirmed.  So in this case, a one-tailed test is exactly appropriate. (Popper, Karl R. 1965. Conjectures and refutations: The growth of scientific knowledge. New York: Harper & Row.)

When one-tailed is NOT theoretically/philosophically justified:  A two-tailed test implies meaning in both tails.  In applied studies that test outcomes, a significant positive result is taken to mean the drug works, non-significance is taken to mean that the drug does not work (more precisely, that there is no evidence that the drug works), and a significant negative result means that the drug is actively harmful.  In this case, a one-tailed test would have obscured the harmfulness of the drug by lumping harm together with non-effectiveness.
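(A minimal Stata sketch of that contrast, using invented numbers: suppose the trial yields t = -2.50 on 98 degrees of freedom for a drug hypothesized to help.  The two-tailed p flags the harmful effect, while the upper one-tailed p simply reads as "no evidence of benefit".)

    * hypothetical trial result: t = -2.50 on 98 df, where H1 was "drug improves outcomes"
    scalar t  = -2.50
    scalar df = 98
    scalar p_two = 2*ttail(df, abs(t))   // two-tailed p, about .014: the harm is detected
    scalar p_one = ttail(df, t)          // upper one-tailed p, about .99: reads as "not effective"
    display "two-tailed p = " %6.3f p_two "   one-tailed p = " %6.3f p_one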

Remember that, viewed sociologically, the purpose of a statistical test for the scientific community is to protect that community from the enthusiasm of researchers.  If a researcher is going to tell us his/her theory is true, we want the chance of being fooled (Type I error, a false positive) to be strictly limited.  As readers and consumers we worry much less about the disappointment of a researcher with a good theory but bad data (Type II error, a false negative).

As a practical matter, many journals insist on two-tailed tests for several reasons.  One reason is that two-tailed tests are conservative (even if the stated probability level is not exactly the operative one, it means that the researcher has only half the chance to fool us).  Another is to discourage researchers from “cherry-picking” close calls – e.g., reporting one-tailed .05 significance instead of two-tailed .10 (“marginal”) significance.  Another is that, in the endeavor of science, any one result is relatively unimportant, so a conservative standard is better for the overall process of science.
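(Again just a sketch of the arithmetic behind the cherry-picking worry: z = 1.645 is exactly .05 one-tailed but only .10 two-tailed.)

    scalar z  = 1.645
    scalar p_one = 1 - normal(z)     // one-tailed p = .050
    scalar p_two = 2*p_one           // two-tailed p = .100
    display "one-tailed p = " %5.3f p_one "   two-tailed p = " %5.3f p_two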
Dave
====================================
David C. Bell
Professor of Sociology
Indiana University Purdue University Indianapolis (IUPUI)
(317) 278-1336
====================================


On Jul 8, 2010, at 10:10 PM, Eric Uslaner wrote:

> ... If you found an extreme result in the
> wrong direction, you would better be advised to check your data for
> errors or your model for very high levels of multicollinearity.  If
> someone found that strong Republican party identifiers are much more
> likely than strong Democrats to vote for the Democratic candidate, no
> one would give that finding any credibility no matter what a two-tailed
> test showed.  The same would hold for a model in economics that showed a
> strong negative relationship between investment in education and
> economic growth.  Of course, those who put such faith in two-tailed
> tests would say: You never know.  Well, you do.  That's the role of
> theory.
> 
> Now I don't know what goes on substantively (or methodologically) in
> the biological sciences, for example.  It seems as if many people are very much
> concerned with the null hypothesis.  In the social sciences, we learn
> that the null hypothesis is generally uninteresting.  When it is
> interesting, as in my own work on democracy and corruption, it is to
> debunk the argument that democracy leads to less corruption (with the
> notion that democracy might lead to more corruption seen as not worth
> entertaining seriously).  So again, one would use a one-tailed test and
> expect that there would be no positive relation between democratization
> and lack of corruption.
> 
> Of course, Nick is right that graphics often tell a much better story. 
> But that is not the issue here.  Two-tailed tests are largely an
> admission that you are going fishing. They are the statistical
> equivalent of stepwise regression
> (http://www.rand.org/pubs/papers/P4260/).
> 
> Ric Uslaner



