Statalist The Stata Listserver


st: RE: RE: RE: Significance stars

From   "Newson, Roger B" <>
To   <>
Subject   st: RE: RE: RE: Significance stars
Date   Sun, 18 Mar 2007 19:35:50 -0000

For what it's worth, I personally often use significance starring of
P-values according to its original rationale, i.e. as indicating a
footnote at the bottom of the table. In that footnote, I state that
these P-values are in the discovery set of a multiple-test procedure,
such as the Simes procedure controlling the false discovery rate at
0.05, or the Holm procedure controlling the family-wise error rate at
0.05. That rationale arguably makes sense if a lot of P-values are being
reported, and we might expect 5 percent of them to be nominally
significant (P<=0.05), even if all null hypotheses are true.
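For readers unfamiliar with the two procedures mentioned above, here is a minimal sketch in Python (not Roger's own code, and the function names are my own). Holm is a step-down procedure that controls the family-wise error rate; Simes/Benjamini-Hochberg is a step-up procedure that controls the false discovery rate. Both take a list of P-values and return which hypotheses land in the discovery set:

```python
def holm(pvals, alpha=0.05):
    """Holm step-down procedure: controls the family-wise error rate
    at alpha. Returns a list of booleans (True = in the discovery set),
    in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest P-value against alpha/(m-rank)
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first failure
    return reject

def simes_bh(pvals, alpha=0.05):
    """Simes/Benjamini-Hochberg step-up procedure: controls the false
    discovery rate at alpha. Returns booleans in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose P-value clears its threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    reject = [False] * m
    for i in order[:k]:  # reject the k smallest P-values
        reject[i] = True
    return reject
```

For example, with P-values 0.001, 0.02, 0.04, 0.2 at alpha = 0.05, Holm stars only the first, while Simes/BH stars the first two — illustrating how the FDR-controlling discovery set is at least as large as the FWER-controlling one.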


Roger Newson
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322

Opinions expressed are those of the author, not of the institution.

-----Original Message-----
[] On Behalf Of Nick Cox
Sent: 18 March 2007 16:46
Subject: st: RE: RE: Significance stars

Thanks for your testimony. Naturally, real life 
is complicated and short. I too sometimes use 
P < 0.05 as an indicator of what's worth taking
seriously, although always in combination with
other criteria. And I too sometimes compromise 
reluctantly with reviewers for the sake of getting
a paper published. 

All I can say on the last is that the Stata Journal
disapproves very mightily! 

What I find interesting is the apparent lack of 
_any_ good reason for starring. The social facts that 
many people do it and that a few people even insist on 
it are not in question. It's the rationale I seek. 


Anderson, Bradley J
> Interesting history regarding the use of * and ** and I 
> strongly agree with your comments.  Unfortunately, many 
> editors and reviewers regard a certain level of Type I error 
> (usually < .05) as a sacred criterion that defines what's 
> important, and what's not important.  And what gets published 
> and what does not get published.  Indeed, I've had editors 
> who have required us to remove p-values and confidence 
> intervals in favor of * and **.

*   For searches and help try:

