
re: Re: Re: Re: st: replicating 2 X 2 data from a paper

From   "Ariel Linden, DrPH" <>
To   <>
Subject   re: Re: Re: Re: st: replicating 2 X 2 data from a paper
Date   Fri, 5 Apr 2013 09:43:59 -0400

I think that the dialogue on this topic has been "robust", with great
insights offered by Nick, Clyde, and David.

As authors and reviewers with statistical/methodological expertise, we bring
a certain perspective that is perhaps more demanding than others. I agree
that methods and statistics should be clearly described so that the reader
doesn't have to resort to asking his/her friends for help in deciphering the
paper. Moreover, there should be some commonly accepted practices within the
general framework of analysis and reporting. The CONSORT statement
( is an excellent starting
point, but it does not elaborate on the statistical methods. This is an area
for which guidelines should be created. 

As an additional example to David's below, I just read an article about an
RCT that showed the groups were comparable on baseline characteristics.
There was no treatment effect on any outcome when unadjusted methods were
used (treatment assignment as the only covariate). However, the authors then
ran different models using all the baseline characteristics as covariates,
and the results became statistically significant. The authors never
reconciled the discrepancy or offered a theory as to why, in an RCT, they
got such different results with and without covariates. Having achieved the
desired positive effect with covariate adjustment, the rest of the paper
talks about how wonderful the intervention was, with no further mention of
the earlier null effect...
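The mechanics behind such discrepancies can be seen in a small simulation.
This is a minimal, hypothetical sketch (not the article's data or model,
which were not shared on the list): in a simulated RCT, adjusting for a
prognostic baseline covariate soaks up residual variance and shrinks the
standard error on the treatment effect, which is one legitimate way the
adjusted and unadjusted analyses can differ in significance. All variable
names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
treat = rng.integers(0, 2, n).astype(float)   # randomized assignment
x = rng.normal(size=n)                        # prognostic baseline covariate
# True treatment effect is small (0.2); x strongly predicts the outcome.
y = 0.2 * treat + 1.0 * x + rng.normal(size=n)

def ols(covariates, y):
    """OLS coefficients and standard errors via the normal equations."""
    X = np.column_stack([np.ones(len(y)), *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

b_unadj, se_unadj = ols([treat], y)       # treatment only
b_adj, se_adj = ols([treat, x], y)        # treatment + baseline covariate

print(f"unadjusted: effect={b_unadj[1]:.3f}  SE={se_unadj[1]:.3f}")
print(f"adjusted:   effect={b_adj[1]:.3f}  SE={se_adj[1]:.3f}")
```

Because randomization makes `treat` and `x` roughly independent, the point
estimates stay close while the adjusted SE drops by about a factor of
sqrt(2) here; in a real paper, the authors should report both analyses and
reconcile any difference rather than present only the significant one.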

Yes, it would be good to have statistical/methodological reviewers who
focus only on these important issues... At the end of the day, the only
thing that most readers will take away from such an article is "the
intervention worked."


Date: Thu, 4 Apr 2013 13:53:40 -0400
From: David Hoaglin <>
Subject: Re: Re: Re: st: replicating 2 X 2 data from a paper


Thanks for your perspective.

I was criticizing only rough edges and errors that I found in the
articles themselves.  The absence of adequate documentation is another
matter entirely.  For some time, the guidelines of the International
Committee of Medical Journal Editors have advised that the statistical
methods should be documented in sufficient detail that a knowledgeable
person with access to the original data could reproduce the results.
I agree that published articles hardly ever have space to do this, and
the gap has widened as more-sophisticated methods have come into use.
But, as you point out, an online appendix provides all the space
needed.  (Some months ago I read an article that had appeared in 2010
in the New England Journal of Medicine.  One appendix was 150 pages
long, and the protocol for the study was in a separate appendix.)  It
seems to me that authors no longer have any excuse for not documenting
their work adequately.

Taking Nick's comment about a "smokescreen" a step further, one
article that I read introduced its own statistical technique, with no
pedigree in the statistical literature.  In support of that technique,
the article cited four papers, not one of which provided any support
for the authors' approach.  Indeed, one of the papers said that
approaches like the authors' should not be used.  My conclusion was
that the authors had not read any of those papers and were citing them
to deceive reviewers.  I was not a reviewer of that article; it had
been published in a "peer-reviewed" journal.

David Hoaglin
