Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down on April 23, and its replacement, statalist.org is already up and running.
RE: st: interesting reference
Nick Cox <email@example.com>
Thu, 30 Sep 2010 18:38:12 +0100
I could discuss many things here at length, but first let me just emphasise one key point. The paper I cited at the start of this thread was written by a statistician, and it was written to emphasise a widespread view in statistics.
An alternative that does quite well in many fields is to try to juggle respect for the data, respect for the science, and respect for statistics as a provider of many different tools. That's an alternative to taking any one technical, statistical device and making it the be-all and end-all of data analysis. In addition, more attention to confidence intervals and less to P-values is a modest shift that yields much improvement.
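To make that modest shift concrete, here is a minimal sketch (in Python rather than Stata, and with a large-sample normal approximation rather than the t distribution) of reporting a 95% confidence interval alongside the p-value; the function name and interface are mine, not from the thread.

```python
import math

def mean_ci_p(xs, mu0=0.0):
    """Sample mean: normal-approximation 95% CI and two-sided p-value
    for H0: population mean == mu0. The CI tells you the range of
    plausible effect sizes; the p-value alone does not."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)   # unbiased sample variance
    se = math.sqrt(var / n)                         # standard error of the mean
    z = 1.959963984540054                           # 97.5th normal percentile
    ci = (m - z * se, m + z * se)
    zstat = (m - mu0) / se
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided tail probability
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(zstat) / math.sqrt(2))))
    return ci, p
```

The same numbers reported as "95% CI (1.61, 4.39)" carry far more information than "p < .05" on its own.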
(I don't drive and I walk to work and our household has just one car, so we try....)
Being unhappy about p-values and NHST is like being unhappy with
traffic in a city. Everybody complains about it, but nobody takes the
train. (At least, that's the story in the US; the European experiences
are very different.) There is no competing view that is ready to take
over from NHST. Of course, Bayesian statistics is a strong contender,
but many things (anything that is not likelihood based -- think
Kolmogorov-Smirnov tests or instrumental variables) cannot be fit into
it. You can also entertain data mining, but even the hard core data
miners recognize the need for cross-validation and other measures to
ensure that your results are reproducible and make sense outside of
the sample -- and that's just a step away from, say, a permutation-testing
framework. Psychologists came up with p-rep, but unfortunately there
is a gross mathematical error in the very first numbered formula in
the paper that introduced it (having to do with conditional
probabilities that these authors had no clue about). Frankly, I am not
expecting the field and practice of statistics to change due to
episodic attacks from the neighboring literature, be that ecology or
psychology. One needs to come up with a mathematically founded
paradigm that can be published in Annals of Statistics, JASA, JRSS,
Biometrika, The American Statistician and a range of books from
no-calculus undergraduate to advanced measure theory based graduate
(preferably simultaneously). And everybody must agree that it is
better than NHST, and move on to incorporate it. That's a megadeal.
It's like saying that Stata is a better package than SAS (which I
personally have no doubts about), so SAS should go out of business
and give all of its clients to StataCorp (and who cares what's going
to happen with the terracotta army of SAS Certified programmers;
nobody would need to remember the awkward semicolon conventions
anymore). In the long run, something like that might happen; but it
won't happen overnight, and it won't happen because some crazy Russian
said that Stata is better than SAS on a mailing list with about 50 SAS
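The step from cross-validation to permutation testing mentioned above can be sketched very briefly. This is an illustrative Python example, not code from the thread; the function name and parameters are my own.

```python
import random

def perm_test(a, b, n_perm=10000, seed=0):
    """Two-sample permutation test for a difference in means.

    Repeatedly shuffles the pooled data into two groups of the
    original sizes and counts how often the shuffled absolute
    difference in means is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        ga, gb = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(ga) / len(ga) - sum(gb) / len(gb))
        if diff >= observed:
            count += 1
    # add-one correction avoids reporting a p-value of exactly zero
    return (count + 1) / (n_perm + 1)
```

The same resampling logic that underlies cross-validation (does the result survive outside this particular split of the data?) drives the permutation p-value here.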
On Thu, Sep 30, 2010 at 10:55 AM, Dale Glaser wrote:
> Greetings....This is my first missive to the Stata listserv, as I just purchased Stata 11.0, having been an SPSS user since the mainframe era (as well as LISREL, Mplus, HLM, etc.), and thus far I am amazed at the breadth of models in one package (e.g., ZIP, ARIMA, etc.) and the helpfulness of this listserv.
> I just wanted to add to Nick's message that the controversy regarding NHST has been brewing in my discipline (social sciences/health care) for years, with an early paper by Rozeboom in the 60s, and then Jacob Cohen's paper in American Psychologist (1994), titled "The Earth Is Round (p < .05)", which served as a springboard for the American Psychological Association to form a task force on statistical inference (which included those from a wide array of disciplines). Shortly thereafter, an edited text by Harlow, Mulaik, and Steiger (1997), titled "What If There Were No Significance Tests?", further stirred the pot. Even though much has been written about the misuse of NHST, there still seem to be many transgressions (e.g., use of ubiquitous asterisks....* p < .05, ** p < .01, etc.; conflation of the p-value with effect size; etc.). And I still see terminology such as "very significant" or "marginally significant" in published papers.