
RE: st: newbie question: nonsig posthoc after sig anova


From   Roger Newson <[email protected]>
To   [email protected]
Subject   RE: st: newbie question: nonsig posthoc after sig anova
Date   Mon, 18 Aug 2003 19:23:42 +0100

At 19:02 18/08/03 +0100, Nick Cox wrote:

I have a more general question arising obliquely
out of these issues.

The point of these multiple comparison procedures,
Bonferroni, Scheffé, Šidák, etc., is, as I
understand it, to inject a strong note of caution,
given the number of individual tests you could
carry out and the built-in tendency that the more
tests you carry out, the more are likely to attain
significance at some conventional level by chance
alone.

What is the attitude to fishing _among_ multiple
comparison procedures, i.e. looking _among_ the
various post hoc results, with the pitfall that
you're tempted to report the one closest to your
pre-conceived (ne)science?

Aren't you supposed to cleave to the one whose
inferential logic you find most compelling?

Is this a documented issue?
If you use a multiple-test procedure, then it should be chosen a priori, rather than by fishing among multiple comparison procedures to see which gives the answer you most like. In general, statistical theory assumes that scientists first decide what they want to measure, and then measure it. The validity of confidence regions stands or falls by that assumption. And this is still true if the confidence region is for a non-numeric, set-valued parameter, such as "the set of null hypotheses that are true".
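
In Stata, for instance, a single -oneway- call will happily print Bonferroni, Scheffé and Šidák comparison tables side by side, which is exactly the temptation in question; the honest course is to decide in advance which table you will report. A minimal illustration, using the shipped auto data:

. sysuse auto, clear
. oneway mpg rep78, bonferroni scheffe sidak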

Different multiple-test procedures are appropriate under different assumptions, e.g. non-negative correlation between P-values or arbitrary correlation between P-values. And different procedures allow confidence statements about different things. Family-wise error rate (FWER)-controlling procedures allow you to be 95% confident that, *if* any discoveries are made, *then* all of them will be true. False discovery rate (FDR)-controlling procedures allow you to be 95% confident that some of the discoveries are true, or 90% confident that most of them are true.

Therefore, FDR-controlling procedures are more appropriate for selecting a shortlist of candidates for future work, whereas FWER-controlling procedures are more useful for pointing a finger at a definitive culprit. Either way, however, a scientist should decide what method or methods to use, and then use them. And, ideally, the Methods section of a paper should be written before the Results are known.
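
To make the FWER/FDR contrast concrete, here is a minimal sketch, assuming a made-up variable p holding one P-value per test and an uncorrected critical level of 0.05. Bonferroni controls the FWER by testing every P-value against 0.05/m; the step-up procedure of Benjamini and Hochberg controls the FDR by rejecting the k smallest P-values, where k is the largest rank whose P-value is at most (rank/m)*0.05:

clear
input float p
  .001
  .008
  .039
  .041
  .042
  .060
  .074
  .205
  .212
  .590
end
local m = _N
sort p
generate rank = _n
* Bonferroni (FWER): reject if p <= 0.05/m
generate byte rej_fwer = p <= 0.05/`m'
* Benjamini-Hochberg (FDR): flag ranks with p_(k) <= (k/m)*0.05
generate byte below = p <= (rank/`m')*0.05
* Step up: reject every rank at or below the largest flagged rank
gsort -rank
generate byte rej_fdr = sum(below) > 0
sort p
list p rej_fwer rej_fdr, noobs

The -smileplot- package on SSC (with its -multproc- command) implements these and a number of other FWER- and FDR-controlling procedures; -findit smileplot- will locate it.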

I hope this helps.

Best wishes

Roger


--
Roger Newson
Lecturer in Medical Statistics
Department of Public Health Sciences
King's College London
5th Floor, Capital House
42 Weston Street
London SE1 3QD
United Kingdom

Tel: 020 7848 6648 International +44 20 7848 6648
Fax: 020 7848 6620 International +44 20 7848 6620
or 020 7848 6605 International +44 20 7848 6605
Email: [email protected]
Website: http://www.kcl-phs.org.uk/rogernewson

Opinions expressed are those of the author, not the institution.




