



Re: st: confidence intervals overlap [was: positive interaction - negative covariance]


From   David Hoaglin <dchoaglin@gmail.com>
To   statalist@hsphsun2.harvard.edu
Subject   Re: st: confidence intervals overlap [was: positive interaction - negative covariance]
Date   Wed, 6 Mar 2013 20:58:55 -0500

Dirk,

I don't agree that that warning goes too far.  It is consistent with
the advice given by both Schenker and Gentleman (2001) and Cumming and
Finch (2005).

Readers who are not familiar with the paper by Schenker and Gentleman
might interpret the material that you quoted as an endorsement of
judging significance according to overlap.  In fact, the quoted
material appears at the beginning of the first paragraph of the
introduction.  That paragraph concludes as follows: "Over a number of
years, however, we have tried to discourage the practice of examining
overlap to assess significance among authors of articles that we have
reviewed, because the procedure can lead to mistaken conclusions."

The paper by Cumming and Finch is highly useful.  I'm glad you called
attention to it.  As they make clear, their "Rules of Eye" are
intended to help readers when an article includes, for example, a
figure that shows confidence intervals for the means of two
independent samples and does not show a confidence interval for the
difference between those means.  On page 174 they say, "By analogy
with rules of thumb, our rules are intended to be useful heuristics or
pragmatic guidelines.  They are not intended to be numerically exact
or to replace statistical calculations: If exact p values are desired,
they should also be presented."

David Hoaglin

On Tue, Feb 26, 2013 at 9:55 AM, Dirk Enzmann
<dirk.enzmann@uni-hamburg.de> wrote:
> David points to the problem of judging the significance of differences of
> independent estimates by comparing their confidence intervals. However, I
> don't think that "it is usually a mistake to compare their confidence
> intervals". This warning goes much too far: as long as the estimates are
> independent, graphs of confidence intervals are a very useful tool to assist
> "inference by eye".
>
> Schenker & Gentleman (2001, p. 182) write: "To judge whether the difference
> between two point estimates is statistically significant, data analysts
> sometimes examine the overlap between the two associated confidence
> intervals. If there is no overlap, the difference is judged significant, and
> if there is overlap, the difference is not judged significant."
>
> The problem is that the "no overlap" rule is naive. However, the attempt to
> judge the significance by examining the overlap of two confidence intervals
> is (nearly) perfectly valid if you adjust the rule to: "As long as the
> confidence intervals do not overlap by more than half of the average arm
> length, the difference is significant." More precisely: If n per sample >=
> 10, half arm length overlap corresponds to p about .05, touching arms
> correspond to p about .01. For a detailed discussion see Cumming & Finch
> (2005). Not the *overlap* of confidence intervals but the *amount* of
> overlap is what the reader of graphs showing confidence intervals of several
> group means should take into account.
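The rule of eye above is easy to check numerically. A minimal Python sketch (an editorial addition, not part of the original post; the means and standard errors are made-up illustrations) that computes the proportion of overlap of two 95% CIs, relative to the average arm length, together with the exact two-sided p-value for the difference of independent, approximately normal estimates:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def overlap_and_p(m1, se1, m2, se2, crit=1.96):
    """Proportion overlap of two 95% CIs (relative to the average arm
    length) and the exact p-value for the difference, assuming the two
    estimates are independent and approximately normal."""
    arm1, arm2 = crit * se1, crit * se2
    overlap = min(m1 + arm1, m2 + arm2) - max(m1 - arm1, m2 - arm2)
    prop = overlap / ((arm1 + arm2) / 2)
    z = (m1 - m2) / math.hypot(se1, se2)
    return prop, two_sided_p(z)

# Half-arm-length overlap: p is close to .05 (hypothetical numbers)
prop, p = overlap_and_p(0.0, 1.0, 2.94, 1.0)
print(round(prop, 2), round(p, 3))   # 0.5 0.038

# Just-touching arms (zero overlap): p is close to .01
prop, p = overlap_and_p(0.0, 1.0, 3.92, 1.0)
print(round(prop, 2), round(p, 3))   # 0.0 0.006
```

With equal standard errors, half-arm overlap gives p around .04 and just-touching intervals give p around .006, matching the "about .05" and "about .01" benchmarks quoted above.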
>
> Applying this rule of thumb to the example given by Schenker & Gentleman
> (2001, p. 183) clearly shows that the two proportions differ significantly
> (somewhere between p < .05 and p < .01); in fact p = .017:
>
> * ==== Start Stata commands: ===========================================
>
> * Example "Comparing Proportions" (Schenker & Gentleman, 2001, p. 183)
>
> input samp y freq
> 1 0  88
> 1 1 112
> 2 0 112
> 2 1  88
> end
> logistic y samp [fw=freq]
>
> mat meanci = J(2,4,.)
> forvalues i = 1/2 {
>   ci y if samp==`i' [fw=freq], b w
>   mat meanci[`i',1] = r(mean)
>   mat meanci[`i',2] = r(lb)
>   mat meanci[`i',3] = r(ub)
>   mat meanci[`i',4] = `i'
> }
>
> svmat meanci
> label variable meanci1 "proportion"
> label variable meanci2 "ci_l"
> label variable meanci3 "ci_u"
> label variable meanci4 "sample"
>
> twoway (scatter meanci1 meanci4 in 1/2) ///
>        (rcap meanci2 meanci3 meanci4 in 1/2), ytitle("y") ///
>        ylab(, angle(0)) xscale(range(0.5 2.5)) xlabel(1(1)2) ///
>        title("Example Schenker & Gentleman (95% Wilson CIs)", ///
>        size(medium)) legend(cols(1) position(4))
>
> * ==== End Stata commands. =============================================
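The quoted p = .017 can be cross-checked outside Stata. A minimal Python sketch (an editorial addition, not in the original post) reproducing the Wald test from the logistic fit on the same 2x2 table:

```python
import math

# Schenker & Gentleman (2001, p. 183): 112/200 vs 88/200 successes
a, b = 112, 88     # sample 1: successes, failures
c, d = 88, 112     # sample 2: successes, failures

log_or = math.log((a * d) / (b * c))     # log odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf standard error
z = log_or / se
p = math.erfc(abs(z) / math.sqrt(2))     # two-sided Wald p-value
print(round(p, 3))   # 0.017
```

This agrees with the p-value reported by -logistic- above.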
>
> Of course, our eyes tend to tell us lies; this simple rule of thumb is not
> exact, and as with every rule of thumb there are problems, such as issues of
> efficiency, multiple testing, etc. But when using the "half arm length rule"
> instead of the "naive rule" (as considered by Schenker & Gentleman),
> inference by eye is too useful to be thrown out with the bathwater.
>
> It is necessary to point out the general misconception and popular fallacy
> that confidence intervals must not overlap if the difference is
> statistically significant. When read properly, figures with confidence
> intervals are useful for inferential purposes.
>
> References:
>
> Cumming, G. & Finch, S. (2005). Inference by eye. Confidence intervals and
> how to read pictures of data. American Psychologist, 60, 170-180.
> http://www.apastyle.org/manual/related/cumming-and-finch.pdf
> http://psycnet.apa.org/journals/amp/60/2/170/
>
> Schenker, N. & Gentleman J. F. (2001). On judging the significance of
> differences by examining the overlap between confidence intervals. The
> American Statistician, 55, 182-186.
> http://www.tandfonline.com/doi/abs/10.1198/000313001317097960
>
> Dirk
>
> On Sat, 23 Feb 2013 15:34:39 -0500, David Hoaglin <dchoaglin@gmail.com> wrote:
>>
>> Subject: Re: st: positive interaction - negative covariance
>>
>> The problems that arise from trying to compare confidence intervals
>> are more general.  They arise in situations where the estimates are
>> independent.  Thus, the covariance in the sampling distribution of b1
>> and b3 is not the real issue.
>>
>> To assess the difference between two estimates, it is usually a
>> mistake to compare their confidence intervals.  The correct approach
>> is to form the appropriate confidence interval for the difference and
>> ask whether that confidence interval includes zero.  I often encounter
>> people who think that they can determine whether two estimates (e.g.,
>> the means of two independent samples) are different by checking
>> whether the two confidence intervals overlap.  They are simply wrong.
>> The article by Schenker and Gentleman (2001) explains.  (I said
>> "usually" above to exclude intervals that are constructed specifically
>> for use in assessing the significance of pairwise comparisons.)
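David's point is easy to demonstrate: two 95% confidence intervals can overlap even though the confidence interval for the difference excludes zero. A minimal sketch with made-up numbers (an editorial addition, not from the thread):

```python
import math

m1, se1 = 10.0, 1.0     # hypothetical independent estimates
m2, se2 = 13.2, 1.0

ci1 = (m1 - 1.96 * se1, m1 + 1.96 * se1)   # (8.04, 11.96)
ci2 = (m2 - 1.96 * se2, m2 + 1.96 * se2)   # (11.24, 15.16)
intervals_overlap = ci1[1] > ci2[0]        # True: the CIs overlap

se_diff = math.hypot(se1, se2)             # SE of the difference
diff_ci = (m2 - m1 - 1.96 * se_diff,
           m2 - m1 + 1.96 * se_diff)
significant = diff_ci[0] > 0               # True: difference CI excludes 0
print(intervals_overlap, significant)      # True True
```

The naive overlap rule would call this difference non-significant, while the correct test on the difference rejects at the 5% level.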
>>
>> David Hoaglin
>>
>> Nathaniel Schenker and Jane F. Gentleman, On judging the significance
>> of differences by examining the overlap between confidence intervals.
>> The American Statistician 2001; 55(3):182-186.
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/


© Copyright 1996–2014 StataCorp LP