Statalist: The Stata Listserver



st: Query for Statalist on kappa


From   Meredith Makeham <[email protected]>
To   [email protected]
Subject   st: Query for Statalist on kappa
Date   Sun, 17 Dec 2006 17:10:19 +1100

I am not a statistician and would be grateful for some advice with the following problem.

I have some questions about comparing agreement between coders using the kappa statistic.

I have a set of data that has been independently coded by three people using two different classifications.

Both classifications have basically the same structure: a branching three-level setup in which the top level has three possible categories, these branch at the second level into a total of 8 possible categories, and those 8 branch at the third level into a total of 36 possible categories. Agreement is better at the top level than at the second, and at the second than at the third, as opinions vary more as the coders move into the subcategories.

The first classification was tested using 433 cases, and the second using 132 cases, both with the same 3 coders. The 132 cases are a subset of the original 433 cases used in classification 1.

Basically, each case examined was assigned a three-level code by each coder, first using classification 1 and then using classification 2.
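In case it helps to picture the data, they are laid out with one row per case and one variable per combination of classification, level, and coder; the variable names below are just placeholders for illustration:

* one row per case; c1_lev1_r1 = classification 1, level 1, coder 1, and so on
* the classification 2 variables are only filled in for the 132-case subset
list caseid c1_lev1_r1 c1_lev1_r2 c1_lev1_r3 c2_lev1_r1 c2_lev1_r2 c2_lev1_r3 in 1/3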

I want to argue that the second classification produces higher agreement amongst the three coders than the first, as the agreement looks much better to my (not very highly trained in stats) eye (see the following summary).

Code 1
percent agreement for coder 1 and 2: level 1 = 82%, level 2 = 60%, level 3 = 45%
percent agreement for coder 1 and 3: level 1 = 79%, level 2 = 53%, level 3 = 42%
percent agreement for coder 2 and 3: level 1 = 80%, level 2 = 57%, level 3 = 41%

Code 2
percent agreement for coder 1 and 2: level 1 = 94%, level 2 = 80%, level 3 = 73%
percent agreement for coder 1 and 3: level 1 = 92%, level 2 = 79%, level 3 = 70%
percent agreement for coder 2 and 3: level 1 = 95%, level 2 = 76%, level 3 = 63%


I have managed to produce a kappa statistic (using the kap command), but I am unsure how to compare the statistic for each classification at each of the three levels of the code (or how to describe the results I am getting in plain English for my paper, which has a medical audience).
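For what it is worth, the commands I ran look roughly like this (again with placeholder variable names), one kap per classification per level, with the three coders' codes as the three variables:

* classification 1: kappa across the three coders, one level at a time
kap c1_lev1_r1 c1_lev1_r2 c1_lev1_r3
kap c1_lev2_r1 c1_lev2_r2 c1_lev2_r3
kap c1_lev3_r1 c1_lev3_r2 c1_lev3_r3

* classification 2: same commands; only the 132 recoded cases are non-missing
kap c2_lev1_r1 c2_lev1_r2 c2_lev1_r3
kap c2_lev2_r1 c2_lev2_r2 c2_lev2_r3
kap c2_lev3_r1 c2_lev3_r2 c2_lev3_r3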


The results look like this for the bottom line of output, which is "combined":
Code 1
level 1: kappa = 0.5947, Z = 25.53, Prob>Z = 0.0000
level 2: kappa = 0.4824, Z = 43.25, Prob>Z = 0.0000
level 3: kappa = 0.3715, Z = 50.97, Prob>Z = 0.0000

Code 2
level 1: kappa = 0.8215, Z = 17.19, Prob>Z = 0.0000
level 2: kappa = 0.7197, Z = 33.14, Prob>Z = 0.0000
level 3: kappa = 0.6630, Z = 59.64, Prob>Z = 0.0000


Does anyone have a suggestion for a simple summary of these results, and should I be performing a further statistical test to compare the two sets of kappa results? Is it reasonable to state that code 2 has higher agreement than code 1 on the basis of these results?
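In case it is useful to know the sort of comparison I had in mind, one idea I wondered about (though I am not at all sure it is appropriate) was bootstrapping each combined kappa to get a confidence interval and then seeing whether the intervals for the two classifications overlap, along the lines of the sketch below. I am assuming here that kap leaves the combined estimate in r(kappa), and the variable names are again placeholders:

* bootstrap CI for the combined kappa: classification 1, level 1
bootstrap kappa = r(kappa), reps(1000) seed(12345): ///
    kap c1_lev1_r1 c1_lev1_r2 c1_lev1_r3
estat bootstrap, percentile

* and the same for classification 2, level 1 (the 132-case subset)
bootstrap kappa = r(kappa), reps(1000) seed(12345): ///
    kap c2_lev1_r1 c2_lev1_r2 c2_lev1_r3
estat bootstrap, percentile

I do not know whether that is a sensible way to compare them, particularly given that the 132 cases are a subset of the 433.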

In addition, the third level of the code has a few cells with very small counts (<5), and three of the outcomes for code 1 at level 3, and one for code 2 at level 3, have a negative value for their individual kappa statistic. What does this mean, and should I avoid this test if the cell size is smaller than a certain number?

Thanks for any comments on this,
Meredith
Sydney


*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
