
RE: st: Random effect Kappa?

From   "Nick Cox" <>
To   <>
Subject   RE: st: Random effect Kappa?
Date   Mon, 15 Feb 2010 11:59:24 -0000

Like any single-number measure of agreement (association, correlation, etc.) kappa can conceal more than it reveals. With a data structure like this, kappa looks more like a distraction than an attraction. Suppose you got some lumped kappa. Would it really tell you much about the detailed structure of agreement or disagreement? Wouldn't it be better to model your data directly? 

See also advice in the Statalist FAQ on repeating questions. 


Eduardo Nunez, MD, MPH

Not long ago I posted a question about how to pool individual kappa
statistics (each from the comparison between one rater and the answer
key scorer). I have 7 raters, so I need to pool these 7 kappas.
I am thinking of pooling them using meta-analytic techniques, like
inverse weighting on the SD. Does anyone know whether such an approach
is statistically valid?
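As a side note on the mechanics: the conventional meta-analytic fixed-effect weight is the inverse of the *variance* (1/SE^2), not the inverse of the SD. A minimal pure-Python sketch of that pooling, with entirely made-up kappas and standard errors (not results from this study):

```python
# Fixed-effect inverse-variance pooling of kappa estimates.
# The kappas and SEs below are invented, illustrative numbers.

def pool_kappas(kappas, ses):
    """Return the inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]          # weight = 1 / SE^2
    pooled = sum(w * k for w, k in zip(weights, kappas)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Seven hypothetical rater-vs-answer-key kappas with their SEs.
kappas = [0.61, 0.55, 0.70, 0.48, 0.66, 0.59, 0.63]
ses    = [0.05, 0.06, 0.04, 0.07, 0.05, 0.06, 0.05]
pooled, pooled_se = pool_kappas(kappas, ses)
print("pooled kappa = %.3f (SE %.3f)" % (pooled, pooled_se))
```

Whether a single pooled number is informative here is, of course, exactly what the reply above questions.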

Eduardo Nunez, MD, MPH

I am seeking your guidance on how to fit a random-effects model, or
otherwise account for autocorrelation, when estimating kappa
statistics. I need to figure out a way to perform this statistical
analysis in Stata. If that is not possible, I would appreciate
pointers to any other software capable of it.

These are the details of the challenge before me:
1. 7 blinded scorers (dermatologists, plastic surgeon)
2. 188 photographs representing four facial compartments
3. 10 possible scores (0-9, none to severe laxity) for each photo by
each blinded scorer

The data for the first scorer looks like:
pte_id    photo_id    laxity_score(0-9)    facial_region(1-4)
1    1    8    3    7
1    25   3    1    3
2    4    0    4    0
and so on....

For the rest of the scorers (2 to 7), I have a similar data structure.

I need to estimate:
1) Agreement (kappa) of each scorer with the lead author's score (to
compare each blinded scorer's results against an "answer key"
provided by the lead author).
2) Agreement (kappa) between each pair of scorers (1 vs 2, 1 vs 3,
2 vs 3, etc.).
3) A summary kappa integrating the kappas from 1).
4) A summary kappa integrating the kappas from 2).
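Aims 1) and 2) come down to computing Cohen's kappa for pairs of score vectors (in Stata, the `kap` command; with ordinal 0-9 scores a weighted kappa, e.g. `kap`'s `wgt(w)` option, is often preferred). A minimal pure-Python sketch of the unweighted statistic, with invented scores:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa between two raters' equal-length score lists."""
    if len(a) != len(b) or not a:
        raise ValueError("need two equal-length, non-empty score lists")
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Two hypothetical scorers on six photos (laxity scores 0-9):
print(cohens_kappa([8, 3, 0, 5, 7, 2], [8, 3, 1, 5, 7, 2]))
```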

My questions:
1) Because one patient can contribute several photos and several
facial regions, I assume measures from the same patient are likely to
be correlated. How should I adjust for that? A random effect?
2) What is the best way to pool the individual kappas into a grand
summary kappa?
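On question 1), one model-free way to respect within-patient correlation is a cluster bootstrap: resample whole patients, recompute the statistic on each resample, and take the SD of the replicates as the SE. In Stata this should correspond to something like `bootstrap, cluster(pte_id): kap ...` (check the manual for exact syntax). A pure-Python sketch, with a made-up record layout `(pte_id, score_a, score_b)` and toy data:

```python
import random
from collections import defaultdict

def cluster_bootstrap_se(records, stat, reps=1000, seed=12345):
    """Bootstrap SE of stat(records), resampling whole patients so that
    photos from the same patient stay together (preserving correlation)."""
    by_pte = defaultdict(list)
    for rec in records:                      # rec = (pte_id, score_a, score_b)
        by_pte[rec[0]].append(rec)
    ids = list(by_pte)
    rng = random.Random(seed)
    replicates = []
    for _ in range(reps):
        sample = []
        for _ in ids:                        # draw as many patients as observed
            sample.extend(by_pte[rng.choice(ids)])
        replicates.append(stat(sample))
    mean = sum(replicates) / reps
    return (sum((r - mean) ** 2 for r in replicates) / (reps - 1)) ** 0.5

# Toy data: three patients, two photos each, two scorers.
recs = [(1, 8, 7), (1, 3, 3), (2, 0, 0), (2, 4, 5), (3, 2, 2), (3, 6, 6)]
agreement = lambda s: sum(a == b for _, a, b in s) / len(s)
print("bootstrap SE of raw agreement:",
      cluster_bootstrap_se(recs, agreement, reps=500))
```

The `stat` argument could equally be a kappa function; raw agreement is used here only to keep the toy example short.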

