
st: Re: Interrater reliability for 2 raters with multiple codes per subject


From   "Joseph Coveney" <[email protected]>
To   <[email protected]>
Subject   st: Re: Interrater reliability for 2 raters with multiple codes per subject
Date   Fri, 14 Feb 2014 12:29:25 +0900

Bert Jung wrote:

I am struggling to compute inter-rater reliability.  I have two
raters, each of whom can provide up to 3 codes per subject.  The codes
are categorical and range from 1 to 8.  The data look like this:

clear

input subject raterA_code1 raterA_code2 raterA_code3 raterB_code1 raterB_code2 raterB_code3
1 4 7 . 3 4 7
2 3 . . 3 . .
end

list


In this example, for subject 1, raters A and B agree on the codes 4
and 7.  Rater A assigned only two codes, while rater B also used a third
code, 3.  It seems that, to calculate kappa, one ought to acknowledge
that the codes are grouped within the two raters.

I thought this kind of setup would be common but could not find any
guidance.  Any thoughts warmly welcome.

--------------------------------------------------------------------------------

I assume that the categories are unordered: each of two raters, rating the person only once, detects up to a maximum of three qualities or attributes of the person from a menu of eight such qualities or attributes.  That is, the codes are not the degree of something about the person, rated up to three times by each rater on an ordered-categorical scale running from 1 to 8.  Under that reading, the three code slots are exchangeable (as you said, codes 1 and 2 of rater A are equivalent to codes 2 and 3 of rater B for the first person), and a rater cannot assign the same code twice to the same person.
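
Something along the following lines (untested; the variable names slot, rater, and code are just placeholders of mine) would stack your wide example data into a long layout with one record per code actually assigned, which is the structure that reflects that exchangeability.

* starting from the wide example data above (one row per subject):
* stack the three code slots, then stack the two raters
reshape long raterA_code raterB_code, i(subject) j(slot)
rename raterA_code codeA
rename raterB_code codeB
reshape long code, i(subject slot) j(rater) string
drop if missing(code)
list, sepby(subject)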

Have you considered a multinomial probit model with a random effect for person?  I believe that you can fit such a model using the official -gsem- command, and perhaps the user-written -gllamm- (from SSC) as well.  You can then easily compute the intraclass correlation coefficient (i.e., kappa) in the usual manner for a subjects-random / raters-fixed model.
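
For what it's worth, a rough sketch of the kind of -gsem- specification I have in mind, applied to the long layout sketched above, follows.  It is untested, it uses -gsem-'s multinomial logit family (mlogit) as a stand-in for the probit, and the two-subject toy data will not support estimation, so treat it as an illustration of the syntax only.

* multinomial (logit-link) model for the assigned code, with a latent
* person effect M1[subject] shared across the outcome equations
gsem (code <- M1[subject], mlogit)

* the subject-level variance var(M1[subject]) is reported in the output;
* on the latent logistic scale a crude intraclass correlation is
*   var(M1[subject]) / (var(M1[subject]) + _pi^2/3)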

Joseph Coveney



*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/

