
Re: st: inter-rater agreement for multiple raters and continuous variables


From   Harrison Alter <[email protected]>
To   [email protected]
Subject   Re: st: inter-rater agreement for multiple raters and continuous variables
Date   Wed, 8 May 2013 17:31:41 -0700

Thank you Nick! You are correct that each chart was seen by only two
of the three abstractors. (They are volunteers for our shoestring
operation and do have a tendency to come and go.) I envisioned
reporting that the "ICCs for our chief variables of interest ranged
from 0.xxx to 0.xxx, suggesting [poor, fair, moderate, good,
excellent] agreement among our chart abstractors." Or, if the results
ranged widely, I would be more specific. I will read up on -concord- and
repost if necessary. I appreciate the guidance.

Gratefully,
Harrison Alter
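
A minimal sketch of the pairwise cycling Nick suggests below, assuming
-concord- (SJ) is installed (-findit concord- locates it) and using a
hypothetical outcome stub score_ in place of the real variable names:

* cycle through the three rater pairs for one (hypothetical) outcome
foreach pair in "EL SH" "EL RO" "SH RO" {
    tokenize `pair'
    display as text _n "Raters `1' vs `2'"
    capture noisily concord score_`1' score_`2'
}

Each pair uses only the charts that both of its raters abstracted.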

On Wed, May 8, 2013 at 12:50 PM, Nick Cox <[email protected]> wrote:
> I don't understand several details here.
>
> How many variables do you expect to feed into a measure of agreement?
>
> If each "chart" was assessed by 2 out of 3 raters, there are no
> observations for which all three raters made an assessment. It seems
> to follow that all you can do is cycle through the pairs of raters and
> compare them.
>
> Once you have several variables, it strikes me that an omnibus,
> factotum or portmanteau measure of agreement is unlikely to be much
> use.
>
> -concord- (SJ) has been offered as a measure of agreement for
> (approximately) continuous variables. I guess if this were my problem,
> I would think about building up a matrix of concordance correlations.
>
> There is stuff in the concordance correlation literature on overall
> measures of agreement for several variables, but I've never been
> tempted to add it to -concord-.
>
> Nick
> [email protected]
>
>
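
One way such a matrix of concordance correlations might be built up, on
the assumption that -concord- saves the coefficient as r(rho_c) (check
-return list- after running it once) and reusing the hypothetical score_
names from the sketch above:

local raters EL SH RO
matrix R = J(3, 3, .)
matrix rownames R = `raters'
matrix colnames R = `raters'
forvalues i = 1/3 {
    matrix R[`i', `i'] = 1          // perfect agreement of a rater with herself
}
forvalues i = 1/2 {
    forvalues j = `=`i'+1'/3 {
        local a : word `i' of `raters'
        local b : word `j' of `raters'
        capture concord score_`a' score_`b'
        if _rc == 0 {
            matrix R[`i', `j'] = r(rho_c)
            matrix R[`j', `i'] = r(rho_c)
        }
    }
}
matrix list R
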
> On 8 May 2013 00:52, Harrison Alter <[email protected]> wrote:
>> Hello Statalisters,
>>
>> I am working with Stata 11.2 on a Mac platform.
>>
>> I have a dataset of 47 patients, each with 106 variables. Each of
>> these patients' charts was abstracted by 2 raters from a pool of 3: EL,
>> SH and RO.
>>
>> The data are in wide format, so that each observation looks something
>> like this:
>>
>> ID/Sex_EL/Sex_SH/Sex_RO/DOB_EL/DOB_SH/DOB_RO
>> 1001/1/./1/11.8.1955/./11.8.1955
>>
>> The outcomes of interest are scores on a 16-point symptom scale and
>> total drug dose in milligrams: 10 outcomes in all, all continuous.
>>
>> I would like to calculate a measure of inter-rater agreement, focusing
>> on the outcomes of interest.  I understand that kappa works best with
>> dichotomous outcomes, and "loneway" seems to be appropriate for 2
>> raters.  I would like to perform some measure of intra-class
>> correlation.
>>
>> Questions:
>>
>> (1) Is this the right approach?
>> (2) What is the right analytic test?
>> (3) Can I do it in wide or must I reshape?
>>
>> Happy to provide any further details that might help in solving my problem.
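
On question (3): -loneway- and similar intra-class correlation
calculations expect the data long, with one row per rater-by-chart
assessment, so a -reshape- is needed first. A minimal sketch, assuming
hypothetical outcome stubs score_ and dose_ alongside the Sex_ and DOB_
variables shown above:

* one observation per rater's abstraction of each chart
reshape long Sex_ DOB_ score_ dose_, i(ID) j(rater) string
* the rater who did not see a given chart contributes only missing
* values, which -loneway- ignores
loneway score_ ID
loneway dose_ ID
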
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/

