The Stata listserver

st: Re: a reliability question


From   Søren Nielsen <[email protected]>
To   <[email protected]>
Subject   st: Re: a reliability question
Date   Sun, 5 Dec 2004 17:26:57 +0100

Dear Statalisters,

I am 'a new kid on the block', but I think I might be able to help. Here are two references that might be useful to you:

1 - Shrout PE (1998) Measurement reliability and agreement in psychiatry. Statistical Methods in Medical Research 7:301-317.
2 - Shrout PE, Fleiss JL (1979) Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin 86:420-428.

I have used the methods described there in the following publication: Dyrborg et al. (2000). The Children's Global Assessment Scale (CGAS) and Global Assessment of Psychosocial Disability (GAPD) in clinical practice - substance and reliability as judged by intraclass correlations. European Child & Adolescent Psychiatry 9:195-201.
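
For readers following up in Stata itself, the agreement measures discussed in these references map onto standard commands. A minimal sketch — all variable names here (rater1-rater3, score_t1, score_t2, totalscore, bone) are hypothetical placeholders, not from the original exchange:

```stata
* Inter-rater agreement: -kap- accepts two or more rater variables,
* one observation per rated item, each variable holding that
* rater's 0/1 score.
kap rater1 rater2 rater3

* Intra-rater (test-retest) agreement for a single rater,
* comparing scores taken one month apart:
kap score_t1 score_t2

* One-way intraclass correlation on total scores (Shrout & Fleiss
* case 1), grouping observations by the rated target:
loneway totalscore bone
```

With more than two rater variables, -kap- reports a kappa appropriate for multiple raters; -loneway- gives the one-way ANOVA estimate of the intraclass correlation.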

Sincerely, Søren Nielsen
----- Original Message ----- 
From: "Suzy" <[email protected]>
To: <[email protected]>
Sent: Tuesday, November 30, 2004 8:11 PM
Subject: st: a reliability question 


> Dear Statalisters;
> 
> This question is statistical and not strictly a Stata software 
> question, although Stata will be used for the analyses...
> 
> I would like to know the most appropriate (basic/simple) tests for 
> inter-rater and intra-rater reliability (one month later) given the 
> following scenario:
> 
> 3 raters are grading a total of 9 bones, three bones having been 
> drilled by each of three surgical residents. The grading instrument 
> includes 15 core areas with a total of 24 individual items that 
> receive a dichotomous score, for a total of 36 potential points. Each 
> of the 24 individual items is worth either 1, 2, or 3 points, awarded 
> in full or not at all. Thus, some items contribute a larger proportion 
> of the total score. Nonetheless, the raters only get a dichotomous 
> choice for each item. The raters are grading whether certain aspects 
> of drilling the bone have been accomplished (e.g. No Holes vs. Holes, 
> where No Holes = 1 point; Holes = 0 points). There is nothing 
> qualitative in the assessment.
> 
> Is the Kappa statistic the simplest and most appropriate reliability 
> assessment (for both intra- and inter-rater)? Is there a more 
> appropriate test that Stata can do? Also, can anyone comment on the 
> sample size (this is, unfortunately, the largest sample size 
> available)? Any further comment, advice, or reference to a website 
> for further details would be very welcome.
> 
> Thank you!
> Suzy
> *
> *   For searches and help try:
> *   http://www.stata.com/support/faqs/res/findit.html
> *   http://www.stata.com/support/statalist/faq
> *   http://www.ats.ucla.edu/stat/stata/


