The Stata listserver

Re: st: a reliability question


From   Stas Kolenikov <[email protected]>
To   [email protected]
Subject   Re: st: a reliability question
Date   Wed, 1 Dec 2004 10:09:48 -0500

Cronbach's alpha rests on a number of very restrictive assumptions
about the properties of the measurement instrument (essentially
tau-equivalent, continuous items), and of course with your discrete
data of different variances those will be violated. See Bollen's
(1989) book for a critique of alpha; I think he also gives some
suggestions for more appropriate reliability measures.
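
For reference, a minimal sketch of the usual computation in Stata,
with item1-item24 as placeholder names for the 24 scored items:

    // -std- standardizes the items; -item- reports item-rest statistics
    alpha item1-item24, std item

but no set of options fixes the underlying assumptions.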

You might need to use my -polychoric- package that's downloadable from
my webpage; -findit polychoric- should give you a link.
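
A minimal sketch of the workflow, with placeholder variable names
(check -help polychoric- and -return list- for the exact syntax and
saved results):

    findit polychoric            // locate and install the package
    polychoric item1-item24      // polychoric correlations for the items
    return list                  // see what the command saves
    matrix R = r(R)              // assuming the matrix is stored in r(R)

The resulting correlation matrix can then be used for a model-based
look at the scale, e.g. by feeding it to a factor analysis.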

You might also want to ask your question on a more specific social
science list such as SEMNET -- which, unfortunately, is dominated by
three to five highly opinionated guys. There should be better lists,
but I am not that familiar with them.

Stas

On Tue, 30 Nov 2004 14:11:19 -0500, Suzy <[email protected]> wrote:
> Dear Statalisters;
> 
> This question is statistical and not strictly a Stata software
> question, although Stata will be used for the analyses...
> 
> I would like to know the most appropriate (basic/simple) tests for
> inter-rater and intra-rater reliability (one month later) given the
> following scenario:
> 
> Three raters are grading a total of 9 bones, three bones having been
> drilled by each of three surgical residents. The grading instrument
> includes 15 core areas with a total of 24 individual items that
> receive a dichotomous score, for a total of 36 potential points. For
> each of the 24 individual items scored, either 1, 2, or 3 points are
> given, or zero points are given. Thus, some items provide a larger
> proportion of the total score. Nonetheless, the raters only get a
> dichotomous choice for each item. The raters are grading to see
> whether certain aspects of drilling the bone have been accomplished
> (e.g., No Holes vs. Holes, where No Holes = 1 point and Holes = 0
> points). There is nothing qualitative in the assessment.
> 
> Is the Kappa statistic the simplest and most appropriate reliability
> assessment (for both intra- and inter-rater)? Is there a more
> appropriate test that Stata can do? Also, can anyone comment on the
> sample size (this is, unfortunately, the largest sample size
> available)? Any further comment, advice, or reference to a website
> for further details, etc., would be very welcome.
> 
> Thank you!
> Suzy
> 
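
On the kappa question above: for dichotomous ratings like these,
official Stata's -kap- is probably the simplest tool. A minimal
sketch, with hypothetical variable names (one variable per rater, one
observation per item graded):

    kap rater1 rater2 rater3     // inter-rater agreement, three raters
    kap score_t1 score_t2        // intra-rater: same rater, one month apart

See -help kap- and -help kappa- for the two data layouts Stata
supports (one rating variable per rater versus frequencies of
ratings).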


-- 
Stas Kolenikov
http://stas.kolenikov.name
*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


