RE: st: RE: Calculating a "disagreement scale" from several individual assessments
I am not sure whether this is quite the question,
but it may be worthwhile in any case.
In a paper on the analysis of agreement -- which despite
its title is applicable to all kinds of measured data --
I suggested looking at a matrix of concordance correlations.
The eigenvalues and eigenvectors of that matrix might also
be of interest. That might be over the top for 3 observers,
but others watching this thread might have more.
Naturally, I know the example below might not be considered
measured data, but I think this proposal is on all fours with
that case.
The paper is
Cox, N.J. 2006.
Assessing agreement of measurements and predictions in geomorphology.
Geomorphology 76: 332-346.
It covers a variety of graphical and numerical summaries.
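The matrix idea is easy to illustrate numerically. Below is a minimal sketch in Python rather than Stata, applied to the three raters A, B, and C from the question further down; it assumes the concordance correlation meant is Lin's coefficient, 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2), which is an assumption on my part about the paper's recipe.

```python
# Sketch: pairwise concordance-correlation matrix for three raters.
# ccc() is Lin's concordance correlation coefficient (assumed variant).

def ccc(x, y):
    """Lin's concordance correlation between two raters' score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n                     # variance of x
    sy = sum((v - my) ** 2 for v in y) / n                     # variance of y
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n   # covariance
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

# Raters A, B, C scoring the same four items (data from the question below)
raters = {"A": [4, 3, 5, 2], "B": [4, 4, 2, 2], "C": [4, 4, 3, 4]}

matrix = {(p, q): ccc(raters[p], raters[q]) for p in raters for q in raters}
for p in "ABC":
    print(p, ["%6.3f" % matrix[(p, q)] for q in "ABC"])
```

The diagonal is 1 by construction; the eigenvalues and eigenvectors of this symmetric matrix could then be obtained with, say, numpy.linalg.eigh.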
> Kappa is one standard measure of disagreement among raters. A Google
> search on "multiple raters" and "kappa" will turn up references which
> show that an intraclass correlation is equivalent to a generalized
> kappa statistic. Then a search on "intraclass" and "stata" will
> lead you to the appropriate Stata command.
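To make the intraclass-correlation suggestion concrete, here is a hedged sketch in Python rather than Stata, computing the one-way ICC(1) = (MSB - MSW) / (MSB + (k-1)*MSW) on the example data below; that this one-way variant is the one meant is my assumption, not something the thread states.

```python
# Sketch: one-way intraclass correlation ICC(1) on the question's data.
# Targets are the four items (rows); raters are A, B, C (columns).

rows = [[4, 4, 4], [3, 4, 4], [5, 2, 3], [2, 2, 4]]  # one row per item
n, k = len(rows), len(rows[0])                        # items, raters
grand = sum(sum(r) for r in rows) / (n * k)           # grand mean

# Between-target and within-target mean squares
msb = k * sum((sum(r) / k - grand) ** 2 for r in rows) / (n - 1)
msw = sum((v - sum(r) / k) ** 2 for r in rows for v in r) / (n * (k - 1))

icc = (msb - msw) / (msb + (k - 1) * msw)
print(round(icc, 4))   # near zero: little agreement beyond chance here
```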
> Christer Thrane wrote
> > I have three variables: A, B, and C. They are individual
> > assessments of the same phenomenon on a scale from 1 (poor)
> > to 5 (very good). The data look like:
id  A  B  C  DS
 1  4  4  4   0
 2  3  4  4   1
 3  5  2  3   ?
 4  2  2  4   ?
> > and so on.
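The poster does not define DS, but the two completed rows (4,4,4 giving 0 and 3,4,4 giving 1) are consistent with the range, max minus min. Treating DS as the range is my assumption, not the poster's stated rule; under that reading the missing values could be filled in like this:

```python
# Sketch: DS read as the per-item range of the three raters' scores.
# That DS == max - min is an assumption inferred from the first two rows.

rows = {1: [4, 4, 4], 2: [3, 4, 4], 3: [5, 2, 3], 4: [2, 2, 4]}

ds = {item: max(scores) - min(scores) for item, scores in rows.items()}
for item, scores in rows.items():
    print(item, scores, "DS =", ds[item])
```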