Statalist



Re: st: Increasing variance of dependent variable, logit, inter-rater agreement


From   Steven Samuels <[email protected]>
To   [email protected]
Subject   Re: st: Increasing variance of dependent variable, logit, inter-rater agreement
Date   Fri, 27 Feb 2009 14:51:07 -0500

You have given very little information about these technologies and what they are measuring. I do not know much about this area, but I have a few thoughts and questions for you to answer. If you do, perhaps others will be able to respond to your problem.

Please give more detail about what is being assessed. Is there a gold standard, measured or latent, for what these technologies are trying to agree upon?

* What is the first technology, the one that measures characteristics and arrives at a pass-fail decision? How does it make this decision? Was age one of those characteristics?

* How was the cut point that converts y2a into the pass-fail y2b arrived at?

* You say that the variability of y2a increases with age. Is the level of y2a related to age?

* "Agreement" is usually defined in two ways: 1) agreement on the marginal proportions of the outcomes; 2) agreement of individual decisions. The first is traditionally assessed by McNemar's test; the second by kappa. (See any book on categorical data).

Kappa corrects for chance agreement. The problem with measuring just the percentage of agreement, as you propose, is that it may be quite high by chance. Take two raters who independently make decisions with random numbers. Assume that each scores a "fail" with probability 0.20. The probability that they agree will be 0.8 x 0.8 + 0.2 x 0.2 = 0.68, purely by chance.
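A quick simulation makes the point (a minimal sketch; the seed and number of observations are arbitrary):

    clear
    set obs 100000
    set seed 12345
    gen byte fail1 = runiform() < .20   // rater 1 fails 20% at random
    gen byte fail2 = runiform() < .20   // rater 2, independent of rater 1
    gen byte agree = fail1 == fail2
    summarize agree                     // mean is about .68 by chance alone
    kap fail1 fail2                     // kappa is near 0, as it should be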

Kappa is not easy to model, but you could do it if you modeled all four cells of the 2x2 table of technology one vs. technology two with a multinomial logistic or probit, followed by -nlcom-. However, I think that a jackknife or bootstrap confidence interval would be better. If age is a predictor (entered flexibly, perhaps through -fracpoly-), this might account for the age-related SD of the measurement.
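For the resampling route, one minimal sketch (again with hypothetical variable names) exploits the fact that -kap- stores its estimate in r(kappa):

    bootstrap kappa = r(kappa), reps(1000) seed(1): kap y1 y2b
    * or the jackknife analogue:
    jackknife kappa = r(kappa): kap y1 y2b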

-Steve


On Feb 26, 2009, at 8:30 PM, Supnithadnaporn, Anupit wrote:

Dear Stata List members,

I am trying to learn what factors might determine the inter-rater agreement between two tests of the same subject using different technologies. One technology measures the characteristics of a subject and gives a pass-fail result (y1). The other technology gives an interval measurement (y2a), from which a pass-fail result (y2b) can be obtained using a certain threshold.

My simple idea is to run a logit regression of a dependent variable called 'agree', which takes the value 1 if both tests yield the same result (y1 = y2b) and 0 otherwise.

agree = f(age, x1, x2, x3, ...)
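In Stata that idea would look something like this minimal sketch (x1-x3 are the placeholder covariates from the formula above):

    gen byte agree = (y1 == y2b)
    logit agree age x1 x2 x3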

The problem is that, as 'age' increases, I know that the variance of y2a also increases. This is due to the nature (deterioration) of the subject itself. Given that the quality of the instrument measuring y2a is constant, how should I take the increasing variance of y2a into account in my model?
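One possible direction, offered here only as an assumption and not something settled in this thread: a heteroskedastic probit lets the latent error variance depend on age explicitly.

    * -hetprob- (renamed -hetprobit- in later Stata releases)
    hetprob agree age x1 x2 x3, het(age)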




