



Fwd: st: reliability with -icc- and -estat icc-


From   Lenny Lesser <[email protected]>
To   [email protected]
Subject   Fwd: st: reliability with -icc- and -estat icc-
Date   Tue, 26 Feb 2013 08:16:19 -0800

Thanks Rebecca,
With that code, I get the same problem when I eliminate one rater.

The var(Rater) goes to zero, which makes my ICC 0 rather than going up
to a higher number as I expected.


---------- Forwarded message ----------
From: Rebecca Pope <[email protected]>
Date: Tue, Feb 26, 2013 at 7:08 AM
Subject: Re: st: reliability with -icc- and -estat icc-
To: [email protected]


Lenny,
I don't think you've got the correct syntax for -xtmixed- if you are
trying to duplicate ANOVA results, which is the type of analysis that
-icc- appears to conduct (documentation is still limited, so I won't
swear to anything).

Use this syntax for -xtmixed-:
xtmixed rank i.Application || _all: R.Rater, reml var

-estat icc- is not a valid post-estimation command after this
specification. However, you can just use the definition that ICC =
Var(Rater)/(Var(Rater)+Var(Residual)).
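
For example, a sketch of doing that by hand with -nlcom- (the equation
names lns1_1_1 and lnsig_e are what -xtmixed- uses for this
specification on my machine; they hold log standard deviations, so run
-matrix list e(b)- and adjust the names if yours differ):

xtmixed rank i.Application || _all: R.Rater, reml var
* ICC = Var(Rater)/(Var(Rater)+Var(Residual)), with each variance
* recovered from its log standard deviation stored in e(b)
nlcom (icc: exp(2*_b[lns1_1_1:_cons]) / ///
    (exp(2*_b[lns1_1_1:_cons]) + exp(2*_b[lnsig_e:_cons])))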

You might also want to take a look at
http://www.ats.ucla.edu/stat/stata/faq/xtmixed.htm, which gives
instructions for using -xtmixed- to conduct ANOVA-type analyses (the
examples use Stata 10, so you'll need to modify them somewhat).

Regards,
Rebecca



On Mon, Feb 25, 2013 at 10:56 PM, Lenny Lesser <[email protected]> wrote:
> I have 4 raters who each gave a score of 0-100 to 11 smartphone
> applications.  The data are skewed right, as all the applications got
> low scores.  I'm using the ranks (within an individual) instead of the
> actual scores.  I want to know the correlation in ranking between the
> different raters.
>
> I've tried the two commands:
>
> -xtmixed rank Application || Rater: , reml
> -estat icc
>
> (icc=0.19)
>
> and
>
> -icc rank Rater Application, mixed consistency
>
> (icc=0.34)
>
> They give me two different answers. Which one is correct?
>
>
> Next, we found out that rater 4 was off the charts, and we want to
> eliminate her and rerun the analysis. When we do this, we get wacky
> ICCs.  In the first method we get an ICC of 2e-26.  In the second
> method (-icc), we get -.06.  Eliminating any of the other raters gives
> us ICCs close to the original ICC.  Why are we getting such a crazy
> number when we eliminate this 4th rater?
>
>
> I'm guessing this might be instability in the model, but I'm not sure
> how to get around it.
>
> Lenny
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/

