
Re: st: Unbalanced repeated measures analysis question


From   K Jensen <[email protected]>
To   [email protected]
Subject   Re: st: Unbalanced repeated measures analysis question
Date   Thu, 22 Jul 2010 17:22:08 +0100

Thanks for this, David.

The design was that there were three sessions (say).  Each judge was
trained on one type of measurement (too expensive to do all!), so
there is only one measurement per judge per subject.  The gold
standard is the same and fixed for each subject.  Session is just an
organizational variable.  Within a session, most judges measured most
(but not all) of the subjects present at that session.  Both subjects and
judges were completely different between sessions.

The hypothesis we are testing is that the mean of the difference from
the subject's gold standard is the same for all three measurement
types.  We are also interested in getting point estimates of these
mean differences and characterizing their SEs, adjusted for correlation
by judge and subject, to get CIs.
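
To make this concrete, the kind of command I was hoping to try, along
the lines of the sketch in your message below, is something like the
following (variable names invented, completely untested):

* diff = judge's measurement minus the subject's gold standard
* subject and judge as crossed random intercepts; method and session fixed
xi: xtmixed diff i.mtype i.session || _all: R.subject || judge:, reml

* joint test that the mean difference is the same for all three methods
testparm _Imtype*

* e.g. point estimate and CI of the mean difference for method 2
* (at the baseline session)
lincom _cons + _Imtype_2

Does that look like a sensible starting point?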

I will try to order books or see if we can budget for a statistician,
but I thought that this might be something I can do myself. I would
really like to try and would deeply appreciate any further help.

Thank you

Karin

On 22 July 2010 16:47, Airey, David C <[email protected]> wrote:
> .
>
> I posted a message recently pointing to a book for
> non-statisticians that I like that covers mixed models:
>
> <http://www-personal.umich.edu/~bwest/almmussp.html>
>
> If I can reiterate your design:
>
> You have multiple sessions, let's say at least 3. At each
> session, the same N subjects return to be measured by a different
> set of judges, although you have the occasional missing subject
> at a given session. Each judge measures every subject on a given
> session with (a) one of 3 novel methods, and (b) the gold
> standard method. The endpoint measure is the difference between
> (a) and (b).
>
> It's not clear what your null hypothesis is, as testing
> equivalence is not the same as testing differences (e.g.,
> <http://www.graphpad.com/library/BiostatsSpecial/article_182.htm>).
>
> Design:
>
> Subject crossed with session: subjects return to each session
> Judge crossed with subject: on a given session, every judge
> measures every subject
> Judge nested in session: there are different judges for each
> session
> (?) Judge nested in method: on a given session, a judge uses only
> one method, plus the gold standard
> Method crossed with subject: on a given session, each subject is
> measured on all three methods etc.
>
> factors: subject, judge, method, session
>
>        subject and judge are random effects
>        method is a fixed effect
>        session is a fixed effect
>
> correlations/clustering/repeated measures:
>
>        subjects are measured repeatedly over sessions
>        crossed random effects (subject x judge)
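>
> As a rough sketch (untested, with names shortened from your data
> listing), the crossed subject x judge structure can be written in
> -xtmixed- using the _all: R. device:
>
>   xi: xtmixed diff i.mtype i.session || _all: R.subject || judge:
>
> i.e. one random intercept per subject and one per judge, with
> method and session as fixed effects.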
>
> Before trying any analysis, I'd graph the data in any way
> possible that might help see what is going on, for example, by
> tying subjects together over session. I'd try to keep the graphs
> as atomized as possible to show as much of the raw data as
> possible, as well as making graphs of group means.
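>
> For example (untested; using the shortened names from above):
>
>   graph box diff, over(mtype)
>   twoway scatter diff subject, by(mtype)
>
> i.e. boxplots of the differences by method, and the raw differences
> per subject split out by method.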
>
> If I were in your shoes, depending on the importance of the
> study, I'd take the design and graphs and a statement of the
> goals to a statistician, to understand what can be salvaged using
> mixed models, or if the analysis could be simplified by ignoring
> (collapsing over) some factors.
>
> Good luck!
>
>
>> Hi
>>
>> I have data on measuring a biological property using three different
>> methods plus a gold standard. Different people were trained in each
>> method (1, 2, or 3) and measured the same subjects during different
>> sessions, together with the gold standard measurement.
>>
>> So the data look like
>> SubjectID MeasurerID MeasurerType Result GoldStandard  Diff
>> 1         1          1            95     99            -4
>> 1         2          3            102    99            +3
>> 1         3          2            92     99            -7
>> ...
>> 1        10          3            105    99            +6
>> 2         1          3             98   100            -2
>> ...
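>>
>> where Diff is just the measurement minus the gold standard, e.g.
>> (assuming lowercased variable names):
>>
>>   generate diff = result - goldstandard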
>>
>> Sometimes patients would be called in to see the consultant and so
>> were missed by a particular measurer, but otherwise all the measurers
>> would measure all the patients seen in a particular session. Different
>> sets of measurers (each trained in method 1, 2, or 3) were used in
>> each session (individual measurers 1-10 in session 1, 11-20 in session
>> 2, etc.).
>>
>> The gold standard measurements on each session are roughly normally
>> distributed, as are the differences from the gold standard. We are
>> interested in the accuracy of each of the three methods.
>>
>> Is it OK to do some sort of repeated measures ANOVA here, with an
>> unbalanced design? If it is, what would be the syntax (Stata 10)? Sorry
>> to sound pathetic, but I just can't get the -anova- command with the
>> repeated() option to work here.
>>
>> Is there a better measure to use than the difference to reflect the
>> fact that we are interested in a comparison with a gold standard?
>>
>> Thank you
>> Karin
>


