



st: Re: Repeated Measures ANOVA vs. Friedman test


From   "Joseph Coveney" <jcoveney@bigplanet.com>
To   <statalist@hsphsun2.harvard.edu>
Subject   st: Re: Repeated Measures ANOVA vs. Friedman test
Date   Tue, 22 May 2012 21:52:14 +0900

Steve wrote:

I was going to compare some data from a pilot study where there were
repeated measures taken from subjects (1 measurement each at baseline,
2 weeks and 4 weeks).  I've got a small sample size (n=6 per group)
and the outcome of interest is a continuous variable.  My question is
whether I can use a repeated measures ANOVA to evaluate such a small
sample size or whether I should go with Friedman?  Or should I use
something else - a mixed model perhaps?

I did draw histograms and box plots to see what the distributions look
like, and the data appear more or less normally distributed, but it's hard
to really say with such a small sample size.  Additionally, -sktest- gave a
p-value >0.05.  So is it okay to use RM ANOVA for an n of 6 per group?

--------------------------------------------------------------------------------

You say "n=6 per group", which implies that you've got a split-plot design or
something related to it.  How were you going to use Friedman's test, unless your
hypothesis of interest is solely change over time pooled across both (all?)
groups?

I'd stick with parametric tests.

1. If your hypothesis is mainly about group differences, then you can sum across
times and do a t-test (one-way ANOVA) on the within-person sums.  Summing helps
normalize.
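A minimal Stata sketch of this first approach, assuming wide-format data with one row per subject, outcome variables y0, y2, and y4 (baseline, week 2, week 4), and a group variable -group- (all variable names are hypothetical):

```stata
* Sum the repeated measurements within each subject
egen ysum = rowtotal(y0 y2 y4)

* Two groups: t-test on the within-person sums
ttest ysum, by(group)

* More than two groups: one-way ANOVA instead
* oneway ysum group
```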

2. If your hypothesis is mainly about time differences, then do paired t-tests
against baseline.  Differencing helps normalize.  (I wouldn't worry about
multiple comparisons here--this is a pilot study after all.)
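Under the same assumed wide-format layout (y0, y2, y4 as hypothetical variable names), the paired comparisons against baseline would look like:

```stata
* Paired t-tests of each follow-up time against baseline;
* ttest with == and no unpaired option treats the data as paired
ttest y2 == y0
ttest y4 == y0
```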

3. If your scientific interest is focused on a group-by-time interaction, then
rank-based tests become problematic, anyway.
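If the group-by-time interaction is the target, a parametric split-plot (repeated-measures) ANOVA handles it directly.  A sketch assuming long-format data with one row per subject per time and hypothetical variables y, group, time, and id:

```stata
* Split-plot ANOVA: subjects nested within group,
* with the Greenhouse-Geisser etc. corrections via repeated()
anova y group / id|group time group#time, repeated(time)

* Or, as the original poster suggested, a mixed model
* (in Stata 12, the command was -xtmixed-):
* xtmixed y i.group##i.time || id:
```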

In the spirit of a pilot study, I would not agonize over how the p-values are
affected by non-normality; the purposes are to get a sense of the quality of the
data (their properties, characteristics etc.), to decide whether there's any
point in going further, and if so to get an idea about sample size and whether
the study design is suitable as-is.  

As far as normality goes, I would graph* the residuals to scan for gross
deviations from normality as part of the background information to consider when
thinking about the design and inferential methods of the main study.

Joseph Coveney

*Type "help diagnostic_plots" in Stata's command line.  Residual plots are part
of getting a feel for the quality of the data.
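For example, after fitting the ANOVA above, the residual checks might run along these lines (variable names again hypothetical):

```stata
* Obtain residuals from the most recent estimation
predict resid, residuals

* Normal quantile plot and histogram with a normal overlay
qnorm resid
histogram resid, normal
```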


*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
