Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.

# RE: st: sampsi for repeated measure t test

From: "Pesola, Francesca"
To: "statalist@hsphsun2.harvard.edu"
Subject: RE: st: sampsi for repeated measure t test
Date: Thu, 3 Apr 2014 08:42:52 +0000

```Dear Paul,

Thank you very much for your reply and the quotation, very interesting!

I have run sampsi as you suggested. However, I was also asked to account for clustering, and the ICC is 0.45 (very high).

I ran sampclus, rho(0.45) numclus(10), and sampclus reports:

For this rho, the minimum number of clusters possible is: 41

So, of course, the suggested N is rather large:

n1 (uncorrected) = 89
Intraclass correlation     = .45
Average obs. per cluster   = 52
Minimum number of clusters = 41
Estimated sample size per group:
n1 (corrected) = 2132
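
(For reference, the corrected figure is consistent with the uncorrected n inflated by the standard design effect, 1 + (m - 1)*rho, where m is the average cluster size. I am assuming this is what sampclus computes, and the numbers bear it out:

. display ceil(89 * (1 + (52 - 1)*0.45))
2132

The minimum number of clusters is then the limit of the corrected n divided by m as m grows, i.e. roughly ceil(n1 * rho):

. display ceil(89 * 0.45)
41
)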

I was asked to estimate the sample size, but I am not 100% sure whether accounting for clustering is necessary, as clustering cannot be accounted for in a repeated-measures t-test. Do you have a view on this?

Thanks,
Francesca

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Seed, Paul
Sent: 02 April 2014 19:00
To: statalist@hsphsun2.harvard.edu
Subject: RE: st: sampsi for repeated measure t test

Dear Francesca,

If I understand correctly, you have carried out a paired t-test, and noted the actual means and standard deviations.

Something like:
ttest x1 == x0

If so, you will also have the mean and SD of the differences; that is all you need.
Call them m_ and sd_.

You can now carry out your post-hoc power calculation using
sampsi m_ 0, sd(sd_) onesample
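
(Under sampsi's defaults, alpha = 0.05 two-sided and power = 0.90, this solves the standard one-sample formula n = ((z_[1-alpha/2] + z_[1-beta]) * sd_ / m_)^2. With illustrative values m_ = 0.5 and sd_ = 2, say, you can check it by hand in Stata:

. display ceil(((invnormal(0.975) + invnormal(0.90)) * 2/0.5)^2)
169

which should match, up to rounding conventions, what "sampsi 0.5 0, sd(2) onesample" reports.)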

However, I have to say it is a fairly pointless procedure.

My favourite quotation on this is getting a bit old now, but it is very much to the point, and the article is worth reading.

"Although there is a growing understanding of the importance of statistical power considerations when designing studies and of the value of confidence intervals when interpreting data, confusion exists about the reverse arrangement: the role of confidence intervals in study design and of power in interpretation. Confidence intervals should play an important role when setting sample size, and power should play no role once the data have been collected, but exactly the opposite procedure is widely practiced. In this commentary, we present the reasons why the calculation of power after a study is over is inappropriate and how confidence intervals can be used during both study design and study interpretation."
```