Re: st: Posthoc power analysis for linear mixed effect model


From   Joerg Luedicke <[email protected]>
To   [email protected]
Subject   Re: st: Posthoc power analysis for linear mixed effect model
Date   Sun, 9 Mar 2014 17:08:39 -0400

I would use Monte Carlo simulations here; see the reference that Jeph
provided for a nice Stata-related introduction to simulation-based
power analysis. For your purposes, you could use -powersim- (from SSC;
type -ssc install powersim- in Stata to install it), but make sure to
read the tutorial first
(http://fmwww.bc.edu/repec/bocode/p/powersim_tutorial.pdf). Example 5
demonstrates the use of -powersim- for a multilevel model design.
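
A bare-bones version of the general simulation idea for a growth-curve
design might look like the sketch below. This is not -powersim- itself,
and the effect size, variance components, number of replications, and
sample sizes are made-up values that you would replace with your own
assumptions:

* ---------------------------------
cap program drop mixedpow
program define mixedpow, rclass

drop _all
set obs 140                              // number of subjects
gen id = _n
gen u0 = rnormal(0, 1)                   // random intercept, assumed SD 1
gen u1 = rnormal(0, 0.1)                 // random slope, assumed SD 0.1
expand 10                                // 10 annual measurements per subject
bysort id: gen time = _n - 1
gen e = rnormal(0, 1)                    // residual error, assumed SD 1
gen bmi = 25 + (0.2 + u1)*time + u0 + e  // assumed fixed time slope of 0.2

xtmixed bmi time || id: time, cov(unstructured)
local p = 2*normal(-abs(_b[time]/_se[time]))
return scalar sig = (`p' < 0.05)         // significant at the 5% level?
end

simulate sig = r(sig), reps(100) seed(1234) : mixedpow
summarize sig                            // mean of sig estimates the power
* ---------------------------------

In a real application you would plug in the population BMI parameters you
have in hand and vary the assumed effect size over a plausible range,
which is the kind of loop that -powersim- is designed to automate.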

Joerg

On Sat, Mar 8, 2014 at 8:55 PM, Mohammod Mostazir <[email protected]> wrote:
> Hi Jeph & Joerg,
>
> Thanks to both of you for your valuable comments and the valuable time
> you put into this. Perhaps Stata's 'simpower' does something similar to
> what Jeph suggested, and I can see that Joerg has valid points too.
> Actually, behind the 'posthoc' issue, my intention with this question was
> to learn about power analysis for mixed-effects designs in Stata. Forget
> about the posthoc analysis. Say you were to conduct the same study with
> 140 cases and you had provision for 10 repeated measurements: how would
> you carry out the power analysis in Stata, given that you know your
> future analysis is going to be a linear mixed-effects design and you have
> the age-specific population BMI parameters in hand? One limitation will
> certainly be that the population parameters come from different groups
> rather than from repeated observations on the same group. Considering
> this limitation (trading it off against the educated guess), what would
> be the Stata procedure for power analysis for such a study?
>
> Thanks.
> Mostazir
> Research Fellow in Medical Statistics
> University of Exeter,
> Sir Henry Wellcome Building for Mood Disorders Research
> Perry Road, Exeter EX4 4QG
> United Kingdom
> Phone: +44 (0) 1392 724629
> Fax: +44 (0) 1392 724003
> web: http://www.exeter.ac.uk/biomedicalhub/team/mrmohammodmostazir/
>
> On 8 March 2014 00:43, Joerg Luedicke <[email protected]> wrote:
>>> *  Unless one calculates the curve as you have, one will not know
>>>    the power that corresponds to the p-value
>>
>> But what exactly could one learn from such values? For example, say we
>> have a p-value of 0.2 with an "observed power" of 0.2; we could
>> _not_ conclude that the test may have yielded an insignificant result
>> _because_ of low power. Likewise, some may argue that not only did
>> their test yield a significant result, it was also strongly powered,
>> which is a similarly empty argument. Larger p-values always correspond
>> to lower "observed power", and calculating the latter does not
>> add _any_ information.
>>
>>> *  Most often, one wants to know the power to detect a true effect,
>>>    not the observed effect, in which case one cannot infer anything
>>>    from the observed effect or the p-value.
>>
>> I am not sure I understand this. What often makes sense, however,
>> is to simulate data under a variety of assumptions and plausible
>> effect sizes, both pro- and retrospectively. For example, it can often
>> be very instructive to inspect the expected distributions of parameters
>> (under certain assumptions and possibly over a range of plausible
>> effect sizes) with regard to things like the sign of the effect (e.g.,
>> with assumed effect size d under model m and a given sample size n,
>> what would be the probability of an estimated parameter having the
>> wrong sign?) or its magnitude, which can help to put one's observed
>> estimates into perspective. Andrew Gelman & John Carlin call these
>> "design calculations", and as they put it: "The relevant question is
>> not, 'What is the power of a test?' but rather, 'What might be
>> expected to happen in studies of this size?'" (see:
>> http://www.stat.columbia.edu/~gelman/research/unpublished/retropower.pdf)
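>>
>> For illustration, a minimal sketch of such a design calculation might
>> look like the following, where the assumed true effect of 0.1 SD, the
>> sample size of 200, and the simple two-group comparison are made-up
>> values for the example:
>>
>> * ---------------------------------
>> cap program drop signcalc
>> program define signcalc, rclass
>>
>> drop _all
>> set obs 200
>> gen x = mod(_n-1,2)
>> gen y = 0.1*x + rnormal()     // assumed true effect of 0.1 SD
>> regress y x
>> return scalar b = _b[x]
>> end
>>
>> simulate b = r(b), reps(1000) seed(5678) : signcalc
>>
>> * probability of an estimate with the wrong (negative) sign, and the
>> * distribution of its magnitude, under the assumed effect size
>> gen wrongsign = (b < 0)
>> summarize wrongsign b
>> * ---------------------------------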
>>
>> Joerg
>>
>> On Fri, Mar 7, 2014 at 5:09 PM, Jeph Herrin <[email protected]> wrote:
>>> Yes, but:
>>>
>>> *  Unless one calculates the curve as you have, one will not know
>>>    the power that corresponds to the p-value; and,
>>> *  Most often, one wants to know the power to detect a true effect,
>>>    not the observed effect, in which case one cannot infer anything
>>>    from the observed effect or the p-value.
>>>
>>> No?
>>>
>>> Jeph
>>>
>>>
>>>
>>> On 3/7/2014 4:43 PM, Joerg Luedicke wrote:
>>>>
>>>> I'd recommend not doing that at all, because a post-hoc power analysis
>>>> is a fairly useless endeavor, to say the least. The reason is
>>>> that the "observed" power, i.e. the calculated power that you obtain
>>>> by using the estimates from your model, is a 1:1 function of the
>>>> p-values of those estimates. Therefore, calculating post-hoc power
>>>> doesn't add any information to what you already have! See Hoenig &
>>>> Heisey (2001) for an account of this. Below is an example where we
>>>> repeatedly compare means between two groups, store the "observed"
>>>> power and p-value from each comparison, and then plot power as a
>>>> function of p-value:
>>>>
>>>> * ---------------------------------
>>>> cap program drop obsp
>>>> program define obsp, rclass
>>>>
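>>>> * generate two groups of 100 observations with a true mean difference of 0.1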
>>>> drop _all
>>>> set obs 200
>>>> gen x = mod(_n-1,2)
>>>> gen e = rnormal()
>>>> gen y = 0.1*x + e
>>>>
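>>>> * compare the group means and keep the test results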
>>>> ttest y, by(x)
>>>> local p = r(p)
>>>> local m1 = r(mu_1)
>>>> local m2 = r(mu_2)
>>>> local sd1 = r(sd_1)
>>>> local sd2 = r(sd_2)
>>>>
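>>>> * "observed" power: plug the estimated means and SDs back into -power-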
>>>> power twomeans `m1' `m2' , sd1(`sd1') sd2(`sd2') n(200)
>>>> return scalar p = `p'
>>>> return scalar power = r(power)
>>>> end
>>>>
>>>> simulate power = r(power) p = r(p) , reps(100) seed(1234) : obsp
>>>>
>>>> scatter power p, connect(l) sort ///
>>>> ytitle(`""Observed" power"') ///
>>>> xtitle("p-value")
>>>> * ---------------------------------
>>>>
>>>> Joerg
>>>>
>>>> Reference:
>>>> Hoenig, JM & DM Heisey (2001): The Abuse of Power: The Pervasive
>>>> Fallacy of Power Calculations for Data Analysis. The American
>>>> Statistician 55(1): 19-24.
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Mar 7, 2014 at 2:55 PM, Mohammod Mostazir <[email protected]>
>>>> wrote:
>>>>>
>>>>> Dear great stat-warriors,
>>>>>
>>>>> I need some Stata-related H--E--L--P here. I have a dataset with
>>>>> repeated BMI (Body Mass Index; continuous scale) measurements at 10
>>>>> equally spaced annual time points for 140 cases. The interest is in
>>>>> observing change in BMI in relation to other time-constant and
>>>>> time-varying covariates. The analysis I have carried out is a linear
>>>>> mixed-effects model using Stata's 'xtmixed' command with random
>>>>> intercepts and slopes. Now I would like to carry out a posthoc power
>>>>> analysis to see how much power the study has. Is there any light in
>>>>> Stata in relation to this? I have seen Stata's 'power repeated'
>>>>> command, which does not suit here as it is meant for one-/two-way
>>>>> repeated-measures ANOVA designs.
>>>>>
>>>>> Any comment is highly appreciated. Thanks for reading.
>>>>>
>>>>> Best,
>>>>>
>>>>> Mos