
Re: st: Re: Degrees of freedom for xtmixed

From   Jordan Silberman <>
Subject   Re: st: Re: Degrees of freedom for xtmixed
Date   Sat, 22 Jun 2013 15:31:38 -0400

At least one reference (Hox's Multilevel Analysis: Techniques and
Applications, p. 46) states that the Raudenbush and Bryk approach to
degrees of freedom is conservative, particularly when compared with
asymptotic significance tests like those of xtmixed. This makes sense,
because the p value associated with a z ratio is always smaller than
the p value associated with the same t ratio, though the difference
between the two becomes negligible with large samples.
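To make the z-versus-t comparison concrete, here is a minimal Python sketch (the test statistic 2.1 is an arbitrary illustrative value, not output from any model). The z-based p value is always the smaller of the two, and the gap shrinks as the degrees of freedom grow:

```python
from scipy import stats

ratio = 2.1  # an arbitrary illustrative test statistic
p_z = 2 * stats.norm.sf(ratio)  # asymptotic (z) p value, as xtmixed reports
for df in (5, 10, 30, 1000):
    # p value from a t reference distribution with finite df
    p_t = 2 * stats.t.sf(ratio, df)
    print(f"df={df:5d}: t-based p={p_t:.4f} vs z-based p={p_z:.4f}")
```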

I certainly agree that it would be nice to have Kenward-Roger
approximations with xtmixed! Hopefully Stata will get around to this
soon. It seems to me that Stata 13 provides many advanced options for
mixed-effects models, but neglects the more basic issue of a
degrees-of-freedom approximation that works for small samples.
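For what it's worth, as I understand the HLM manuals, the Raudenbush-Bryk rule for a coefficient on a level-2 (cluster-level) predictor is essentially df = J - Q - 1, where J is the number of level-2 units and Q the number of level-2 predictors in that equation. A minimal sketch of the resulting test (J, Q, and the t ratio are illustrative assumptions, not real output):

```python
from scipy import stats

# Hypothetical small multilevel design (all numbers are illustrative):
J = 12        # number of level-2 units (clusters)
Q = 2         # number of level-2 predictors in the intercept equation
t_ratio = 2.2  # a fixed-effect estimate divided by its standard error

df = J - Q - 1  # Raudenbush-Bryk df for a level-2 coefficient
p_rb = 2 * stats.t.sf(abs(t_ratio), df)  # HLM-style t test
p_z = 2 * stats.norm.sf(abs(t_ratio))    # xtmixed-style asymptotic z test
print(f"df={df}: t-based p={p_rb:.3f}, z-based p={p_z:.3f}")
```

With only a dozen clusters the two p values can straddle the conventional 0.05 threshold, which is exactly the small-sample situation at issue.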

The Monte Carlo approach you suggest sounds interesting, but for me it
would be prohibitively time-consuming. And, fair or not, I suspect
many reviewers would not understand it.
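That said, the simulation idea can be sketched compactly. The Python sketch below (all numbers, and the simple cluster-mean estimator standing in for a full mixed-model fit, are illustrative assumptions) simulates data under the null with the variance components fixed at assumed (RE)ML estimates, recomputes the test statistic each time, and reads off a simulated critical value to compare against the asymptotic 1.96:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small balanced design: J clusters of n observations each,
# with variance components fixed at assumed (RE)ML estimates.
J, n = 10, 5
sigma_u2, sigma_e2 = 0.5, 1.0    # assumed variance-component estimates
x = np.repeat([0, 1], J // 2)    # cluster-level treatment indicator

def wald_z(y, x):
    """z-ratio for the treatment contrast based on cluster means.

    In a balanced design, the fixed-effect estimate for a cluster-level
    covariate depends on the data only through the cluster means, so this
    simple estimator stands in for the mixed-model fit here."""
    m = y.mean(axis=1)                        # cluster means
    d = m[x == 1].mean() - m[x == 0].mean()   # treatment contrast
    # variance of the contrast from the between-cluster spread,
    # plugged in as if known (a z ratio, not a t ratio)
    v = (m[x == 1].var(ddof=1) + m[x == 0].var(ddof=1)) / (J // 2)
    return d / np.sqrt(v)

# Simulate the null distribution of the statistic
B = 2000
zs = np.empty(B)
for b in range(B):
    u = rng.normal(0, np.sqrt(sigma_u2), size=J)       # random intercepts
    e = rng.normal(0, np.sqrt(sigma_e2), size=(J, n))  # residuals
    y = u[:, None] + e                                 # no treatment effect
    zs[b] = wald_z(y, x)

crit = np.quantile(np.abs(zs), 0.95)  # simulated 5% critical value
print(round(crit, 2), "vs 1.96 asymptotic")
```

With only ten clusters, the simulated critical value comes out noticeably larger than 1.96, which is the anti-conservatism of the asymptotic test that the simulation is meant to expose.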

Thanks for your response.


On Sat, Jun 22, 2013 at 4:50 AM, Joseph Coveney <> wrote:
> Jordan Silberman wrote:
> The following URL summarizes the simple formulas used to compute
> degrees of freedom in the HLM 7.0 software:
> Is there any reason why one can't use these formulas to assign df
> values to the fixed effects
> estimated by xtmixed? I realize that this approach might be
> conservative, which is fine for my purposes.
> --------------------------------------------------------------------------------
> Are they conservative?  For some reason, I had always believed that these kinds of simple degrees-of-freedom formulas tended to yield anti-conservative p-value estimates for fixed-effects contrasts.
> While we're waiting for, say, Kenward-Roger approximations, can't we simulate under the null to get an estimate of the quantile of the test statistic with reasonable precision?  How far astray would we be led using the observed values of the variance components (REML) as first-pass estimates in the simulation?  (And if it's worrisome, aren't there formulas that you can use to compute the worst-case bound for downward bias in these estimates for both REML and MLE, at least for common models?)
> With the small samples where this whole matter arises, and with Stata MP, -xtmixed- shouldn't be so prohibitively slow as to rule out Monte Carlo simulation to get a handle on the distribution of the test statistic under the null hypothesis for a given fixed-effects contrast, as long as you're not interested in something way out in the tails of the distribution.  Is there something that makes this approach impractical or invalid?
> Joseph Coveney
