
Re: st: Discrepancy between metan vs metareg with one variable

From   Paul Karner <>
Subject   Re: st: Discrepancy between metan vs metareg with one variable
Date   Tue, 18 Dec 2012 12:08:52 -0500

Thanks for your reply.

The estimates can be quite different -- in one instance they differ by
more than 10%.  I've tried the different options for specifying the
method of variance estimation, and none of them resolves the large
discrepancy.

Potentially related: when I calculate the test statistic for the
difference between the effect sizes of two subgroups, the z-statistic
from the meta-regression is much smaller than the z-statistic I compute
by hand [using the pooled standard error estimate
sqrt(se_group1^2 + se_group0^2)].  This happens only with
random-effects meta-regression (the hand-calculated z-statistic is
identical to the one reported by -vwls-).  Is this because the standard
errors on which the coefficient test statistics are based in
random-effects meta-regression also account for between-study
heterogeneity?  If so, does anyone know exactly how one would replicate
the z-statistics reported by -metareg- by hand?
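For concreteness, here is a minimal sketch of the hand calculation
described above, with made-up subgroup estimates and standard errors
(the numbers are purely illustrative, not from my data):

```python
import math

# Hypothetical pooled effects and standard errors for the two subgroups
# (illustrative values only).
b1, se1 = 0.45, 0.10   # subgroup with group1 == 1
b0, se0 = 0.20, 0.12   # subgroup with group1 == 0

# Hand-calculated z-statistic for the subgroup difference, using the
# pooled standard error sqrt(se_group1^2 + se_group0^2).
se_diff = math.sqrt(se1**2 + se0**2)
z = (b1 - b0) / se_diff   # z is about 1.60 for these numbers
print(z)
```

With random-effects meta-regression the reported standard error will
generally be larger than this, since the between-study variance enters
the weights.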

Thank you again for your insight!


On Tue, Dec 18, 2012 at 10:09 AM, JVerkuilen (Gmail)
<> wrote:
> On Tue, Dec 18, 2012 at 9:30 AM, Paul Karner <> wrote:
>> I have a question about Stata's (user-written) meta analysis functions.
>> When I run metareg with a single variable ("group1"), why is the
>> coefficient estimate on group1 slightly different from what I get when
>> I run metan, random, by(group1) and subtract the effect estimates for
>> the two subgroups (i.e., group1==1 and group1==0)?
> How different is different? The two commands may be using slightly
> different estimators by default, so for instance if one is using
> DerSimonian-Laird and the other REML, you're going to see differences.
> You need to check that you're estimating the model the same way and
> then compare. Even then they may well not line up perfectly if there
> are other slight differences in the programming of the two commands,
> but they should agree to about four decimal places.
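A rough sketch of why the choice of between-study variance estimator
matters: the DerSimonian-Laird moment estimate of tau^2 enters the
random-effects weights, so a different tau^2 (e.g. from REML) shifts
the pooled estimate. All numbers below are made up for illustration:

```python
import math

# Illustrative per-study effect estimates and within-study variances.
effects = [0.05, 0.60, 0.40, -0.10]
variances = [0.04, 0.02, 0.05, 0.03]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1.0 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and the DerSimonian-Laird moment estimator of tau^2,
# truncated at zero.
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each within-study variance, so
# the pooled estimate (and its standard error) differ from the
# fixed-effect result whenever tau2 > 0.
w_re = [1.0 / (v + tau2) for v in variances]
random_pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
print(fixed, tau2, random_pooled)
```

Swapping in a different tau^2 estimator (REML, empirical Bayes, etc.)
changes `tau2` and hence `random_pooled`, which is the kind of
small-but-real discrepancy described in the question.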