RE: st: Trend test in meta analysis
"Jayaprakash, Vijay" <Vijay.Jayaprakash@roswellpark.org>
Sat, 22 Dec 2007 14:09:52 -0500
Hi Tom, Austin & Ben,
Thanks for the reply. The comments and the references were very helpful. As a student trying out meta-analysis on my own, I found this information really useful.
Thanks again, and happy holidays to all of you.
From: firstname.lastname@example.org on behalf of Tom Trikalinos
Sent: Sat 12/22/2007 11:34 AM
Subject: Re: st: Trend test in meta analysis
Meta-analysis of observational epi studies is very often performed. I
do not see it as a fundamental problem as long as the analyst is
cognizant of the pertinent issues and careful in the interpretation of
any findings. It is the analyst's duty to interpret the results of a
synthesis (or a meta-regression or any other exploration) critically -
quite in the spirit of your comments.
Providing a grand mean is not the sole reason for meta-analysis. In
fact, in some cases it may even be misleading. Other goals of equal
importance are to explore and describe between-study heterogeneity, and
to try to identify traces of systematic errors at the study level and
at the "scientific field" level.
That being said, meta-analytic techniques may be invaluable tools for
empirical research. Some have made an enviable career demonstrating
things many methodologists believed or anticipated on theoretical
grounds - but no one had actually *shown* before.
As for the technical issue... the papers in the previous e-mails and
the ones cited in them have dealt with the problem extensively... your
sign test idea or some modification thereof is definitely interesting.
On Dec 21, 2007 9:23 PM, Austin Nichols <email@example.com> wrote:
> Vijay, Tom, and Ben:
> I have read neither of the references cited by Ben and Tom, but it
> seems to me a more fundamental problem is that the meta-analysis
> includes observational studies (I assume). The principle of
> meta-analysis is to combine several low-precision unbiased estimates
> to get a high-precision unbiased estimate; if every study exhibits a
> positive bias, can one even consider a meta-analysis? In that case,
> some kind of sensitivity testing à la Rosenbaum's _Observational
> Studies_ seems in order.
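The pooling principle Austin describes (combining several low-precision unbiased estimates into one high-precision unbiased estimate) can be sketched outside Stata. A minimal Python illustration with invented numbers:

```python
def fixed_effect_meta(estimates, variances):
    """Inverse-variance (fixed-effect) pooling: weight each study by the
    reciprocal of its variance. The pooled variance is smaller than any
    single study's, but the result is only unbiased if every study is."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three noisy, unbiased log-OR estimates scattered around a true value of 0.5
est, pooled_var = fixed_effect_meta([0.4, 0.6, 0.5], [0.04, 0.09, 0.05])
```

If every study instead carried the same positive bias, the pooled estimate would simply reproduce that bias with misleadingly high precision, which is exactly Austin's concern.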
> That said, I am wondering if some kind of sign test (see -help
> signtest-) might not be a good way forward, e.g. count the number of
> studies with s1>0 (where s1 is the coef on category 1, low dose
> smoker, with the reference category nonsmoker represented by the
> zero--could also omit comparisons with zero) and s1<0, those with
> s2>s1 and with s2<s1, those with s2>0 and s2<0, etc. Under the null of
> no trend, all of these events have probability one half. A similar
> approach would be to use -nptrend oddsrat, by(category)-, I think.
> Does that make sense in this context? This is very far afield from my
> own field...
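Austin's counting idea amounts to a binomial (sign) test. A rough Python sketch under the null that each within-study comparison is a fair coin (the counts below are invented for illustration; in Stata, -signtest- would do this directly):

```python
from math import comb

def sign_test_p(n_pos, n_neg):
    """Two-sided sign test: under the null of no trend, each study is as
    likely to show s2 > s1 as s2 < s1, so the count of 'positive' studies
    is Binomial(n, 1/2). Ties (s2 == s1) are dropped beforehand."""
    n = n_pos + n_neg
    k = min(n_pos, n_neg)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n  # P(X <= k)
    return min(1.0, 2 * tail)

# Say 9 of 10 studies show a larger OR in the higher dose category
p = sign_test_p(9, 1)  # about 0.021
```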
> On Dec 21, 2007 4:59 PM, Tom Trikalinos <firstname.lastname@example.org> wrote:
> > Hi Vijay,
> > I'm not clear what form your data have.
> > Let me say right off the bat that such a question (dose-response)
> > could be evaluated on a qualitative basis (with a plot) without giving
> > a p-value. This should not be discarded as an option, given the strong
> > assumptions that are inherent in the data-abstraction process during
> > meta-analysis.
> > That being said, see whether the paper by Jesse Berlin et al.,
> > "Meta-analysis of epidemiologic dose-response data," Epidemiology
> > 1993 May;4(3):218-28 (PMID: 8512986), helps.
> > Two general comments:
> > A. First pooling the ORs across the 10 studies and then testing for a
> > trend (e.g. with a meta-regression) is subject to Simpson's paradox.
> > This is why you need an approach like the one described in the cited
> > paper.
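Tom's Simpson's paradox warning is easy to demonstrate with the classic kidney-stone data (often attributed to Charig et al. 1986), where the within-stratum comparison reverses when strata are pooled. A small Python check:

```python
def rate(successes, total):
    """Crude success rate for one arm."""
    return successes / total

# Classic kidney-stone data: treatment A beats B within each stone-size
# stratum, yet B looks better once the strata are naively pooled.
a_small, b_small = (81, 87), (234, 270)    # small stones
a_large, b_large = (192, 263), (55, 80)    # large stones

assert rate(*a_small) > rate(*b_small)     # A better on small stones
assert rate(*a_large) > rate(*b_large)     # A better on large stones

a_all = (81 + 192, 87 + 263)
b_all = (234 + 55, 270 + 80)
# Pooled comparison reverses: B now appears better overall.
```

The same mechanism can distort a trend computed on ORs summed across heterogeneous studies, which is why a study-level dose-response approach is needed.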
> > B. I'm not sure how -metap- could help you. -metap- performs a
> > meta-analysis of significance levels; this is a whole family of
> > non-parametric methods, of which -metap- implements three.
> > This is essentially an omnibus test and therefore does not have the
> > interpretation you would wish. A meta-analysis of p-values asks: is
> > there evidence of significant deviation from the null in AT LEAST ONE
> > of the studies? A typical misinterpretation of -metap- results is that
> > this is the p-value for the overall summary effect.
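To make the omnibus point concrete: Fisher's method is one classic way of combining p-values (whether -metap- uses exactly this variant is not claimed here). A Python sketch with invented p-values, showing that a single strongly significant study can drive the combined result:

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: under the joint null, -2 * sum(ln p_i) follows a
    chi-square distribution with 2k degrees of freedom. It is an omnibus
    test: it asks whether AT LEAST ONE study deviates from the null, not
    what the p-value of a pooled summary effect is."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function; even df (2k) has a closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, tail = 1.0, 0.0
    for i in range(k):
        tail += term
        term *= (stat / 2.0) / (i + 1)
    return math.exp(-stat / 2.0) * tail

# One strongly significant study among otherwise null results is enough
# for the combined test to reject: that is the omnibus interpretation.
p = fisher_combined_p([0.001, 0.6, 0.5, 0.7])  # roughly 0.03
```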
> > hope these thoughts help
> > tom
> > On Dec 21, 2007 3:52 PM, Jayaprakash, Vijay
> > <Vijay.Jayaprakash@roswellpark.org> wrote:
> > > Hi Stata users,
> > > I'm trying to do a meta-analysis of data from 10 different studies. The smoking variable in each study is stratified into categories (Category 1: 1-10 cigarettes, Category 2: 10-20 cigs, Category 3: 20-30 cigs, etc.). I calculated the OR for each category of smoking by study. I then did a meta-analysis and calculated the OR for each category with a random-effects model using the meta command:
> > > meta or1 lcl1 ucl1, ci eform
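For readers following along outside Stata, the random-effects pooling that -meta or1 lcl1 ucl1, ci eform- performs can be approximated with a DerSimonian-Laird sketch in Python. The ORs and CIs below are invented, and Stata's own -meta- should be preferred for real analyses:

```python
import math

def dl_random_effects(ors, lcls, ucls):
    """DerSimonian-Laird random-effects pooling from per-study ORs and
    95% CIs: work on the log scale, back out each SE from the CI width,
    estimate between-study variance tau^2 from Cochran's Q, then pool
    with inverse-variance weights 1 / (v_i + tau^2)."""
    logs = [math.log(o) for o in ors]
    variances = [((math.log(u) - math.log(l)) / (2 * 1.96)) ** 2
                 for l, u in zip(lcls, ucls)]
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))
    k = len(ors)
    tau2 = max(0.0, (q - (k - 1)) /
               (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Invented per-study ORs with 95% CI bounds
or_hat, lo, hi = dl_random_effects([1.5, 2.0, 1.2],
                                   [1.0, 1.3, 0.8],
                                   [2.25, 3.1, 1.8])
```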
> * For searches and help try:
> * http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/