The Stata listserver

st: Fw: statalist-digest V4 #1543


From   "Arnold Levinson" <[email protected]>
To   "Statalist" <[email protected]>
Subject   st: Fw: statalist-digest V4 #1543
Date   Sun, 7 Mar 2004 11:30:33 -0700

Karen,
One way to approach this problem is to investigate how much the complex-sample
weights [pweight=varname] influence the model coefficients. You can do this by
running -logistic ..., robust cluster(clusname)-. (The robust/cluster option is
needed only if you have cluster-sampled data.) Run the model with and without
[pweight], and compare the coefficients between the weighted and unweighted
models. If, as is often the case, the two are close (there is no rule for
"close"; it is a subjective judgment based on the end use of your model), you
might consider using unweighted logistic modeling to obtain an approximation
of R^2.
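
To make that concrete, here is a minimal sketch of the comparison, assuming a
binary outcome y, covariates x1-x3, a sampling weight wtvar, and a cluster
identifier psu (all placeholder names for your own variables):

    * weighted model (robust/cluster needed only for cluster-sampled data)
    logistic y x1 x2 x3 [pweight=wtvar], robust cluster(psu)
    estimates store weighted

    * unweighted model for comparison
    logistic y x1 x2 x3
    estimates store unweighted

    * coefficients side by side
    estimates table weighted unweighted, b se

    * if the two sets of coefficients are close, the unweighted fit's
    * pseudo-R2 can serve as a rough approximation
    quietly logit y x1 x2 x3
    display e(r2_p)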

This approach requires ignoring the influence of sampling strata (if any).
Stratified sampling usually reduces the variance of descriptive statistics, but
I suppose the effect on regression models could depend on interactions between
the strata and the model variables, so you might want to test for that
possibility.
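
One rough way to check, sketched here with a hypothetical stratum identifier
stratum and the same placeholder variables as above, is to interact the strata
with the model covariates and test the interaction terms jointly (factor-variable
syntax shown; older Stata versions would need -xi- instead):

    * joint test of stratum-by-covariate interactions; a significant
    * result suggests the strata should not simply be ignored
    logit y c.x1 c.x2 i.stratum i.stratum#c.x1 i.stratum#c.x2
    testparm i.stratum#c.x1 i.stratum#c.x2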

Obviously, this strategy blends conventional statistical approaches with
subjective judgment. Is this acceptable? To me, the decision depends on the
context in which the results will be used. Can you publish results based on
this strategy? I'm skeptical but would be interested in hearing what others
think. Can you use the results to guide program or policy decisions? I'd be
inclined to say yes, but again would be interested in other opinions.

Arnold H. Levinson, Ph.D.
Associate Scientist
AMC Cancer Research Center
Assistant Professor
University of Colorado School of Medicine
303-777-8801
[email protected]

> Date: Sat, 06 Mar 2004 03:05:54 -0800
> From: "Karen Matsuoka" <[email protected]>
> Subject: st: hypothesis testing/R2 with svy commands
>
> Dear Statalist:
>
> I'm trying to test a hypothesis by looking at how much more variance is
> explained (measured by increased R-squared) when I add test variables to my
> model. Because I'm using a dataset with complex sampling weights, I need to
> use the survey (svy) commands.
>
> So I am using the svylogit command for all my multivariate analyses on
> dichotomous outcome variables. But svylogit does not produce an R2 value. Is
> there another command that will produce this R2 value for me? Or does Stata
> have another measure--other than R2--that measures how much variance a model
> explains when using complex survey data? (I believe the logit command *does*
> return an R2 result, so I'm not sure why svylogit doesn't. Is there
> something about complex survey data that makes R2 values inappropriate?)
>
> My understanding is that the svytest command will not do this for me (i.e.
> it can be used as a partial F-test to test the significance of individual
> variables or groups of dummy variables, but not for whole models).
>
> I've received all of my stats training in SPSS (I know, I know... boo hiss!)
> so forgive me for my ignorance! I've looked over the manuals and the
> Statalist archives but can't find what I'm looking for.
>
> Thanks!
>
> Karen

*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


