From: Nick Cox <n.j.cox@durham.ac.uk>
To: "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject: st: RE: calculating effect sizes when using svy command
Date: Tue, 7 Dec 2010 20:36:18 +0000
On #2, and setting aside any -svy- complications: if you want a correlation, go straight to -correlate-. In any case, the correlation is the same regardless of which variable is treated as the response.

Nick
n.j.cox@durham.ac.uk

Vogt, Dawne

I have two questions related to calculating effect sizes using svyreg (pweights).

First, when doing unweighted regressions in SPSS, I like to provide effect sizes for each predictor by calculating a correlation coefficient (r) from the t values provided in the output. I like using r because it is easy for most people to interpret. Can I do the same using svyreg output?

My second question is related to the first. Since there is no correlation option under the svy commands, I have been computing regressions of Y on X and of X on Y and using the larger p-value of the two sets of results, as recommended elsewhere. I am having trouble figuring out how to convert the results provided in the output to a correlation coefficient, though. I noticed that the r value I get by taking the square root of the R-squared differs from my own hand calculation of r derived from the t value provided in the regression output [sqrt(t^2 / (t^2 + df))]. I am not sure which r is correct (or whether either of them is correct).

Thanks in advance for any guidance others may be able to offer.

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
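A minimal sketch of the arithmetic discussed above, using Stata's built-in auto data; mpg and weight are only stand-ins for the poster's Y and X, and none of this is from the original thread:

sysuse auto, clear

* -correlate- is symmetric: the same r whichever variable is viewed as the response
correlate mpg weight

* unweighted simple regression: r recovered from the t statistic equals sqrt(R-squared)
regress mpg weight
local t = _b[weight]/_se[weight]
local df = e(df_r)
display "r from t = " sqrt(`t'^2/(`t'^2 + `df')) "   sqrt(R2) = " sqrt(e(r2))

* under -svy- with pweights the variance estimator differs, so the t-based
* calculation and sqrt(e(r2)) need not agree, which is the discrepancy described above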