



RE: st: Comparing regression coefficients


From   "Mahometa, Michael J" <michael.mahometa@ssc.utexas.edu>
To   "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject   RE: st: Comparing regression coefficients
Date   Thu, 28 Feb 2013 14:31:58 +0000

Hi all,

First - thanks for the responses so far.

Let me give a little more background.
I'm using the same subjects for both models. The dependent variable is a behavioral activation score (the same in both models). I've also got some demographic covariates in the model (again, these stay the same). My variable of interest is a genetic score: in the first model, three indicators go into the genetic score; in the second model, seven indicators go into it (four new, and three that carry over).

The genetic score cannot be broken down into its component parts - otherwise, I would simply have a nested model and run a sequential regression.

My basic question is: which genetic score (the three-indicator or the seven-indicator version) does a better job of predicting behavioral activation? In this case "better" can be anything. I'd love to be able to say there is a significant increase in R2 (as I can for nested models), but I honestly don't know how to test that here. I've looked at -suest- followed by a -test- command, but I don't know if that's appropriate (see the completely made-up code below):

regress beh cov1 cov2 cov3 gs3
estimates store m1
regress beh cov1 cov2 cov3 gs7
estimates store m2

suest m1 m2
test [m1_mean]_b[gs3] = [m2_mean]_b[gs7]

Does that seem even remotely appropriate? I *think* I'm asking for a test of equality between the two coefficients - but I honestly don't know (nor can I find) what the test is actually comparing (or how the comparison is made), which I would need in order to justify it.
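For reference, here is what the nested-model "significant increase in R2" test mentioned above looks like; it doesn't apply directly to the gs3-vs-gs7 comparison, because neither score model nests the other. This is a minimal Python sketch (not Stata) on simulated stand-in data - all variable names are hypothetical:

```python
import numpy as np

def r2(y, X):
    """R^2 from an OLS fit of y on X (an intercept column is added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

# Simulated stand-in data (hypothetical, for illustration only).
rng = np.random.default_rng(1)
n = 200
covs = rng.normal(size=(n, 3))        # cov1..cov3
gs3 = rng.normal(size=n)              # the 3-indicator score
extra = rng.normal(size=n)            # a hypothetical added predictor
beh = covs @ [0.3, 0.2, 0.1] + 0.5 * gs3 + 0.4 * extra + rng.normal(size=n)

# Restricted model vs. full model (the restricted model is nested in the full one).
r2_r = r2(beh, np.column_stack([covs, gs3]))
r2_f = r2(beh, np.column_stack([covs, gs3, extra]))

# F statistic for the increase in R^2: q predictors added,
# k_full parameters in the full model (5 slopes + intercept).
q = 1
k_full = 6
F = ((r2_f - r2_r) / q) / ((1 - r2_f) / (n - k_full))
# Compare F against the F(q, n - k_full) distribution for a p-value.
```

The key requirement is that the restricted model's predictors are a subset of the full model's; with two different composite scores that can't be decomposed, that requirement fails, which is the whole difficulty here.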

Thanks,
Michael



-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of JVerkuilen (Gmail)
Sent: Thursday, February 28, 2013 6:28 AM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Comparing regression coefficients

On Wed, Feb 27, 2013 at 5:54 PM, Mahometa, Michael J <michael.mahometa@ssc.utexas.edu> wrote:
> Hi all,
>
> The *only* thing different is the use of variable2 over variable1 as a predictor in the model - same outcome, same covariates. Is there a way to compare the impact of variable1 to variable2? Why am I drawing a blank? Is it as simple as saying "the R2 is better for the second model, so variable 2 is better?"
>

I don't think there's an inherently obvious answer in this kind of situation, but that said I would suggest something such as:

(1) Regress outcome, variable1, and variable2 each separately on the demographics and generate residuals.
(2) Compare the R^2 of residual outcome regressed on residual variable1 and on residual variable2.

What this does is compare the resulting squared partial correlations, having removed the demographics. However, variable1 and
variable2 may themselves be correlated, so the R^2 from each regression still contains common variance that you haven't accounted for.

(This assumes the relationships are linear, which they may not be.)
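The two steps above can be sketched numerically. This is an illustrative Python sketch (not Stata; all names and data are made up), using plain least squares to residualize on the demographics and then comparing the squared partial correlations:

```python
import numpy as np

def residualize(y, X):
    """Residuals from an OLS regression of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def r2(y, x):
    """R^2 from a simple regression of y on a single predictor x."""
    resid = residualize(y, x)
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Simulated stand-in data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
n = 300
demo = rng.normal(size=(n, 2))                      # demographic covariates
var1 = demo @ [0.4, 0.2] + rng.normal(size=n)
var2 = 0.6 * var1 + rng.normal(size=n)              # correlated with var1
outcome = demo @ [0.5, 0.3] + 0.7 * var1 + 0.2 * var2 + rng.normal(size=n)

# Step (1): residualize everything on the demographics.
r_out = residualize(outcome, demo)
r_v1 = residualize(var1, demo)
r_v2 = residualize(var2, demo)

# Step (2): the R^2 of each residual regression is the squared
# partial correlation of that variable with the outcome.
pr2_v1 = r2(r_out, r_v1)
pr2_v2 = r2(r_out, r_v2)
```

Note that in this simulated setup var1 drives the outcome more strongly, so its squared partial correlation comes out larger - but, as the caveat above says, the two predictors share variance, so the comparison is descriptive rather than a formal test.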



--
JVVerkuilen, PhD
jvverkuilen@gmail.com

"It is like a finger pointing away to the moon. Do not concentrate on the finger or you will miss all that heavenly glory." --Bruce Lee, Enter the Dragon (1973)

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/


