Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down at the end of May, and its replacement, **statalist.org**, is already up and running.


From: "Nick Cox" <n.j.cox@durham.ac.uk>
To: <statalist@hsphsun2.harvard.edu>
Subject: RE: st: RE: Interpretation of quadratic terms
Date: Tue, 9 Mar 2010 12:19:36 -0000

Looking at residuals too is a good idea, but what I had in mind was just plotting the predicted values. Here's a dopey example (in this case the quadratic is not a good idea, but the example is just to show technique).

sysuse auto
gen weight2 = weight^2
logit foreign weight weight2
predict predict
scatter foreign weight || mspline predict weight, bands(200)

The last two commands could be replaced by

regplot

where -regplot- can be downloaded from the -modeldiag- package on SJ.

gen weight2 = weight^2
logit foreign weight weight2

The second and third commands, repeated just above, could be combined in Stata 11 using factor variable notation:

logit foreign weight c.weight#c.weight

OR

logit foreign c.weight##c.weight

Nick
n.j.cox@durham.ac.uk

Rosie Chen

Nick, thank you for the guidance. The model I am estimating is a logistic regression. What I did to check the plot was to save the residuals of the model and then plot the standardized residuals against the predictor. I didn't really find a curvilinear relationship. Is there anything wrong with the way I plotted the residuals? If not, then why does the inclusion of a quadratic term actually improve the model fit when I compare models using the -2 log-likelihood? In addition, the predictor that was nonsignificant in its original form turned out to be significant once the quadratic term was included. Your further advice would be appreciated.

Nick Cox <n.j.cox@durham.ac.uk>

I don't know what kind of guidance you need, but the first step is surely to plot this curve and think about its substantive interpretation within the entire range of the data. That should include bringing in whatever science is behind this analysis.

Rosie Chen

I have a question regarding how to interpret quadratic terms in regression, and would appreciate your help very much. Because of the non-linear nature of the relationship between X and Y, I need to include quadratic terms in the model. To avoid a multicollinearity problem between the original variable and its quadratic term, I centered the variable first (X) and then created the squared term (Xsq). The model with the quadratic term (Xsq) proved to be significantly better. Suppose the output is like the following (both coefficients are significant); how should the results be interpreted, given that the two signs are opposite?

y = a + 1.3*X - 0.2*Xsq + e

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
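On the interpretation question itself: with a centered X and coefficients of opposite sign, y = a + 1.3*X - 0.2*Xsq traces an inverted U. The marginal effect is dy/dX = 1.3 - 0.4*X, which is positive below X = 1.3/(2*0.2) = 3.25 (in centered units, i.e. deviations from the mean of X), zero at that turning point, and negative above it. A minimal sketch of this arithmetic, in Python rather than Stata purely for illustration, taking the quoted equation at face value (in the actual logit model the same reasoning applies on the log-odds scale):

```python
# Interpreting y = a + 1.3*X - 0.2*X^2, where X is the centered predictor.
# b1 and b2 are the coefficients quoted in the question; 'a' drops out of the slope.
b1, b2 = 1.3, -0.2

def slope(x):
    """Marginal effect dy/dX = b1 + 2*b2*X."""
    return b1 + 2 * b2 * x

# The slope is zero at the vertex of the parabola, X = -b1 / (2*b2).
turning_point = -b1 / (2 * b2)
print(turning_point)   # 3.25 centered units above the mean of X
print(slope(0.0))      # 1.3: at the mean of X, y rises 1.3 per unit of X
print(slope(5.0))      # negative: beyond the turning point, y declines
```

Whether 3.25 centered units actually lies inside the observed range of X is exactly the substantive check Nick recommends: if the turning point falls outside the data, the fitted relationship is effectively monotonic over the observed range, even though the quadratic term is significant.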

**References:**

- **st: Interpretation of quadratic terms** *From:* Rosie Chen <jiarongchen2002@yahoo.com>
- **st: RE: Interpretation of quadratic terms** *From:* "Nick Cox" <n.j.cox@durham.ac.uk>
- **Re: st: RE: Interpretation of quadratic terms** *From:* Rosie Chen <jiarongchen2002@yahoo.com>
