Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down at the end of May, and its replacement, **statalist.org**, is already up and running.


From: SURYADIPTA ROY <sroy9163@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: r-square in -betafit-
Date: Sun, 20 Jun 2010 13:46:12 -0400

Dear Maarten and Nick,

I had a couple of questions relating to my previous post (and your replies) on the goodness of fit of -betafit- and its comparison with -regress-, and would very much appreciate your feedback.

1. Given that these are not nested models, is it justified to run -lrtest- with the -force- option? For example, here are the results of my -lrtest-, where A and B are the stored estimates from -betafit- and -regress- respectively:

    lrtest A B, force

    Likelihood-ratio test                 LR chi2(1)  =    297.29
    (Assumption: B nested in A)           Prob > chi2 =    0.0000

On the basis of the above, is it justified to reject the OLS model in favor of the -betafit- model?

2. Can I compare the (log) likelihood values of the two regressions and choose one over the other based on the higher value of the (log) likelihood function? E.g., for -betafit-, Log pseudolikelihood = 404.77487, while for OLS, e(ll) = 256.1320386923957. Can we therefore say that -betafit- fits better than OLS?

3. Can we directly compare the AIC and BIC values for the two regressions, given that they are not nested in one another? The (AIC, BIC) values for the -betafit- model are (-773.5497, -729.3715), and those for OLS are (-478.2641, -436.5402).

Your advice and suggestions have been extremely helpful and are highly appreciated.

Thanks,
Suryadipta.

On Fri, Jun 18, 2010 at 8:10 AM, SURYADIPTA ROY <sroy9163@gmail.com> wrote:
> Dear Maarten and Nick,
>
> Thank you so much for these invaluable comments and suggestions!
> Regards,
> Suryadipta.
>
> On Fri, Jun 18, 2010 at 4:35 AM, Nick Cox <n.j.cox@durham.ac.uk> wrote:
>> I am another co-author of -betafit- (SSC) and author of the FAQ referred to.
>>
>> I see no great harm in computing an R-square measure as an extra descriptive measure. How useful and reliable it is will depend on the science of what you are doing and how far it makes sense as a summary, which is best judged graphically by considering a plot of observed vs fitted.
>> Wanting to go further, if you do, in terms of formal inference with R-square would in my judgement be a bad idea. As Maarten indicates, the machinery supplied by -betafit- is superior for that purpose.
>>
>> Nick
>> n.j.cox@durham.ac.uk
>>
>> Maarten buis wrote:
>>
>> --- On Fri, 18/6/10, SURYADIPTA ROY wrote:
>>> The -betafit- option does not supply a value of r-square or
>>> similar measure of goodness of fit.
>>
>> It gives you the log likelihood, which means that for model
>> comparison you can use likelihood-ratio statistics or AICs
>> or BICs.
>>
>>> I actually followed this FAQ:
>>> http://www.stata.com/support/faqs/stat/rsquared.html
>>> and implemented the procedure as suggested by Nick.
>>> It would have been very helpful to get some suggestions on
>>> whether this procedure can be relied upon in this case, and
>>> whether the calculated r-square here can be compared with
>>> the OLS r-squared (say).
>>
>> I would in that case rely more on comparing AICs and BICs
>> (which are also available after -regress-).
>>
>>> Also, it would have been very helpful to get some help in
>>> understanding the difference between the results for
>>> -proportion- and -xb- following -predict- after -betafit-,
>>> since the mean of the linear prediction (xb = -5.38) is
>>> found to be wildly beyond (0,1), while the mean of the
>>> default (i.e. the proportion) is found to be very close to
>>> the average value of the dependent variable (0.01 vs 0.007).
>>
>> What -betafit- does is model the mean of the dependent variable
>> as invlogit(xb), where xb is the linear predictor and
>> invlogit(xb) is the predicted probability. invlogit(xb) is the
>> function exp(xb)/(1+exp(xb)). So typically what you are
>> interested in is the predicted proportion rather than the
>> linear predictor.
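Maarten's description of the inverse-logit link can be illustrated numerically. A minimal sketch in Python rather than Stata (the function name `invlogit` simply mirrors the thread's notation); note that the mean of the transformed predictions need not equal the transform of the mean linear predictor, so this only shows the scale, not the exact 0.01 figure quoted:

```python
import math

def invlogit(xb):
    # invlogit(xb) = exp(xb) / (1 + exp(xb)); maps any real xb into (0, 1)
    return math.exp(xb) / (1.0 + math.exp(xb))

# The thread quotes a mean linear prediction of xb = -5.38, far outside (0, 1);
# its inverse-logit transform is a proportion on the same scale as the data:
print(round(invlogit(-5.38), 4))  # 0.0046
```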
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
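As an arithmetic check on the figures quoted in the thread, the reported likelihood-ratio statistic is simply twice the gap between the two log likelihoods. A quick sketch in Python (values copied from the message above; whether the comparison is valid for non-nested models is the substantive question, not settled by this arithmetic):

```python
# LR chi2 = 2 * (ll_unrestricted - ll_restricted)
ll_betafit = 404.77487          # -betafit- log pseudolikelihood (from the thread)
ll_ols = 256.1320386923957      # OLS e(ll) (from the thread)

lr_chi2 = 2 * (ll_betafit - ll_ols)
print(round(lr_chi2, 2))  # 297.29, matching the reported LR chi2(1)
```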

**Follow-Ups**:
- **Re: st: r-square in -betafit-** *From:* Maarten buis <maartenbuis@yahoo.co.uk>
- **Re: st: r-square in -betafit-** *From:* Maarten buis <maartenbuis@yahoo.co.uk>

**References**:
- **st: r-square in -betafit-** *From:* SURYADIPTA ROY <sroy9163@gmail.com>
- **Re: st: r-square in -betafit-** *From:* Maarten buis <maartenbuis@yahoo.co.uk>
- **RE: st: r-square in -betafit-** *From:* "Nick Cox" <n.j.cox@durham.ac.uk>
- **Re: st: r-square in -betafit-** *From:* SURYADIPTA ROY <sroy9163@gmail.com>
