
Re: st: RE: new package margdistfit available on SSC


From   Maarten Buis <[email protected]>
To   [email protected]
Subject   Re: st: RE: new package margdistfit available on SSC
Date   Fri, 18 Nov 2011 17:31:04 +0100

On Fri, Nov 18, 2011 at 4:43 PM, Austin Nichols wrote:
> This is an interesting exercise, though I would think only relevant
> for ML since no theoretical distribution is assumed for OLS etc.

Sure, I initially wrote it for -betafit- (available from SSC), which
fits a beta distribution by ML. The main reason for including linear
regression is didactic: more people are familiar with linear
regression and the normal distribution than with beta regression and
the beta distribution. As you remarked, there is a risk attached to
that strategy in that users may over-interpret the graphs in the case
of linear regression. Still, I do believe that even with linear
regression it offers a useful view of the data and the model, in that
the normal distribution is a useful baseline. Deviations from it can
point to interesting, unusual, disturbing or puzzling patterns in the
data. I often find it useful to know that such patterns exist in my
data, even when I do not need to do anything about them.
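A minimal sketch of that use (the variables are only an illustration,
and the exact syntax and plot options should be checked against the
help file):

    * sketch: fit a linear regression, then compare the marginal
    * distribution implied by the normal model with the data
    sysuse auto, clear
    regress price mpg weight foreign
    margdistfit      // see -help margdistfit- for the available plots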

> Minor points:
>
> 1. A parametric regression typically does not allow parameters to
> change as X changes, contrary to your text describing the command:
<snip>

I see how what I wrote could be interpreted in the way you did.
However, when I wrote that I was thinking in terms of a distribution
rather than a regression: the parameters in that case are the mean and
standard deviation, or some other parameters such as the scale and
shape parameters of the beta distribution, not the regression
coefficients. I need to make that clearer in my text.
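To illustrate the distinction: under linear regression each
observation follows a normal distribution whose mean changes with the
covariates, so the marginal distribution implied by the model is a
mixture of normals. A rough sketch of that idea (only an illustration,
not necessarily how -margdistfit- computes it):

    * after -regress-, the model says y_i ~ N(x_i'b, e(rmse)^2);
    * one simulated draw per observation approximates the implied
    * marginal distribution, which can be set against the observed y
    sysuse auto, clear
    regress price mpg weight
    predict double xb, xb
    gen double ysim = rnormal(xb, e(rmse))
    qqplot ysim price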

> 2. What effect do heteroskedasticity or clustering of errors have on
> your examples?   Must you assume i.i.d. errors?

Heteroskedasticity can be accommodated if it is explicitly modeled.
For example, in -betafit- one can let the variance depend on
covariates by adding those covariates to the -phivar()- option. When
one has asked for robust standard errors (and thus also in the case of
clustered standard errors), one has already relaxed the distributional
assumptions. So in that case the theoretical distribution with which
the empirical distribution is compared only represents a useful
baseline rather than a hard assumption. I still have to think a bit
about the consequences of clustering.
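A minimal sketch of what that could look like (the -muvar()- option
name and the census example are only illustrative and should be
checked against -help betafit-; the outcome must lie strictly between
0 and 1):

    * sketch: let both the mean and the variance of the beta
    * distribution depend on a covariate, then inspect the fit
    sysuse census, clear
    gen propurban = popurban/pop     // proportion urban, in (0,1)
    betafit propurban, muvar(medage) phivar(medage)
    margdistfit      // see -help margdistfit- for the available plots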

> 3. The link to "helpfile" at http://www.maartenbuis.nl/software/margdistfit.html
> pointing to http://repec.org/bocode/m/margdistfit.html
> seems to be broken.

Thanks, I will look into it.

Thanks for your comments,
Maarten

--------------------------
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Germany


http://www.maartenbuis.nl
--------------------------


