Statalist



RE: RE : Heteroskedasticity and fixed effects (was: st: RE: Re: Weak instruments)


From   "Schaffer, Mark E" <[email protected]>
To   <[email protected]>
Subject   RE: RE : Heteroskedasticity and fixed effects (was: st: RE: Re: Weak instruments)
Date   Thu, 17 Jul 2008 17:43:26 +0100

Maarten,

> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of 
> Maarten buis
> Sent: 17 July 2008 16:58
> To: [email protected]
> Subject: RE: RE : Heteroskedasticity and fixed effects (was: 
> st: RE: Re: Weak instruments)
> 
> I was responding to the statement that "In practice, it just 
> makes more sense to always use robust standard errors." I 
> would guess that your category 3 is a relatively rare 
> category and would certainly not warrant such general 
> advice.

If you mean this literally, then all of us empiricists are in deep trouble, since it implies that we are usually in category 1 or 2.  I would guess you're implying that category 2 is also relatively rare, so by implication we're usually in category 1 - the model is seriously wrong.  But I think category 3 is pretty common.

Often we start out with a model in category 1 (the model has some serious problems).  We work on it; sometimes we never get out of category 1; sometimes we get all the way to category 2 (we think we've got it right, including the form of the var-cov matrix); and sometimes we get to category 3 (we think the model is mostly OK but lack confidence in the classical var-cov matrix).  (I'm ignoring another category, namely that we think the model is mostly OK but lack confidence in some other aspect of it besides the var-cov matrix.)

Indeed, sometimes we're in category 3 but we're quite sure we can't get to category 2.  An example is a linear model with time series data, heteroskedasticity and autocorrelation of unknown form, and no strict exogeneity.  GLS will be inconsistent, so we're stuck with OLS: we can't use classical SEs, but heteroskedasticity- and autocorrelation-robust (HAC) SEs are consistent.  (See Hayashi 2000, Econometrics, pp. 415-417.)
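
For concreteness, here is a minimal Stata sketch of that last case (the variable names y, x1, x2, the yearly time variable, and the lag length are hypothetical, chosen only for illustration):

  * declare the time-series structure (required by -newey-)
  tsset year

  * OLS with classical SEs: invalid under heteroskedasticity and autocorrelation
  regress y x1 x2

  * OLS with heteroskedasticity-robust (but not autocorrelation-robust) SEs
  regress y x1 x2, vce(robust)

  * OLS point estimates with Newey-West HAC SEs; the lag length is a judgment call
  newey y x1 x2, lag(4)

The coefficient estimates are identical across the three commands; only the estimated var-cov matrix, and hence the SEs, changes.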

The general motivation behind the advice seems reasonable - why assume something that you don't have to assume, if making the assumption is potentially costly and relaxing the assumption is nearly costless?  Of course, this doesn't excuse us from the responsibility of giving our model "a good long look", but we should do that anyway.

Cheers,
Mark

> Basically, my point is a variation on the advice that 
> whenever one thinks one has found heteroskedasticity one 
> should first take a good long look at whether the model is 
> correctly specified, instead of jumping directly to WLS, 
> robust standard errors, or other such methods.
> 
> -- Maarten
> 
> --- Gaulé Patrick <[email protected]> wrote:
> > > > In both cases, where is the harm in using robust standard 
> > > > errors and what's the point of testing for heteroskedasticity?
> 
> --- Maarten buis wrote:
> > > The harm comes from making people feel more secure about their 
> > > results than they should be.  The point made by Freedman is that 
> > > it is not going to do them any good, but the name -robust- merely 
> > > suggests that they are somehow protected against all kinds of 
> > > evils.
> 
> -- "Schaffer, Mark E" <[email protected]> wrote:
> > You don't mean this literally, right?  For example, if you think a 
> > linear model is reasonable and you want to use OLS, but you don't 
> > want to rely on more assumptions than you really need, then using 
> > OLS + heteroskedastic-robust standard errors (instead of OLS + 
> > classical SEs) can't hurt and - if heteroskedasticity is actually 
> > present - could help.  This counts as "doing them some good", I 
> > think.
> > 
> > Or to repeat Patrick's points 1 and 2, and to make explicit the 
> > implicit point 3:
> > 
> > 1) If the model is seriously in error, robustifying will not help 
> > you get better estimates of the coefficients, and getting the 
> > standard errors right is irrelevant.
> > 
> > 2) If the model is nearly correct, robustifying makes virtually no 
> > difference.
> > 
> > 3) If the model is mostly correct, but the assumption of 
> > homoskedasticity is implausible, undesirable, or unsupported, then 
> > robustifying helps.
> 
> 
> 
> -----------------------------------------
> Maarten L. Buis
> Department of Social Research Methodology
> Vrije Universiteit Amsterdam
> Boelelaan 1081
> 1081 HV Amsterdam
> The Netherlands
> 
> visiting address:
> Buitenveldertselaan 3 (Metropolitan), room Z434
> 
> +31 20 5986715
> 
> http://home.fsw.vu.nl/m.buis/
> -----------------------------------------
> 
> 
> 


-- 
Heriot-Watt University is a Scottish charity
registered under charity number SC000278.


*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


