RE: st: RE: Hausman-Taylor


From   "Schaffer, Mark E" <[email protected]>
To   <[email protected]>
Subject   RE: st: RE: Hausman-Taylor
Date   Tue, 2 May 2006 23:32:35 +0100

Rodrigo, Julia,

> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of 
> Rodrigo A. Alfaro
> Sent: 02 May 2006 22:16
> To: [email protected]
> Subject: Re: st: RE: Hausman-Taylor
> 
> Mark
> 
> I rechecked my comments and found that you are right. GLS is
> still a consistent (though inefficient and noisy) estimator under
> het./autocorrelation, so your solution is valid. But let me compare
> the procedures in words: (1) yours controls for the RE (with the
> wrong weights), then applies IV to obtain the third-round
> coefficients and uses robust std. errors; and (2) mine stops HT
> at step 2 and uses robust std. errors (keeping in mind the fact
> that the IV coefficients were obtained in a second round). After
> our discussion, both procedures are consistent, but numerically
> they will give us different results for the coefficients as well
> as the std. errors. Very interesting. I have two more comments
> about your procedure: (1) it needs (the same as HT) some (extra)
> exogeneity in the time-varying variables (to do the last IV
> procedure), and (2) it adds some extra noise to the variables by
> computing a wrong GLS factor.

Rodrigo - we don't know which estimator is more "noisy".  It all depends
on the het./AC.  If, say, heteroskedasticity exists but is very small,
then HT will be almost efficient and probably better than stopping after
step 2.  If the heteroskedasticity is huge, then all bets are off and
stopping after step 2 is probably a better idea.  But we should continue
this off-list.
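
Before we do: for Julia's benefit, here is a minimal sketch of
procedure (1) in Stata.  All names are hypothetical: y is the
outcome; x1 (exogenous) and x2 (endogenous) are time-varying; z1
(exogenous) and z2 (endogenous) are time-invariant; i is the panel
id; and a balanced panel with T periods is assumed.  Official
-xthtaylor- estimates the variance components differently, so this
illustrates the logic only, it is not a replication:

  * first round: RE, to get the (possibly "wrong") variance components
  quietly xtreg y x1 x2 z1 z2, re
  scalar T = 5                      // plug in your own panel length
  scalar theta = 1 - e(sigma_e)/sqrt(T*e(sigma_u)^2 + e(sigma_e)^2)

  * quasi-demean every variable with the estimated theta
  foreach v of varlist y x1 x2 z1 z2 {
      egen double `v'm = mean(`v'), by(i)
      gen double `v'q = `v' - theta*`v'm
  }

  * HT-style instruments: within-deviation of x2, panel mean of x1
  gen double x2d = x2 - x2m
  ivreg2 yq x1q z1q (x2q z2q = x2d x1m), cluster(i)

(-ivreg2- is on SSC; official -ivreg- with cluster() does the same
job here.)  The cluster-robust SEs in the last step stay consistent
even though theta was computed under the false i.i.d. assumption.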

Julia - did this debate help??  This was your question to start with!

Cheers,
Mark

> 
> Rodrigo.
> PS: We can continue the discussion off the list if you want.
> 
> ----- Original Message -----
> From: "Schaffer, Mark E" <[email protected]>
> To: <[email protected]>
> Sent: Tuesday, May 02, 2006 12:35 PM
> Subject: RE: st: RE: Hausman-Taylor
> 
> 
> Rodrigo,
> 
> > -----Original Message-----
> > From: [email protected]
> > [mailto:[email protected]] On Behalf Of
> > Rodrigo A. Alfaro
> > Sent: 02 May 2006 16:12
> > To: [email protected]
> > Subject: Re: st: RE: Hausman-Taylor
> >
> > Mark,
> >
> > This is a very interesting discussion. My point is that under
> > autocorrelation and/or heteroskedasticity you cannot obtain a
> > consistent estimator of the variance of the error term, and
> > therefore the GLS transformation applied in the last step of the
> > original HT is wrong. For this reason, I cannot see how the
> > coefficients of the modified HT can be consistent, given that
> > your suggestion still uses the wrong GLS transformation.
> 
> I agree, this is interesting.  But I am pretty sure that the HT
> coefficients are consistent in the presence of het. or AC.  Here
> are two reasons:
> 
> 1.  The GLS transform used is a weighted average of the within and
> between estimators (HT, p. 1381).  A weighted average of two
> consistent estimators will be consistent (except perhaps in special
> cases constructed by specialists, i.e., not me).
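> 
> In symbols (a sketch, writing \Delta for HT's matrix weight, which
> depends on the variance components, and I for the identity matrix):
> 
>   \hat{\beta}_{GLS} = \Delta \hat{\beta}_{B} + (I - \Delta) \hat{\beta}_{W}
> 
> With the "wrong" variance components, \Delta is not the efficient
> weight, but the combination remains consistent so long as both
> \hat{\beta}_{B} (between) and \hat{\beta}_{W} (within) are.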
> 
> 2.  In the standard random effects estimator, in the presence of
> het./AC, you also cannot obtain a consistent estimator for the
> variance of the error term - just as you say for HT.  The GLS
> transform applied to get the random effects estimator is therefore
> "wrong" - but only in the sense that it isn't an *efficient*
> estimator.  It's still consistent.  That's why various textbooks
> (e.g., Wooldridge 2002) point out that one can use the
> cluster-robust covariance estimate to get consistent SEs for the
> random effects estimator even in the presence of het./AC.  The same
> argument should [sic!] apply to HT, no?
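> 
> In Stata, for example (hypothetical names, panel id i; the
> vce(cluster) syntax is that of current -xtreg-):
> 
>   xtreg y x1 x2 z1 z2, re vce(cluster i)
> 
> The point estimates are the usual (inefficient but consistent) RE
> ones; the SEs are robust to arbitrary het./AC within panels.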
> 
> > Mind that the original GLS transformation uses the variance of
> > the residual as a scalar, and now it is an unknown matrix.
> >
> > As I said earlier, the coefficients from the previous steps are
> > consistent, but inefficient. Indeed, section 2.3 of the paper is
> > called "Consistent but Inefficient Estimation". I think that
> > Julia's problem can be solved by keeping the FE (time-varying
> > variables) and IV (time-invariant variables) coefficients and
> > generating a non-parametric std. error, as the Newey-West
> > procedure does.
> 
> This is a good idea.  Another way to put it would be to say that
> the last step of HT generates efficient estimators of the
> coefficients only under homoskedasticity.  If this assumption
> fails, then HT is consistent but not efficient (my point above).
> In that case, the HT approach of GLS loses its main attraction,
> and so why bother doing it - just stop at the previous stage, with
> the within and between estimators.  Julia can do this by hand.
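> 
> A minimal sketch of the by-hand version (hypothetical names again:
> x1 exogenous time-varying, z1 exogenous and z2 endogenous
> time-invariant, panel id i):
> 
>   * step 1: within (FE) estimates of the time-varying coefficients
>   xtreg y x1 x2, fe
>   predict double ui, u            // estimated individual effects
> 
>   * step 2: IV regression of the individual effects on the
>   * time-invariant variables, one observation per panel; the panel
>   * mean of x1 instruments z2; robust SEs, as Rodrigo suggests
>   egen double x1m = mean(x1), by(i)
>   bysort i: keep if _n == 1
>   ivreg2 ui z1 (z2 = x1m), robust
> 
> (-ivreg2- is on SSC; official -ivreg- with robust works too.)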
> 
> Cheers,
> Mark
> 




