



Re: st: Missing Wald test with -cluster- option


From   Austin Nichols <[email protected]>
To   [email protected]
Subject   Re: st: Missing Wald test with -cluster- option
Date   Mon, 28 Nov 2011 14:17:15 -0500

John Antonakis <[email protected]>:
Mark and I presented simulation evidence at the 2007 UK meeting:
http://repec.org/usug2007/crse.pdf
showing that a test of one coefficient with M clusters has a rejection
rate very close to the nominal rate when you include fixed effects for
clusters and use the cluster-robust SE.

We also said
"Preliminary simulations show that the rejection rate rises from 5
percent to 100
percent as the number of coefficients increases from 1 to M."
and we fully intended to publish a paper right way.  Life has
intervened, but there will be a paper (with a lot more new material)
one of these days.

Let me just interpret that finding in words, though.  If you have 50
clusters and you are testing one coef, you have 49 df, if the
clustering produces a maximal loss of information.  If you are testing
25 coefs, your rejection rate is not going to be anywhere near the
nominal rate, and if you are testing 51 coefs, you have a singular
variance matrix and the test will not work.  Of course, a test of the
"model" which has three substantive coefficients and 49 dummies for
clusters will not fly, but each one of the 3 coefs should have a
pretty good SE, and tests involving 2 of those 3 coefs will only
slightly understate the true variability of the estimates.
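To see the singularity mechanically: the cluster-robust "meat" is a sum of M per-cluster score outer products, and because OLS residuals force the scores to sum to zero, its rank is at most M - 1 — so a joint test of M or more coefficients has a singular variance matrix. A minimal numpy sketch with made-up data (an illustration, not the simulations from the 2007 slides):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_per, k = 10, 5, 15           # 10 clusters, 15 regressors: k > M
n = M * n_per
cluster = np.repeat(np.arange(M), n_per)
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta                  # OLS residuals, so X'u = 0

# cluster-robust "meat": sum over clusters g of s_g s_g', with s_g = X_g' u_g
meat = np.zeros((k, k))
for g in range(M):
    s_g = X[cluster == g].T @ u[cluster == g]
    meat += np.outer(s_g, s_g)

# only M outer products, summing to zero: rank is below k = 15,
# so a joint Wald test of all 15 coefs cannot be inverted
print(np.linalg.matrix_rank(meat))
```

The sandwich variance inherits this rank bound, which is why the joint "model" test disappears while individual-coefficient tests survive.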

But YMMV--run your own simulation on your data!  (Imputing true
effects etc.)  Or just cite the 2007 presentation.  It reads as an
elliptical working paper, anyway.
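A bare-bones version of such a simulation — a hedged numpy sketch with a hypothetical DGP (no true effect of x, a constant rather than cluster fixed effects, for brevity; not the 2007 code) — could look like:

```python
import numpy as np

rng = np.random.default_rng(42)
M, n_per, reps = 30, 20, 500      # 30 clusters of 20 obs, 500 Monte Carlo reps
n = M * n_per
cluster = np.repeat(np.arange(M), n_per)
crit = 2.045                      # ~ two-sided 5% t critical value, M - 1 = 29 df

rejections = 0
for _ in range(reps):
    # clustered DGP with NO true effect of x
    x = rng.normal(size=M)[cluster] + rng.normal(size=n)
    y = rng.normal(size=M)[cluster] + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta

    # cluster-robust sandwich variance with a CR1-style small-sample factor
    meat = np.zeros((2, 2))
    for g in range(M):
        s_g = X[cluster == g].T @ u[cluster == g]
        meat += np.outer(s_g, s_g)
    V = XtX_inv @ meat @ XtX_inv * (M / (M - 1))

    t = beta[1] / np.sqrt(V[1, 1])
    rejections += abs(t) > crit

rate = rejections / reps
print(round(rate, 3))             # rejection rate; compare with the nominal 0.05
```

Swapping in your own design (true effects, your cluster sizes, more coefficients in the joint test) shows how the rejection rate deteriorates as the number of tested coefficients approaches M.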

On Mon, Nov 28, 2011 at 9:46 AM, John Antonakis <[email protected]> wrote:
> Hi:
>
> A while back, Mark Schaffer replied to a post regarding a missing Wald test
> when using the -robust- or -cluster(id)- option, noting:
>
> http://stata.com/statalist/archive/2006-09/msg00840.html
>
> Mark suggested that tests of individual parameters can still be interpreted;
> however, joint tests (of all parameters) should not be (I guess because the
> test would not be trustworthy). If I remember correctly, he has mentioned
> this in the past, as have other posters, directly or indirectly.
>
> What I was hoping for is a published reference for the fact that parameter
> tests are interpretable. Also, I am wondering whether anyone has written
> anything on the consequences of this problem (i.e., insufficient clusters)
> for tests of overidentification.
>
> Best,
> J.
>
> --
> __________________________________________
>
> Prof. John Antonakis
> Faculty of Business and Economics
> Department of Organizational Behavior
> University of Lausanne
> Internef #618
> CH-1015 Lausanne-Dorigny
> Switzerland
> Tel ++41 (0)21 692-3438
> Fax ++41 (0)21 692-3305
> http://www.hec.unil.ch/people/jantonakis
>
> Associate Editor
> The Leadership Quarterly
> __________________________________________
>
> *
> *   For searches and help try:
> *   http://www.stata.com/help.cgi?search
> *   http://www.stata.com/support/statalist/faq
> *   http://www.ats.ucla.edu/stat/stata/
>


