Statalist The Stata Listserver


Re: st: types of standard error

Subject   Re: st: types of standard error
Date   Tue, 23 May 2006 18:09:03 +0100

I have heard of the OIM and EIM, as well as the bootstrap, jackknife, and 
robust estimators, but I'd like to learn more about their relative 
advantages. A simple treatment of the whole topic would therefore be 
handy, especially since few seem to have heard of the other ones: 
OPG (outer product of gradients), HAC, the one-step jackknife, and the 
unbiased sandwich. For example, is the unbiased sandwich generally to be 
preferred over the normal sandwich (robust)? 
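(For reference, these estimators are all requested through -glm-'s vce() 
option; y, x1, and x2 below are placeholder variables, and the family is 
just an example:)

```stata
. glm y x1 x2, family(poisson)                 // OIM (the default)
. glm y x1 x2, family(poisson) vce(opg)        // outer product of gradients
. glm y x1 x2, family(poisson) vce(robust)     // Huber/White sandwich
. glm y x1 x2, family(poisson) vce(jackknife)  // delete-1 jackknife
. glm y x1 x2, family(poisson) vce(unbiased)   // "unbiased" sandwich
```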

I feel it might be nice if somebody could write an article on Stata 


Phil Schumm
23/05/2006 17:57

Re: st: types of standard error

On May 23, 2006, at 9:03 AM, wrote:
> For example, in glm, we have the option of using:
> OIM, EIM, OPG, HAC, jackknife, one-step jackknife, unbiased 
> sandwich...

If you really want to learn more about the differences between these 
estimators, that's great; be forewarned, however, you're not going to 
get very far without delving into some of the math.  The distinction 
between the expected information (EIM) and observed information (OIM) 
is an important one, and comes out of basic likelihood theory (any 
basic book on statistical theory will discuss this).  In some cases 
(e.g., generalized linear models with canonical link), they are 
equivalent.  A classic paper comparing the two is "Assessing the 
accuracy of the maximum likelihood estimator: Observed versus 
expected Fisher information" (Efron and Hinkley, Biometrika (65) 
1978).  This paper argues that the observed information is in general 
more appropriate, which (I presume) is why -glm- produces it by default.
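Concretely, if ℓ(θ) is the log likelihood, the two information matrices 
differ only in whether an expectation is taken:

```latex
% Observed information: negative Hessian of the log likelihood at the MLE
I(\hat\theta) = -\left.\frac{\partial^2 \ell(\theta)}
  {\partial\theta\,\partial\theta^{\mathsf T}}\right|_{\theta=\hat\theta}

% Expected (Fisher) information: its expectation under the assumed model
\mathcal{I}(\theta) = \operatorname{E}\!\left[-\frac{\partial^2 \ell(\theta)}
  {\partial\theta\,\partial\theta^{\mathsf T}}\right]
```

In either case the variance estimate is the inverse of the information 
matrix evaluated at the MLE.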

The robust (sandwich) estimator is an attempt to relax the model 
assumptions on which the standard calculation is based.  For a simple 
introduction (with examples), see "Model Robust Confidence Intervals 
Using Maximum Likelihood Estimators" (Royall, International 
Statistical Review (54) 1986).  Also, as I said before, see [U] 20.14 
and the references cited therein.
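In sketch form, writing ℓ_i for observation i's log-likelihood 
contribution, the sandwich estimator replaces the inverse information 
with:

```latex
\widehat{\operatorname{Var}}(\hat\theta) = A^{-1} B A^{-1}, \qquad
A = -\sum_i \frac{\partial^2 \ell_i}{\partial\theta\,\partial\theta^{\mathsf T}},
\qquad
B = \sum_i \frac{\partial \ell_i}{\partial\theta}
           \frac{\partial \ell_i}{\partial\theta^{\mathsf T}}
```

When the model is correctly specified, A and B estimate the same 
quantity and this collapses (asymptotically) to the usual inverse 
information; when it isn't, the "bread-filling-bread" form still gives a 
consistent variance estimate, which is the point of the robust option.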

Efron and Tibshirani's book on the bootstrap (the one I suggested 
earlier) actually treats both the bootstrap and the jackknife in 
detail, and discusses the relationship between them and several other 
estimates of variance (e.g., expected and observed information, 
sandwich, etc.).  In this respect, it is one of the better (and more 
accessible) treatments of the differences among the various estimators.
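(Both resampling estimators are also available directly from -glm-; the 
variables are placeholders and reps(200) is just an illustrative choice:)

```stata
. glm y x1 x2, family(poisson) vce(bootstrap, reps(200))
. glm y x1 x2, family(poisson) vce(jackknife)
```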

Finally, if you're primarily interested in this issue WRT -glm-, I 
believe Hardin and Hilbe's book (glmext.html) has a good discussion of 
the differences between the 
various estimators available.

Keep in mind, more choice (and the ease with which the different 
estimates may be obtained and compared) is a very good thing.  You 
can never go wrong by trying the different estimators to verify that 
they give similar results (which they often do).  In cases where the 
results are substantially different, however, you need to be careful 
picking one over the others.  And as always, the justifications for 
many of these estimators rely on asymptotic arguments, and therefore 
you should always be careful when applying them to data from small 
samples.
-- Phil

*   For searches and help try:

© Copyright 1996–2017 StataCorp LLC