
RE: RE: st: Best method for imputing dichotomous variables

From   René Wevers <>
Subject   RE: RE: st: Best method for imputing dichotomous variables
Date   Fri, 15 Feb 2008 13:00:40 +0100 (CET)

I am with the dataset now, and it seems that you were completely right that
it was a distribution problem. After transforming to logarithms, imputing,
and transforming back, the results from -ice- seem fine.
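The transform-impute-back-transform idea can be sketched in a few lines. This is an illustrative Python simulation, not the actual -ice- workflow, and the "company size" data and the crude one-draw imputation are made up for the example:

```python
import math
import random

random.seed(0)

# Simulated skewed "company size" data: roughly log-normal, so far from Gaussian.
sizes = [math.exp(random.gauss(3.0, 1.0)) for _ in range(1000)]

# Work on the log scale, where the distribution is approximately normal.
log_sizes = [math.log(s) for s in sizes]
mu = sum(log_sizes) / len(log_sizes)
sigma = math.sqrt(sum((x - mu) ** 2 for x in log_sizes) / len(log_sizes))

# A crude stand-in for a model-based imputation: draw on the log scale ...
imputed_log = random.gauss(mu, sigma)
# ... and transform back, which guarantees a positive, realistically skewed value.
imputed_size = math.exp(imputed_log)
```

The point of the back-transformation is that any value drawn on the (approximately normal) log scale maps to a strictly positive value on the original scale, so the imputations cannot fall outside the plausible range of the skewed variable.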

Nevertheless, I am left with one last issue on imputation. I also want to
impute a discrete variable, namely the age of companies in years
(integers), with a maximum of 37 years (age has only been measured as of
1967). The distribution of this variable is definitely not normal, but it
is not extremely skewed either.
Can I manipulate my data so that I can apply standard (OLS) regression
to impute with -ice-, or should I apply ordinal logistic regression in this
case?

Again many thanks and greetings,


--- René Wevers <> wrote:
> The basis of the statement is twofold: indeed, one reason comes
> from the hard-to-explain results I got from -ice- for the
> variables I mentioned yesterday.

They are quite easily explainable (though you mentioned that you still
needed to check that): the distribution of company sizes is never
going to be anywhere near a Gaussian (normal) distribution, however
strange your sampling scheme may be.

> However, another reason comes from a simple test I performed with
> -ice-. I randomly created missing values (25%) for a dichotomous
> variable where there were none missing and imputed these 'missing'
> values with -ice-. Afterwards approx. 700 out of 3000 imputed
> values proved to be different from the original values. When I used
> -impute- and rounded the results only 350 out of 3000 imputed values
> were different from the original values. Naturally this is a very
> weak test, but 700 out of 3000 'faulty' imputed values does not give
> me a lot of confidence in -ice- for my case.

I like simulations as a means of gaining understanding of statistical
techniques, and you and I are in good company: there is a working
paper by Stef van Buuren, Jaap Brand, Karin Groothuis-Oudshoorn, and
Don Rubin (who invented multiple imputation) that presents a simulation
study of MICE (R and S-Plus), -ice- (Stata), and IVEware (SAS). (Full
reference below.)

In the past I have posted a number of simulations of -ice- on this list.

Neither Van Buuren et al. nor I could find anything systematically
wrong with -ice-. The reason for the difference between our findings and
yours is that you used the wrong criterion for success: multiple
imputation never claims to be able to recover the unobserved values
themselves; it claims to be able (under the MAR assumption) to recover
the means, proportions, and variances of variables, and the patterns of
association between variables. Counterintuitive as it may sound, the
"better" performance of -impute- is actually the result of the fact that
it is worse than -ice-: it ignores the uncertainty around the prediction.
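This criterion problem can be checked with a small simulation. The following is a hedged Python sketch (not Stata or -ice- itself; the sample size, true proportion, and 25% missingness mirror the test described above): a proper stochastic imputation mismatches *more* individual values than a rounded deterministic prediction, yet only the stochastic version recovers the proportion of ones.

```python
import random

random.seed(1)

n = 3000
p_true = 0.3  # assumed true proportion of ones in the dichotomous variable

data = [1 if random.random() < p_true else 0 for _ in range(n)]

# Knock out 25% of the values completely at random.
missing = set(random.sample(range(n), n // 4))
observed = [data[i] for i in range(n) if i not in missing]
p_hat = sum(observed) / len(observed)

# Proper stochastic imputation: draw each missing value with its estimated probability.
stochastic = {i: (1 if random.random() < p_hat else 0) for i in missing}

# "Impute and round": every prediction collapses to round(p_hat), here 0.
deterministic = {i: round(p_hat) for i in missing}

# Value-by-value agreement with the original (hidden) data.
mismatch_stoch = sum(stochastic[i] != data[i] for i in missing)
mismatch_det = sum(deterministic[i] != data[i] for i in missing)

# Proportion of ones among the imputed values.
prop_stoch = sum(stochastic.values()) / len(missing)
prop_det = sum(deterministic.values()) / len(missing)

# Rounding "wins" on value-by-value agreement, but its imputed proportion is 0,
# while the stochastic draws mismatch more often yet keep the proportion near 0.3.
```

Note the trade-off the comments describe: the deterministic rule gets more individual cells "right" precisely because it throws away the prediction uncertainty, and in doing so it destroys the proportion and the variance of the imputed variable.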

Hope this helps,

Stef van Buuren, Jaap Brand, Karin Groothuis-Oudshoorn, and Don Rubin.
Fully Conditional Specification in Multivariate Imputation. Journal of
Statistical Computation and Simulation, in press. (Simulation study of
the MICE algorithm.)

Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands

visiting address:
Buitenveldertselaan 3 (Metropolitan), room Z434

+31 20 5986715


