RE: st: RE: St: Panel data imputation
Nick Cox <firstname.lastname@example.org>
Tue, 21 Sep 2010 14:59:41 +0100
I don't see how your problem statement leads to that conclusion. If you have just revenue and year, an answer will also depend on your economic understanding of how revenue varies with year. Also, if you have just revenue and year, it is as far as I can see very much an interpolation problem.
Conversely, if you have variables other than revenue and year, then quite what to do depends on the rest of your problem situation, and I think Maarten will join me in feeling unable to advise definitively without -- and probably even with -- further information.
Here as elsewhere there is an enormous jump between being able to make specific comments on techniques or Stata and being able to advise people what is the best thing to do on their projects, let alone what is "correct".
Thank you, Nick and Maarten, for the very detailed response. Very
helpful. Given the limitations of this command, it looks like
multiple imputation would be the best approach to dealing with the
missing values. Am I understanding it correctly?
The straight answer to this question is that -- as the help for
-ipolate- makes clear -- there is an -epolate- option which you can
use at your peril to fill in values at the ends of your series. This
will work with panel data too, in the sense that you will get what
you ask for.
Note that -ipolate- is a command, not a function.
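A minimal sketch of the above, per panel (the variable names revenue and year come from the question; the panel identifier id is an assumption, as the original poster did not name one):

```stata
* Linear interpolation of revenue against year within each panel;
* -epolate- also extrapolates beyond the first and last observed
* years -- use at your peril, as noted above.
bysort id (year): ipolate revenue year, gen(revenue_i) epolate
```

Without -epolate-, gaps at the start or end of each series are left missing, which is exactly the behaviour the original question describes.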
On the larger issue, raised by Maarten Buis, I hope we could all agree
that interpolation, which has a centuries-old history, is not quite a
kind of imputation, which is currently so fashionable as a species of
magic. (Naturally, your definition of imputation might be so wide that
interpolation is a special case; I would want to suggest that such a
definition will only lead to misunderstanding.)
I can see various advantages and disadvantages:
1. Interpolation is usually relatively simple to define. The linear
interpolation offered by -ipolate- certainly qualifies.
2. Interpolation is in various senses unstatistical, as
a. it takes account of at most local structure and works with data one
variable at a time.
b. it typically reduces variability, which distorts statistical
analysis to an unknown extent.
c. it is deterministic, so it is not accompanied by any estimate of error.
Clearly, this isn't a complete characterisation. Also, it simplifies a
more complicated picture.
I am at an extreme position within this list, as I have never used
imputation, but I have often used interpolation for gappy time series
or spatial data with no covariates. Such work has had as side-effects
the programs -cipolate- and -csipolate- on SSC.
If you are using interpolation, I have some hackneyed pieces of advice:
* Get a feeling for how interpolation treats data like yours by
introducing gaps in good quality data and seeing how successful it is
at reproducing known values.
* Try different kinds of interpolation to get a sense of how far they
agree.
* Go very easy on the extrapolation.
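The first piece of advice above can be sketched as follows (a hedged illustration, not from the thread; revenue and year are the question's variables, and the 20% gap rate and seed are arbitrary assumptions):

```stata
* Punch artificial gaps into a complete series, interpolate, and
* compare the filled-in values with the known true ones.
set seed 12345
gen revenue_test = revenue
replace revenue_test = . if runiform() < 0.2 & !missing(revenue)
ipolate revenue_test year, gen(revenue_fill)
* Interpolation error at the artificially gapped observations only
gen error = revenue_fill - revenue if missing(revenue_test)
summarize error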
This commentary steals one cogent remark made by Patrick Royston in a
conversation at the recent London users' meeting.
-ipolate- is generally not a good imputation method. Look at -help mi-
or -findit ice- instead.
I have panel data (year and revenue) and would like to use the
ipolate function to impute the missing values for some years. What kind
of data will not be imputed if I use this method? It looks like,
when I have missing values for the beginning year or the ending
year, this method will not impute the missing values in those years. Is
there a way to deal with this problem?