


Re: st: Working with really large Datasets

From   "Justina Fischer" <>
Subject   Re: st: Working with really large Datasets
Date   Tue, 16 Oct 2012 01:07:54 +0200

Dear Fernando,

If your data set is a 'population' rather than a sample - perfect. Significance levels are no longer relevant (they have no meaning), so it is sufficient to report just the correlations/coefficients.

Some Stata commands may also allow you to suppress the calculation of standard errors.

Good luck
-------- Original Message --------
> Date: Mon, 15 Oct 2012 18:52:17 -0400
> From: Fernando Rios Avila <>
> To:
> Subject: st: Working with really large Datasets

> Dear Stata listers,
> I wonder if anyone here can share some experience with working with
> really large datasets. I'm working with a panel dataset (census-type
> data) on workers and firms over time, with about 70 million
> observations in total. I want to estimate two-way fixed-effects
> models, manually including dummies for regions, time, and
> industries. However, at this dataset size the estimation becomes
> unmanageable.
> Does anyone know of, or can point me to, a strategy for dealing with
> "too much data"?
> I was thinking about obtaining random samples (say 5%) by picking
> individuals at random, keeping them for the whole time they appear
> in the sample, and then combining all the results, in a similar
> fashion to what is done with Multiple Imputation datasets. But I'm
> not sure how valid that procedure would be.
> Any suggestions are welcome,
> Thank you.
> Fernando Rios
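The sampling idea described in the question - drawing individuals at random and keeping each sampled individual's complete history - could be sketched in Stata roughly as follows. This is only an illustration, not from the original thread; the variable names (worker_id, year, wage, x, region, industry) are hypothetical placeholders.

```stata
* Sketch: draw a 5% random sample of workers, keeping each
* sampled worker's full panel (all years they appear), then
* run the two-way fixed-effects model with manual dummies.
* All variable names here are hypothetical placeholders.

set seed 12345                          // make the draw reproducible

* Build a list of unique worker ids and flag a 5% subsample
preserve
keep worker_id
duplicates drop
gen byte insample = runiform() < 0.05
tempfile ids
save `ids'
restore

* Merge the flag back so every observation of a sampled worker is kept
merge m:1 worker_id using `ids', nogenerate
keep if insample

* Two-way fixed effects with dummies, as described in the post
xtset worker_id year
xtreg wage x i.year i.region i.industry, fe
```

Sampling at the worker level (rather than at the observation level) preserves each sampled individual's within-panel variation, which is what the fixed-effects estimator uses.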

© Copyright 1996–2018 StataCorp LLC