Re: st: Too many sample points in microeconometric analyses?
Karsten Staehr wrote:
I have discussed with a co-author whether datasets used for microeconometric
analyses can be "too large" in the sense of comprising "too many"
observations? With a very large sample size (e.g. over 10,000 observations),
very many estimated coefficients tend to be significant at the 1%-level. My
co-author argues that such datasets with very many observations lead to
"inflated significance levels" and one should be careful about the
interpretation of the estimated standard errors. He suggests reducing the
sample size by randomly drawing a smaller sample from the original sample.
My questions are: 1) Can sample sizes be "too large", leading to too-small
standard errors? 2) Does anybody have a reference to papers discussing this
issue? 3) Could it be related to possible misspecification problems of the
Classical/Fisherian theory requires that the best critical region for a test be adjusted
for the sensitivity of the test -- in particular, for the sample size. See, for example,
Pitman, 1965, "Some remarks on statistical inference," in "Bernoulli, Bayes, and Laplace,"
edited by Neyman and Le Cam, Springer.
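The point about sensitivity can be made concrete with a minimal sketch (my own illustration, not from the thread; the effect size and sigma are arbitrary): for a fixed, tiny true effect, the z statistic grows with the square root of n, so at any fixed significance level the effect eventually tests as "significant" no matter how small it is.

```python
# Sketch: a fixed small effect becomes "significant" at any fixed alpha
# once n is large enough, because z grows like sqrt(n).
import math

def z_stat(effect, sigma, n):
    """z statistic for testing H0: mean = 0 with known sigma."""
    return effect / (sigma / math.sqrt(n))

effect, sigma = 0.02, 1.0   # economically negligible true effect
for n in (100, 10_000, 1_000_000):
    print(n, round(z_stat(effect, sigma, n), 2))
# n = 100       -> z = 0.2   (nowhere near significant)
# n = 10,000    -> z = 2.0   (significant at 5%, two-sided)
# n = 1,000,000 -> z = 20.0  (wildly "significant")
```

This is why a critical value that ignores n conflates statistical and substantive significance in very large samples.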
Aitkin, 1991, "Posterior Bayes factors," Journal of the Royal Statistical Society, Series B, 53, 111-142.
One approach to addressing this issue is the Schwarz criterion/Minimal prior information
Posterior Odds criterion. See, for example,
Klein and Brown, 1984, "Model selection when there is 'minimal' prior information,"
Econometrica, (52), 1291-1312.
Leamer, 1978, "Specification Searches: Ad Hoc Inference with Nonexperimental Data," John
Wiley & Sons.
Schwarz, 1978, "Estimating the dimension of a model," Annals of Statistics 6(2), 461-464.
Spiegelhalter and Smith, 1982, "Bayes factors for linear and log-linear models with vague
prior information," Journal of the Royal Statistical Society, Series B, 44, 377-387.
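How the Schwarz criterion addresses the issue can be sketched as follows (my own illustration under the usual BIC approximation, not code from the thread): comparing a model with one extra parameter, BIC effectively replaces the fixed chi-squared cutoff with log(n) per added parameter, so the evidence bar rises with sample size.

```python
# Sketch: BIC model comparison raises the cutoff with n. A Wald chi2(1)
# statistic of 8 is significant at 1% (fixed cutoff ~6.63) for any n,
# but BIC stops favoring the larger model once log(n) exceeds 8.
import math

def bic_prefers_larger(wald_chi2, extra_params, n):
    """True if BIC favors the model with extra_params more parameters."""
    return wald_chi2 > extra_params * math.log(n)

wald = 8.0  # chi2(1) statistic, significant at the 1% level
for n in (1_000, 10_000, 100_000):
    print(n, bic_prefers_larger(wald, 1, n))
# n = 1,000   -> True  (log(n) ~ 6.9 < 8)
# n = 10,000  -> False (log(n) ~ 9.2 > 8)
# n = 100,000 -> False (log(n) ~ 11.5 > 8)
```

So rather than subsampling the data, the criterion scales the required evidence with n.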
I hope these references prove useful.