Statalist The Stata Listserver



st: Re: Too many sample points in microeconometric analyses?


From   evan roberts <[email protected]>
To   [email protected]
Subject   st: Re: Too many sample points in microeconometric analyses?
Date   Thu, 26 Apr 2007 08:38:37 -0500

Along with all the other useful replies to Karsten Staehr's question, these papers by McCloskey might be useful reading for anyone interested in this topic.

McCloskey, Deirdre N., and Stephen T. Ziliak. "The Standard Error of Regressions." Journal of Economic Literature 34, no. 1 (1996): 97-114.

McCloskey, Donald N. "The Loss Function Has Been Mislaid: The Rhetoric of Significance Tests." American Economic Review 75, no. 2 (1985): 201-05.

The basic advice is to focus on substantive relationships and effects, not p-values.

Evan Roberts


I have discussed with a co-author whether datasets used for microeconometric
analyses can be "too large" in the sense of comprising "too many"
observations. With a very large sample size (e.g. over 10,000 observations),
very many estimated coefficients tend to be significant at the 1% level. My
co-author argues that such datasets with very many observations lead to
"inflated significance levels" and that one should be careful about the
interpretation of the estimated standard errors. He suggests reducing the
sample size by randomly drawing a smaller sample from the original sample.
My questions are: 1) Can sample sizes be "too large", leading to too-small
standard errors? 2) Does anybody have a reference to papers discussing this
issue? 3) Could it be related to possible misspecification problems of the
model?
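To make the point behind the question concrete: a quick simulation (not from the thread; a hypothetical sketch in Python rather than Stata, using only the standard library) shows that with a very large n, a substantively negligible true slope is still "significant at the 1% level". The sample size, effect size, and variable names below are illustrative assumptions.

```python
import math
import random

random.seed(1)
n = 1_000_000        # very large sample, as in the question
beta = 0.01          # substantively negligible true effect

x = [random.gauss(0, 1) for _ in range(n)]
y = [beta * xi + random.gauss(0, 1) for xi in x]

# Simple OLS slope and its standard error, computed by hand.
mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b = sxy / sxx
resid = [(yi - my) - b * (xi - mx) for xi, yi in zip(x, y)]
s2 = sum(e ** 2 for e in resid) / (n - 2)
se = math.sqrt(s2 / sxx)

# Normal approximation to the t-test (fine at this n).
z = b / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"slope = {b:.4f}, se = {se:.5f}, z = {z:.1f}, p = {p:.2e}")
```

The standard error shrinks at rate 1/sqrt(n), so here se is about 0.001 and the tiny slope yields z around 10: overwhelmingly "significant", yet economically trivial. This is the sense in which the p-value answers a different question than "does the effect matter?", which is McCloskey's point.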

Karsten Staehr
*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


