
[no subject]

For ADePT, it would make a great deal of sense to pick up the
svy settings from whatever the existing data set might have. The
researchers the package is targeted at will, in all likelihood, not
have the technical sophistication to recognize the issues, but I would
expect that most data providers who release public versions of micro
data on the web would either have the svy settings embedded in their
data sets, or provide raw text files with .do-files to read them in and
-svyset- them. So in all likelihood, the user will have an -svyset-
data set even without realizing it. Also, for ADePT, you'd want to
stick to one variance estimation method for everything, unless there
are very strong reasons for doing otherwise (so I can see where your
question is coming from). In terms of speed, -vce(linearized)- is
often the fastest, although -svy : mean, over()- with 100 categories may
actually run faster with a few dozen BRR/jackknife replications. A strong
reason for an "otherwise" method of variance estimation is the inconsistency
of -vce(linearized)- observed for the Gini and some other weird statistics.
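To illustrate the "pick up what is already there" idea, here is a minimal
sketch; the design variable names (psu, stratum, wgt, income, region) are
made up for the example, and the fallback design is purely hypothetical:

```stata
* Use the svy design already embedded in the data set if there is one;
* otherwise fall back to -svyset-ting it from assumed design variables.
capture quietly svydescribe          // errors out if data are not -svyset-
if _rc {
    svyset psu [pweight = wgt], strata(stratum) vce(linearized)
}

* One variance method for everything, per the argument above:
svy linearized : mean income, over(region)
```

The -capture- guard is what lets the same script run both on data sets
that ship already -svyset- and on raw files the user set up by hand.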

BTW, in your long listing of possible options and their combinations,
you are forgetting Fay's adjustment (for BRR weights) and the delete-k
jackknife options. Theoretically, your data sets might have these
crazy things in them, too :)). There's also the -mse- vs. default
variance option that can go with BRR or jackknife, but I cannot imagine
anybody seriously interested in modifying it other than to reproduce
somebody else's results to six digits.
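For concreteness, a sketch of how those extra options would show up in an
-svyset- call; all variable names and numeric values here are invented:

```stata
* BRR replicate weights with Fay's adjustment, MSE-style variance:
svyset psu [pweight = wgt], strata(stratum) ///
    brrweight(brr_1-brr_80) fay(.3) vce(brr) mse

* Delete-k jackknife via replicate weights with a multiplier:
svyset [pweight = wgt], ///
    jkrweight(jkw_1-jkw_60, multiplier(.9833)) vce(jackknife) mse
```

A data provider shipping replicate weights would normally document the
Fay factor or jackknife multiplier, so code reading such files has to
carry these through, even if end users never touch them.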

Stas Kolenikov, also found at
Small print: I use this email account for mailing lists only.
