Statalist The Stata Listserver


Re: st: Using weights in GLLAMM

From   "Stas Kolenikov" <>
Subject   Re: st: Using weights in GLLAMM
Date   Fri, 18 Aug 2006 13:49:44 -0500

With Canadian data, I would ask Milorad Kovacevic at Statistics
Canada what the procedure should be. Your data set should have
come with level 2 weights if the data collecting agency considered
them relevant. Multilevel models, as far as I know, are sensitive to
the highest-level weights for getting both the point estimates and
the standard errors right. As for the level 1 weights, you basically
tweak them to get good "within" estimates, such as the estimate of
the level 1 variance; see the most famous paper on the topic. There
have been some other approaches as well, including Kovacevic's
paper, which I don't have at hand.
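A minimal sketch of what this might look like in gllamm, assuming hypothetical variable names (y, x1, x2 for the model, w1 for the survey's level 1 weight, pcarea for the postal code area): gllamm's pweight(wp) option expects weight variables named wp1, wp2, ... for levels 1, 2, .... The level 1 scaling shown here (rescaling weights to sum to the cluster sample size) is one common choice, not the only one.

```stata
* Scale level-1 weights so they sum to the cluster sample size
* (one of the scalings discussed in the multilevel-weighting literature).
egen double sumw = total(w1), by(pcarea)
egen double nj   = count(w1), by(pcarea)
generate double wp1 = w1 * nj / sumw   // scaled level-1 weight
generate double wp2 = 1                // level-2 weight: placeholder only;
                                       // use agency-supplied level-2 weights
                                       // if they exist

* Two-level random-intercept logit; pweight(wp) picks up wp1 and wp2
gllamm y x1 x2, i(pcarea) family(binomial) link(logit) pweight(wp) adapt
```

Setting wp2 = 1 here just reproduces gllamm's default of unweighted level 2 units; the point of the sketch is only where each weight variable plugs in.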

On 8/18/06, Stefan Kuhle <> wrote:
Dear All,

I am working with a large dataset (n=35000) from a nationwide Canadian
survey. I would like to run a 2-level logit model with random slope using
GLLAMM (nesting individuals in their postal code area).

The frequency weights provided with the survey dataset would obviously be my
level 1 weights. However, I don't quite understand what to use as the level
2 weights in this analysis. GLLAMM's default of setting them to 1 doesn't seem
right to me in this case. Instead, I set them to the number of observations
in each postal code area (egen lev2wgt = count(xyz), by(pcarea)) in the
dataset, but I'm not sure whether this is correct either.

Any suggestions?



Stas Kolenikov