
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.



Re: st: xtmelogit: comparing models


From   Joerg Luedicke <[email protected]>
To   [email protected]
Subject   Re: st: xtmelogit: comparing models
Date   Fri, 5 Oct 2012 11:42:08 -0500

Have you looked at the raw proportions across your 6 cells
(group x condition)? In your first model you constrain the effect of
group on resp to be the same across levels of condition, and you
constrain the effect of condition on resp to be the same within the
two groups. In your second model you relax these assumptions and allow
the effect of condition to differ across groups, and vice versa. Now,
if you see a pattern in your raw proportions such that the differences
between groups are not the same across conditions, then the less
constrained model probably provides a better representation/summary of
your data. I think it is important first to check which model makes
the most sense, based on theory and what you know about the data, and
to worry about 'testing' later...
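For example, a quick way to look at those raw proportions (a sketch, assuming resp is coded 0/1 and using the variable names from your models) would be:

```stata
* Mean of resp (i.e., the proportion of 1s) and cell size in each
* group x condition cell
table group condition, contents(mean resp n resp)
```

If the group differences look roughly constant across conditions, the additive model (1) is a reasonable summary; if not, that speaks for the interaction model (2).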

If you do need a test, I believe a likelihood-ratio test would be
straightforward and would in this case amount to a kind of omnibus
test of the entire interaction term. If you feel you need to do
pairwise testing, you could run -margins-* and then use the -test-
command with the -mtest- option to account for multiple comparisons.
The fact that the AIC/BIC point to the first model is probably a
result of the difference in log likelihoods not being very large,
combined with the penalty for 5 additional parameters.
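As a sketch of both steps, using your model specifications (the coefficient names I write in the -test- line are illustrative; check the actual names with the coeflegend option first):

```stata
* Omnibus LR test of the interaction: fit and store both models
quietly xtmelogit resp i.group i.condition || _all: R.item, covariance(id) || sbj: , covariance(id)
estimates store m1
quietly xtmelogit resp i.group i.condition i.group#i.condition || _all: R.item, covariance(id) || sbj: , covariance(id)
estimates store m2
lrtest m1 m2

* Pairwise testing of cell margins, adjusted for multiple comparisons
quietly estimates restore m2
margins group#condition, post coeflegend
test (_b[1.group#1.condition] = _b[1.group#2.condition]) ///
     (_b[1.group#2.condition] = _b[1.group#3.condition]), mtest(bonferroni)
```

(But note the caveat below: after -xtmelogit-, these margins are based on the fixed portion of the model only.)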

Joerg

*Beware though, the -margins- command after -xtmelogit- (and
-xtmepoisson-, for that matter) can only use the fixed-effects
parameters for the predictions, and thus the marginal predictions are
not averaged over the random effects. This in turn means that you do
not get 'real' population-averaged margins.


On Fri, Oct 5, 2012 at 10:50 AM, Luca Campanelli <[email protected]> wrote:
> Dear Stata users,
> I’d like to fit and compare mixed effects logistic regression models with crossed random effects using the function xtmelogit (Stata 12IC for Windows).
>
> For example (“group” has 2 levels[0,1] and “condition” has 3 levels[1,2,3]):
> (1) xtmelogit resp i.group i.condition , || _all: R.item, covariance(id)  || sbj: , covariance(id)
> (2) xtmelogit resp i.group i.condition i.group#i.condition , || _all: R.item, covariance(id)  || sbj: , covariance(id)
>
> In comparing two models, I found a big discrepancy between lrtest on one side, and AIC-BIC on the other side. lrtest was highly significant, indicating that (2) was better than (1), while AIC and BIC values were clearly smaller for model (1).
> Which should I trust?
>
> Does this apply to my case http://www.stata.com/support/faqs/statistics/likelihood-ratio-test/  ?
> If yes, how can I do the Wald test?
> Would it be:
> test 1.group#2.condition 1.group#3.condition
>
> Is that correct? I have seen others use testparm or lincom.
> I would appreciate any help in understanding what the appropriate thing to do is.
>
> thank you,
> Luca
>
> *
> *   For searches and help try:
> *   http://www.stata.com/help.cgi?search
> *   http://www.stata.com/support/faqs/resources/statalist-faq/
> *   http://www.ats.ucla.edu/stat/stata/


