Statalist



st: RE: IRT with GLLAMM


From   "Joseph Coveney" <[email protected]>
To   <[email protected]>
Subject   st: RE: IRT with GLLAMM
Date   Thu, 5 Mar 2009 11:21:23 +0900

Have you tried simplifying the model, e.g., fitting a one-parameter IRT (Rasch)
model, to see whether you can achieve convergence?  If so, then use the fit as
starting values for the two-parameter IRT model.  If the condition number of the
Rasch model is not good, then empirical underidentification will be even worse
for the two-parameter IRT model.
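
In -gllamm- syntax, that strategy might look something like the following--a
sketch only, untested, with variable names taken from your post below (see
-help gllamm- for the exact spelling of the starting-values options):

* fit the 1PL (Rasch) model first: all loadings implicitly fixed at one
xi, noomit i.Question
gllamm Correct _IQ*, family(binomial) link(logit) i(CourseID) nocons adapt
matrix b1pl = e(b)
* then start the 2PL fit from the 1PL estimates via -from()-; -skip- is
* meant to let the two parameter lists differ--check -help gllamm- in
* case your version handles mismatched starting vectors differently
eq diff : _IQ*
constraint 55 [Cou1_1]_IQuestion_4 = 1
gllamm Correct _IQ*, family(binomial) link(logit) i(CourseID) ///
    eq(diff) nocons constraints(55) from(b1pl) skip adapt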

If you have access to Mplus, you might give it a try.  Example 5.5 in the
user's guide is, as I recall, a two-parameter IRT model with binary test items.

I must be misconstruing your description of the data: from your post, it seems
that there are test items with identical scores--e.g., several items that
everyone got right or everyone got wrong, or where the one student who got the
first item wrong (right) is the only person who got the second and subsequent
items wrong (right).  If that is the case, then you might have better luck
discarding all but one item from each such set.  It's been my belief that
multiple items with identical score patterns are trouble.
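
One quick way to find such items--a rough sketch, assuming the variable names
from your post (and that CourseID is numeric, since -xpose- handles only
numeric values):

* transpose the student-by-item data so that each observation is an item,
* then flag items whose response vectors are exact duplicates
preserve
keep CourseID Question Correct
reshape wide Correct, i(CourseID) j(Question)
xpose, clear varname
drop if _varname == "CourseID"
duplicates tag v*, generate(dup)
list _varname dup if dup > 0, noobs
restore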

My only experience with Rasch and two-parameter IRT models is with
ordered-categorical responses, so I haven't looked into -xtmelogit-.  But, to my
knowledge, none of official Stata's mixed-model commands allows -constraints()-.
(It's been on my wish list since -xtmixed- first appeared.*)  Without the
ability to impose constraints, I'm not sure how you'd set up these kinds of
factor models.

Joseph Coveney

* The announcement yesterday of Bobby Gutierrez's course
http://www.icpsr.umich.edu/cocoon/sumprog/course/0042.xml piqued my interest:
"The theme of the third day can best be described as tricks of the trade,
covering . . . models with complex and grouped constraints on covariance
structures."

Stas Kolenikov wrote:

My data set consists of students (the CourseID variable), their test
questions (Question), and a 0/1 indicator (Correct) of whether they answered
each question correctly. The data are in the long form appropriate for
GLLAMM. I am modeling the questions as fixed effects (parameters of the
model) and the students as random factors. Here's what I have:

* generate dummy variables for questions
xi, noomit i.Question
* specify the equation for the random factor loadings: the dummy
* variables from the previous command
eq diff : _IQ*
* fix one loading to identify the scale of the random factor
constraint 55 [Cou1_1]_IQuestion_4 = 1
* call to gllamm: 2-parameter IRT
gllamm Correct _IQ*, family(binomial) link(logit) i(CourseID) ///
    eq(diff) nocons constraints(55)

I am specifying -nocons- so that each question has its own intercept
(minus sensitivity times difficulty, in IRT terms), and the factor loadings
from the -eq()- option should give me the sensitivities.
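
Spelling out the parameterization I have in mind (notation mine): with
\theta_i the student factor, the two-parameter model is

\[
\operatorname{logit}\Pr(y_{ij} = 1 \mid \theta_i)
  = \lambda_j \theta_i + \beta_j
  = \lambda_j (\theta_i - b_j),
\qquad b_j = -\beta_j / \lambda_j ,
\]

so each item's intercept \beta_j is minus its sensitivity \lambda_j times its
difficulty b_j, and the \lambda_j are the loadings estimated through -eq()-.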

-gllamm-, however, has trouble converging. Does it have to do with
empirical underidentification? Do I need to search for a better
identifying variable? Question 4 above was the first on the list that
had any variability; everybody answered the first three questions
correctly. It is probably not a great question for identification,
either: only a couple of people missed it. My sample sizes are not
terrific: I have about 40 students and about 30 questions. And there
are lots of easy questions that were missed by only one, two, or three
students. If only one student missed a question, then I probably
won't be able to identify two parameters for that question, right?
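
For reference, a quick way to see which items have any variability at all
(a sketch, untested):

* count how many students missed each question; nwrong == 0 means no
* variability, and nwrong of 1 or 2 makes for a weak anchor item
egen nwrong = total(1 - Correct), by(Question)
egen onepq = tag(Question)
gsort -nwrong
list Question nwrong if onepq, noobs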

Finally, since we are talking about random effects logit in Stata, is
there any way to run this with -xtmelogit-? It should be faster, at
least.
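
The Rasch version, at least, looks like it should be something along these
lines (a sketch, untested):

* 1PL in -xtmelogit- (Stata 10+): question dummies as fixed effects and
* a random intercept for students; the 2PL would additionally need
* constraints on the random part, which -xtmelogit- does not allow
xi, noomit i.Question
xtmelogit Correct _IQ*, noconstant || CourseID: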






