Statalist



Re: st: Multivariate Poisson - Correlation of Error Terms


From   "Joseph Coveney" <jcoveney@bigplanet.com>
To   "Statalist" <statalist@hsphsun2.harvard.edu>
Subject   Re: st: Multivariate Poisson - Correlation of Error Terms
Date   Sun, 13 Jul 2008 15:52:22 +0900

Austin Nichols wrote:

U.G. Narloch ---
-nlsur- estimates the equations jointly but -suest- just reestimates
standard errors (not point estimates). I gather from your question
that you want some postestimation tools for -nlsur- that don't
currently exist...

webuse nhanes2, clear
egen c=group(strat psu)
poisson iron black [iw=fin]
est sto p1
poisson lead black [iw=fin]
est sto p2
suest p1 p2, cl(c)
nlsur (iron=exp({b1}+{b2}*bl)) (lead=exp({b3}+{b4}*bl)) [pw=fin], vce(cl c)

--------------------------------------------------------------------------------

It seemed as if Ulf wanted to do Poisson regression.  It's my understanding
that you don't normally consider correlation of residuals from Poisson
regression, which is why I suggested examining the presence of correlation
via a random-effects (a.k.a. latent variable, factor) approach, such as
what would be the -gllamm- equivalent of something like the following Mplus
code:

VARIABLE:   NAMES = y1 y2 y3 y4 x z1 z2 z3 z4;
            COUNT = y1 y2 y3 y4;
MODEL:      y1 ON x z1;
            y2 ON x z2;
            y3 ON x z3;
            y4 ON x z4;
            f1 BY y1 y2 y3 y4;

where the y's are the response variables ("COUNT =" stipulates Poisson
regression), x is a predictor common to all four equations (there could be
several x's in Ulf's case), and the z's are predictors peculiar to the
respective equations (again, there could be several for each equation).  I
say "something like" because I suspect that the model is naive and could
probably be better specified after some reflection (constraining to zero the
correlations between the z's across equations, for example); whether it's
identified is an exercise still outstanding.  (There might even be a
shortcut through all of this via Mplus's ANALYSIS: TYPE = RANDOM statement;
I'm still getting up to speed.)
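As a rough cross-check of the idea behind the model above, the mechanism by
which a shared factor induces correlation between counts can be simulated
directly.  A minimal sketch in Python rather than Mplus or Stata (the
intercepts and factor loadings are made-up values, and covariates are
omitted to isolate the factor's effect):

```python
import numpy as np

rng = np.random.default_rng(20080713)
n = 20_000

# Shared latent factor f1, as in the "f1 BY y1 y2 ..." statement
f1 = rng.normal(size=n)

# Two Poisson counts loading on the same factor (loadings are illustrative)
y1 = rng.poisson(np.exp(-1.0 + 0.8 * f1))
y2 = rng.poisson(np.exp(-0.5 + 0.8 * f1))

# The same counts with the factor loadings set to zero
y1_0 = rng.poisson(np.exp(-1.0), size=n)
y2_0 = rng.poisson(np.exp(-0.5), size=n)

print(np.corrcoef(y1, y2)[0, 1])      # clearly positive
print(np.corrcoef(y1_0, y2_0)[0, 1])  # essentially nil
```

With the shared factor present the two counts are plainly correlated; with
the loadings zeroed out they are not, which is exactly what a test of
var(f1) > 0 is probing.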

The thinking goes that, if correlation exists, then the random effect, f1,
will show a variance greater than zero (either by the Wald test that's
normally part of the Mplus output, or by a likelihood-ratio test against
the reduced model).
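One caveat on the likelihood-ratio route: the null value var(f1) = 0 sits
on the boundary of the parameter space, so the naive chi-square(1)
reference distribution is conservative; a common correction treats the
reference as a 50:50 mixture of chi2(0) and chi2(1), i.e., halves the naive
p-value.  A sketch of the arithmetic in Python, with hypothetical
log-likelihood values purely for illustration:

```python
from scipy.stats import chi2

# Hypothetical log-likelihoods, for illustration only
ll_reduced = -1502.3  # model without f1 (variance constrained to zero)
ll_full = -1498.1     # model with f1

lr = 2 * (ll_full - ll_reduced)  # likelihood-ratio statistic
p_naive = chi2.sf(lr, 1)         # naive chi-square(1) p-value
p_boundary = 0.5 * p_naive       # boundary-corrected (mixture) p-value

print(lr, p_naive, p_boundary)
```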

If the true multivariate distribution were of interest, then you could try
four random effects and the correlations between them, if you can get an
identified model from that proposal.  Even so, that's a lot of integration
points just to see whether the four equations may be considered
independently.

Joseph Coveney

clear *
set more off
set seed `=date("2008-07-13", "YMD")'
// Simulate 200 observations from a Poisson model with mean exp(xb)
set obs 200
generate double x1 = invnorm(uniform())
generate double x2 = invnorm(uniform())
generate double xb = -1 + 0.5 * x1 + 1.5 * x2
genpoisson y, xbeta(xb)  // user-written Poisson random-variate generator
// Fit the same exponential mean two ways: Poisson ML and nonlinear LS
poisson y x1 x2, nolog
predict y_po, n
nl (y = exp({b0} + {b1} * x1 + {b2} * x2)), initial(b0 -1 b1 0.5 b2 1.5) ///
 nolog
predict y_nl, yhat
// Plot the two sets of fitted values against each other
pause on
graph7 y_nl y_po y_po, xlabel ylabel connect(.L) symbol(oi)
pause
graph7 y_nl y_po y_po, xlabel ylabel connect(.L) symbol(oi) xlog ylog
// Test whether the two sets of fitted values differ systematically
signrank y_po = y_nl
exit
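For anyone wanting to replicate the comparison outside Stata, here is a
rough Python analogue of the do-file above, assuming only numpy and scipy:
the same exponential mean is fitted once by Poisson maximum likelihood
(what -poisson- does) and once by nonlinear least squares (what -nl- does).
All function and variable names below are my own, not Stata's:

```python
import numpy as np
from scipy.optimize import curve_fit, minimize

rng = np.random.default_rng(20080713)
n = 200

# Same design as the do-file: two standard-normal predictors
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
xb = -1.0 + 0.5 * x1 + 1.5 * x2
y = rng.poisson(np.exp(xb)).astype(float)
X = np.column_stack([np.ones(n), x1, x2])

# Poisson maximum likelihood (analogue of -poisson-)
def negll(b):
    eta = X @ b
    return np.sum(np.exp(eta) - y * eta)

def grad(b):
    return X.T @ (np.exp(X @ b) - y)

b_ml = minimize(negll, np.zeros(3), jac=grad, method="BFGS").x

# Nonlinear least squares on the same mean function (analogue of -nl-)
def mean_fn(X, b0, b1, b2):
    return np.exp(b0 + b1 * X[:, 1] + b2 * X[:, 2])

b_nls, _ = curve_fit(mean_fn, X, y, p0=[-1.0, 0.5, 1.5])

print(b_ml)   # both should land near (-1, 0.5, 1.5) ...
print(b_nls)  # ... but not coincide: different estimating equations
```

The two estimators target the same mean function but solve different
estimating equations, so their fitted values agree closely without being
identical, which is what the -signrank- comparison above is checking.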


*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
