# RE: st: testing equality of parameter estimates across two models--suest followed by test

 From "Brent Fulton" To Subject RE: st: testing equality of parameter estimates across two models--suest followed by test Date Wed, 25 Jun 2008 15:42:41 -0700

```
Hi Tim, thanks for your response. I agree the covariance term could matter,
so I used your suggested command -matlist e(V)-.

The cov([m1a]hispanic, [m1b]hispanic) was 0.12, so the standard error of the
0.017 difference between the estimates is quite small (0.011), which makes
the p-value of 0.0991 make sense.

se of difference: sqrt((0.347001)^2 + (0.34728)^2 - 2(0.120451)) = 0.011

Now I realize the covariance term really mattered, but I don't quite know
how to explain it. The standard error of each estimate was quite wide at
0.347, yet the standard error of the difference between these estimators
was only 0.011. The models were very similar; only one independent variable
was added to the second model.

I'd appreciate any intuition on how to explain how the standard error of the
difference is very small relative to the standard error of each parameter
estimate.

Thanks,
Brent

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Tim Wade
Sent: Wednesday, June 25, 2008 2:38 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: testing equality of parameter estimates across two
models--suest followed by test

Brent,

I've never used -suest- with survey estimators and I'm not sure how the
F-statistic is obtained. But I don't think you can rely only on the
standard errors of the coefficients to assess the precision of the
difference estimator. Unless the two models are independent, you need to
take into account the cross-model covariance, which you can list after you
-suest- your models using -matlist e(V)-. If the coefficients have a
positive covariance, this reduces the variance of the (a-b) estimate,
since:
Var(a-b) = Var(a) + Var(b) - 2*Cov(a, b)

for example:

. sysuse auto.dta
(1978 Automobile Data)

. logistic foreign mpg

Logistic regression                               Number of obs   =         74
                                                  LR chi2(1)      =      11.49
                                                  Prob > chi2     =     0.0007
Log likelihood =  -39.28864                       Pseudo R2       =     0.1276

------------------------------------------------------------------------------
     foreign | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mpg |   1.173232   .0616972     3.04   0.002     1.058331    1.300608
------------------------------------------------------------------------------

. est store m1

. logistic foreign mpg headroom

Logistic regression                               Number of obs   =         74
                                                  LR chi2(2)      =      13.39
                                                  Prob > chi2     =     0.0012
Log likelihood =  -38.34058                       Pseudo R2       =     0.1486

------------------------------------------------------------------------------
     foreign | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         mpg |   1.139888   .0623668     2.39   0.017     1.023977    1.268919
    headroom |    .598804    .227678    -1.35   0.177     .2842103    1.261623
------------------------------------------------------------------------------

. est store m2

. suest m1 m2

Simultaneous results for m1, m2

                                                  Number of obs   =         74

------------------------------------------------------------------------------
             |               Robust
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
m1           |
         mpg |   .1597621   .0514512     3.11   0.002     .0589197    .2606046
       _cons |  -4.378866   1.181215    -3.71   0.000    -6.694005   -2.063727
-------------+----------------------------------------------------------------
m2           |
         mpg |   .1309298   .0546012     2.40   0.016     .0239135    .2379461
    headroom |   -.512821   .3168964    -1.62   0.106    -1.133927    .1082846
       _cons |  -2.284788   1.838875    -1.24   0.214    -5.888917    1.319342
------------------------------------------------------------------------------

. matlist e(V)

             | m1                   | m2
             |       mpg      _cons |       mpg   headroom      _cons
-------------+----------------------+---------------------------------
m1           |                      |
         mpg |  .0026472            |
       _cons | -.0590473   1.395269 |
-------------+----------------------+---------------------------------
m2           |                      |
         mpg |  .0026541  -.0605161 |  .0029813
    headroom |  .0012744  -.0545147 |  .0064742   .1004233
       _cons | -.0627595   1.580435 | -.0860442  -.4539693   3.381462

. test [m1]mpg=[m2]mpg

( 1)  [m1]mpg - [m2]mpg = 0

chi2(  1) =    2.60
Prob > chi2 =    0.1071

. scalar num=0.1597621-0.1309298

. scalar denom=0.0026472+0.0029813-(2*0.0026541)

. di chi2tail(1, ((num)/sqrt(denom))^2)
.10717545

Hope this helps, Tim

On Wed, Jun 25, 2008 at 2:51 PM, Brent Fulton <fultonb@berkeley.edu> wrote:
> Hi,
>
> I am using Stata 9.2 and wanted to test whether the parameter estimate for
> the variable "hispanic" was statistically different in model 1 as compared
> to model 2. I assume that -suest- followed by -test- is the appropriate
> procedure, but it is giving me puzzling results. How is it possible, when
> the estimates only differ by 0.017, that the p-value of the Wald test is
> remarkably low at 0.0991? It must be the case that the standard error of
> the 0.017 statistic is very low, but I'm not sure how that's possible
> given that the standard errors of the parameter estimates in both models
> were 0.347.
>
>
> code:
> svy, subpop(if sample1==1): logit adhd_d \$ind1a `geodummies'
> estimates store m1a
> svy, subpop(if sample1==1): logit adhd_d \$ind1b `geodummies'
> estimates store m1b
> suest m1a m1b, svy
> test [m1a]hispanic=[m1b]hispanic
>
> output:
> m1a--hispanic parameter, (se), (p-value): -0.310 (0.347) (0.372)
> m1b--hispanic parameter, (se), (p-value): -0.327 (0.347) (0.346)
>
> Adjusted Wald test (testing equality of hispanic parameter estimates)
> F(  1, 78762) =    2.72
> Prob > F =    0.0991
>
>
> Thanks,
> Brent Fulton
>
> p.s. note that this is a similar question to the link below, but no answer
> was posted
> http://www.stata.com/statalist/archive/2007-10/msg00870.html
>
> *
> *   For searches and help try:
> *   http://www.stata.com/support/faqs/res/findit.html
> *   http://www.stata.com/support/statalist/faq
> *   http://www.ats.ucla.edu/stat/stata/
>
```
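Brent's question at the top of the thread, why the standard error of the difference (0.011) is so much smaller than each coefficient's standard error (0.347), comes down to the implied cross-model correlation. A quick numerical check (Python, outside Stata, using only the numbers quoted in the thread):

```python
import math

# Numbers quoted in the thread for the hispanic coefficients
se_a, se_b = 0.347001, 0.34728      # standard errors in m1a and m1b
cov_ab = 0.120451                   # cross-model covariance from e(V)

# Implied cross-model correlation and SE of the difference
corr = cov_ab / (se_a * se_b)
se_diff = math.sqrt(se_a**2 + se_b**2 - 2 * cov_ab)

print(f"corr    = {corr:.4f}")      # nearly 1: the estimators move together
print(f"se_diff = {se_diff:.4f}")   # matches the 0.011 in the thread
```

The implied correlation is about 0.9995: the two coefficients are estimated from (almost) the same model on the same data, so their sampling errors are nearly identical and cancel in the difference.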
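The same intuition can be illustrated by simulation. The sketch below uses hypothetical numbers (not the thread's survey models): two estimators that share almost all of their sampling noise, as two nested models fit to the same data typically do, reproduce the pattern of wide individual standard errors but a tiny standard error for the difference, and satisfy the identity Tim quotes exactly.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical setup: estimator b equals estimator a plus a small
# independent perturbation, so they share almost all sampling noise.
n = 100_000
a = [random.gauss(0, 0.347) for _ in range(n)]            # common noise
b = [x + random.gauss(0, 0.011) for x in a]               # + small extra noise

mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
var_a = statistics.variance(a)
var_b = statistics.variance(b)
cov_ab = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / (n - 1)
var_diff = statistics.variance([x - y for x, y in zip(a, b)])

# Var(a-b) = Var(a) + Var(b) - 2*Cov(a,b) holds exactly for sample moments
assert math.isclose(var_diff, var_a + var_b - 2 * cov_ab, rel_tol=1e-6)
print(f"sd(a)   = {math.sqrt(var_a):.3f}")                # wide, ~0.347
print(f"sd(a-b) = {math.sqrt(var_diff):.3f}")             # tiny, ~0.011
```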
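Tim's hand computation from the auto.dta example can be replicated outside Stata as well. The sketch below uses only the coefficients and e(V) entries printed in the thread, and the fact that `chi2tail(1, z^2)` equals the two-sided normal p-value for z:

```python
import math

# Coefficients and e(V) entries printed in Tim's suest output
num = 0.1597621 - 0.1309298                       # [m1]mpg - [m2]mpg
denom = 0.0026472 + 0.0029813 - 2 * 0.0026541     # Var(a)+Var(b)-2*Cov(a,b)

z = num / math.sqrt(denom)
chi2 = z ** 2                                     # Wald chi2(1) statistic
p = math.erfc(abs(z) / math.sqrt(2))              # = chi2tail(1, z^2)

print(f"chi2 = {chi2:.2f}, p = {p:.8f}")          # ~2.60 and ~0.1072
```

This matches the -test- output (chi2(1) = 2.60, Prob > chi2 = 0.1071) and Tim's `di chi2tail(...)` result of .10717545.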