
st: RE: need mini tutorial in ttest result reading


From   "Nick Cox" <[email protected]>
To   <[email protected]>
Subject   st: RE: need mini tutorial in ttest result reading
Date   Mon, 5 May 2003 17:33:02 +0100

Ada Ma
> 
> I've been staring at the results for days and have read the
> related part of the Stata manual 10 times or more, and I'm still
> confused.  I want to find out whether the results tell me that I
> can reject these hypotheses:
> 
> H0: mean(he) = mean(hexo)  [HA: mean(he)~=mean(hexo)]
> H0: sd(he) = sd(hexo)      [HA: sd(he)~=sd(hexo)]
> 
> Do the following results tell me that I can reject both HA and
> thus accept H0?  Is it that the larger the P figure, the more
> likely I'll have to accept HA?
> 
> 
> . ttest LFShe=LFShePOT if regwk==1, unpaired
> 
> Two-sample t test with equal variances
> 
> ------------------------------------------------------------------------------
> Variable |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
> ---------+--------------------------------------------------------------------
>    LFShe |     560    9.520129    .2146156    5.078732    9.098578    9.941681
> LFShePOT |     560    8.862436    .2016839    4.772713    8.466285    9.258587
> ---------+--------------------------------------------------------------------
> combined |    1120    9.191283    .1475172     4.93687    8.901841    9.480724
> ---------+--------------------------------------------------------------------
>     diff |            .6576928    .2945102                .0798378    1.235548
> ------------------------------------------------------------------------------
> Degrees of freedom: 1118
> 
>                  Ho: mean(LFShe) - mean(LFShePOT) = diff = 0
> 
>      Ha: diff < 0               Ha: diff ~= 0               Ha: diff > 0
>        t =   2.2332                t =   2.2332                t =   2.2332
>    P < t =   0.9871          P > |t| =   0.0257            P > t =   0.0129
> 
> . sdtest LFShe=LFShePOT if regwk==1
> 
> Variance ratio test
> 
> ------------------------------------------------------------------------------
> Variable |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
> ---------+--------------------------------------------------------------------
>    LFShe |     560    9.520129    .2146156    5.078732    9.098578    9.941681
> LFShePOT |     560    8.862436    .2016839    4.772713    8.466285    9.258587
> ---------+--------------------------------------------------------------------
> combined |    1120    9.191283    .1475172     4.93687    8.901841    9.480724
> ------------------------------------------------------------------------------
> 
>                         Ho: sd(LFShe) = sd(LFShePOT)
> 
>              F(559,559) observed   = F_obs           =    1.132
>              F(559,559) lower tail = F_L   = 1/F_obs =    0.883
>              F(559,559) upper tail = F_U   = F_obs   =    1.132
> 
>    Ha: sd(1) < sd(2)         Ha: sd(1) ~= sd(2)           Ha: sd(1) > sd(2)
>   P < F_obs = 0.9290     P < F_L + P > F_U = 0.1420      P > F_obs = 0.0710

I am not clear whether you intend some kind of joint test or
whether one test is considered a prerequisite for the other.

Setting that aside, many statisticians would say your
question(s) cannot be answered, because you don't tell us what
your alternative hypothesis was before you carried out the
t test (e.g. two-tailed or one-tailed). According to the same
conservative view, the test itself is dubious, if not
meaningless, unless that is sorted out in advance.

However, informally, I imagine most users would feel
encouraged, if not obliged, to reject the null hypothesis of no
difference between means and to accept instead the alternative
hypothesis of a positive difference, on this evidence and at a
level of 0.05. Clearly, if you use a different threshold
(notably 0.01), the decision may differ.
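Concretely, the relevant figures in your output are
P > |t| = 0.0257 for the two-sided alternative and
P > t = 0.0129 for the one-sided alternative of a positive
difference; the 95% confidence interval for the difference,
.0798378 to 1.235548, excludes zero.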

This year marks the 50th anniversary of George 
Edward Pelham Box's paper in Biometrika which pointed 
out that the t test for means is in 
practice much more robust than a test comparing variances. 
He used some more colourful language: if I recall correctly, he
compared the common practice of an F test before a t test to
putting out to sea in a dinghy to see whether it was safe for
an ocean liner to leave port.

Having said that, 

1. Looking at confidence intervals is often preferable. 

2. I'd still look at a graph to compare the 
whole of the distributions, e.g. -qqplot-.
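
For 2., something along these lines should work with the
variable names from your post (the interval you need for 1. is
already printed on the diff row of the -ttest- output):

. qqplot LFShe LFShePOT if regwk==1

That plots the quantiles of LFShe against those of LFShePOT, so
you can see whether the two distributions differ throughout
their range rather than just in their means.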

Not what you asked, but possibly relevant 
to the scientific problem which presumably 
underlies this. 

Nick 
[email protected] 



