

From: "Carlo Lazzaro" <carlo.lazzaro@tin.it>

To: <statalist@hsphsun2.harvard.edu>

Subject: st: R: Test for statistical significance - flag Stata 9.2/SE

Date: Tue, 31 Aug 2010 18:43:39 +0200

Cornelius may want to calculate a bootstrap two-sided p-value instead of relying on the results of a probably badly behaved unpaired ttest (I would assume that the serum manganese distribution in humans departs from normality):

--------------- code begins ------------------------------------------
set obs 100
g Manga_1=uniform()*1
g Manga_2=uniform()*1
ttest Manga_1 == Manga_2, unpaired unequal
return list
scalar tobs=r(t)
replace Manga_1=Manga_1-r(mu_1)+(r(mu_1)+r(mu_2))/2
replace Manga_2=Manga_2-r(mu_2)+(r(mu_1)+r(mu_2))/2
bootstrap r(t), reps(10000) nodots saving(C:\Documents and Settings\carlo\Desktop\Manga_boot.dta, every(1) double replace) : ttest Manga_1 == Manga_2, unpaired unequal
use "C:\Documents and Settings\carlo\Desktop\Manga_boot.dta", clear
g indicator=abs(_bs_1)>=abs(scalar(tobs))
sum indicator, mean
di "p_bootstrap =" r(mean)
--------------- code ends --------------------------------------------

For further details on bootstrap procedures, the mandatory reference is:

Efron B, Tibshirani RJ. An Introduction to the Bootstrap. New York: Chapman and Hall, 1993.

HTH and kind regards,
Carlo

-----Original message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On behalf of Cornelius Nattey
Sent: Tuesday, 31 August 2010 16:25
To: statalist@hsphsun2.harvard.edu
Subject: st: Test for statistical significance

Dear All,

I have two datasets of manganese blood levels taken in August and November 2005. They were from the same community, but mainly from different people in the community. There may have been some repeats, but I do not know how many, if there were any. I have treated them as independent rather than paired samples. What test should be applied to see whether there is a statistically significant difference? Is there a difference?

Thanks very much
Cornelius

The views expressed in this email are, unless otherwise stated, those of the author and not those of the National Health Laboratory Services or its management.
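For readers without Stata at hand, the same idea can be sketched in Python: compute the Welch t statistic, shift both samples to their common grand mean so the null hypothesis of equal means holds in the resampling population (the same centering the Stata code performs with r(mu_1) and r(mu_2)), resample with replacement, and take the proportion of bootstrap |t| values at least as large as the observed one. This is a hedged re-implementation, not Carlo's code; variable and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12345)

def welch_t(x, y):
    """Welch (unequal-variance) two-sample t statistic,
    analogous to -ttest ..., unpaired unequal- in Stata."""
    nx, ny = len(x), len(y)
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1)/nx + y.var(ddof=1)/ny)

def bootstrap_p(x, y, reps=10000):
    """Two-sided bootstrap p-value for H0: equal means.
    Both samples are recentered on the grand mean (mu_1 + mu_2)/2,
    mirroring the -replace- lines in the Stata code, so that the
    null hypothesis is true in the bootstrap population."""
    t_obs = welch_t(x, y)
    grand = (x.mean() + y.mean()) / 2
    xs = x - x.mean() + grand
    ys = y - y.mean() + grand
    t_boot = np.empty(reps)
    for b in range(reps):
        # resample each (recentered) group with replacement
        t_boot[b] = welch_t(rng.choice(xs, len(xs)), rng.choice(ys, len(ys)))
    return np.mean(np.abs(t_boot) >= abs(t_obs))

# toy data analogous to the uniform() draws in the Stata example
x = rng.uniform(size=100)
y = rng.uniform(size=100)
p = bootstrap_p(x, y)
print("p_bootstrap =", p)
```

Since both toy samples are drawn from the same distribution, the resulting p-value will usually be unremarkable; with real manganese measurements the two vectors would simply replace x and y.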
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/

**References**:
- **st: Test for statistical significance**
  - *From:* "Cornelius Nattey" <cornelius.nattey@nioh.nhls.ac.za>
