Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
Re: st: sign test output
Nick Cox <firstname.lastname@example.org>
Re: st: sign test output
Thu, 17 Jan 2013 14:22:50 +0000
I see no irrationality here.
Different tests answer different questions, but it can still be
interesting and useful to check if they agree. If a t test and a
Mann-Whitney-Wilcoxon test give similar results, then we have
indications that the level of a variable is similar (different) in two
situations, and that's so whether you summarise level by mean or by
median and whether you make stronger assumptions or weaker assumptions
about the generating process. As in a court of law, if two different
witnesses agree, that is better than having just one witness.
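To make that comparison concrete, here is a minimal Wilcoxon rank-sum (Mann-Whitney) sketch in Python. This is an illustration only, not Stata's -ranksum-: it uses the large-sample normal approximation, applies no tie correction, and assumes all values are distinct.

```python
from statistics import NormalDist

def ranksum(x, y):
    """Wilcoxon rank-sum (Mann-Whitney) test via the large-sample
    normal approximation. No tie correction; assumes distinct values."""
    pooled = sorted(x + y)
    # Sum of 1-based ranks of the x sample within the pooled sample
    r = sum(pooled.index(v) + 1 for v in x)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2             # E[rank sum] under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (r - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p
```

Run alongside a t test on the same two groups, agreement (or disagreement) of the two p-values is exactly the kind of corroboration described above.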
I mention Mann-W-W here because I am guessing that it is likely to be
more useful to you than a sign test; using a sign test is typically a
gesture of minimal confidence in the quality of the data.
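For reference, the sign test itself reduces to an exact binomial computation on the signs of paired differences. A minimal Python sketch (an illustration only; in Stata the command is -signtest-):

```python
from math import comb

def sign_test(diffs):
    """Exact two-sided sign test on paired differences.

    Counts positive and negative differences (zeros are dropped) and
    computes the exact two-sided binomial p-value under
    H0: P(positive difference) = 0.5.
    """
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    n = pos + neg
    k = min(pos, neg)
    # Two-sided p: twice the tail probability of a count as extreme as k
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return pos, neg, min(p, 1.0)

# Example: 8 positive and 2 negative differences
pos, neg, p = sign_test([1, 2, 3, 4, 5, 6, 7, 8, -1, -2])
print(pos, neg, round(p, 4))  # 8 2 0.1094
```

Note that only the signs enter the calculation, which is why the test makes such weak demands on the data, and why it discards so much information.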
It's a long, hard road, but experience teaches you to "sit loose" to
these tests and never to treat them as oracles transmitting truths.
There would be precisely one correct test if and only if the
statistical question corresponded exactly with the research question
and the researcher had total confidence that the associated
assumptions corresponded exactly with what's happened to generate the
real data, but I have never met that so far. Literature that implies
otherwise is either naive or disingenuous, even though there is plenty
of it.
On Thu, Jan 17, 2013 at 1:13 PM, Nahla Betelmal <email@example.com> wrote:
> Again, thank you both for your comments.
> However, if normality tests are useful only for huge samples, as
> Maarten mentioned, how can we determine which test (i.e. parametric
> or non-parametric) to use for a smaller sample in the hundreds?
> I personally think it is irrational to run both a t test and a sign
> test on the same sample and hope they both produce the same
> conclusion! And what if they don't?
> I will follow Nick's advice to look deeper into the data, but I still
> believe that there must be another way to give an obvious solution to
> this situation.
> Thank you both again, I highly appreciate your kind help and time,
> On 17 January 2013 12:22, Nick Cox <firstname.lastname@example.org> wrote:
>> The row boat [English English: rowing boat] joke is at least as old as
>> a comment in
>> Box, G. E. P. 1953. Non-normality and tests on variances. Biometrika 40: 318-335
>> which is otherwise germane to the discussion in several ways, not
>> least in introducing the term "robustness".
>> On Thu, Jan 17, 2013 at 12:14 PM, Maarten Buis <email@example.com> wrote:
>>> On Thu, Jan 17, 2013 at 11:21 AM, Nahla Betelmal wrote:
>>>> from my readings in statistics, I know that in order to decide
>>>> whether to use parametric or non-parametric tests, the normality
>>>> of the data should be checked first.
>>>> Shapiro-Wilk is used to test normality when the number of
>>>> observations is less than 30. Otherwise, we should use
>>>> Kolmogorov-Smirnov for large samples (as in my sample).
>>> Unfortunately that is incorrect. Normality tests need huge samples
>>> before the p-value means what it is supposed to mean. An analogy I
>>> have heard in a different context, but which applies very well to
>>> this situation: going out to sea in a row boat to check whether the
>>> sea is safe for the QE II.
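There is an extra wrinkle with the Kolmogorov-Smirnov route that reinforces Maarten's warning: when the mean and standard deviation are estimated from the same sample being tested, the standard KS p-value is no longer valid (the Lilliefors problem). A minimal Python sketch of the KS distance itself (illustration only; it returns the statistic, not a p-value):

```python
from statistics import NormalDist, mean, stdev

def ks_normal_stat(data):
    """Kolmogorov-Smirnov distance between the empirical CDF and a
    normal fitted by the sample mean and sd.

    Caution: because the parameters are estimated from the same data,
    the usual KS p-value does not apply (the Lilliefors problem).
    """
    nd = NormalDist(mean(data), stdev(data))
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = nd.cdf(x)
        # Compare the fitted CDF to the ECDF just before and at x
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d
```

The statistic is easy; it is attaching a trustworthy p-value to it at realistic sample sizes that is the hard part, which is the rowing boat again.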