Statalist The Stata Listserver


Re: Re: st: RE: statistical question: best summary measure for a 5-point Likert scale

From   "Clyde Schechter" <>
Subject   Re: Re: st: RE: statistical question: best summary measure for a 5-point Likert scale
Date   Thu, 15 Jun 2006 10:36:59 -0400 (EDT)

Regarding the use of the term Likert scale, I was taught (though I have
never bothered to check whether it may be apocryphal) that Likert's
contribution was to verify, or perhaps merely assert that when assessing
opinions, the use of a 5-point scale, *specifically labelled Strongly
Agree, Agree, Neither Agree nor Disagree, Disagree, and Strongly
Disagree*, provides a response scale that is "psychologically evenly
spaced."  If
that claim is correct, then the use of means and other statistics suitable
for interval-level measures would be appropriate.

Not being a psychologist, nor having ever looked into the matter myself, I
don't know what evidence supports that claim, nor, frankly, can I imagine
how one would adduce evidence for or against that proposition.  Be that as
it may, this is what I was taught.  I was, at the same time, told that the
use of the term "Likert scale" to refer to any other response set (except
an analogously constructed 7-point scale) is a misuse of the term.

As for the statistical issues regarding grading, reducing any variable
measured with error to a small number of categories necessarily entails
discarding information and some misclassification will result.  The more
misclassification, the less useful the categorization is.  The situation
is even worse when the actual range of abilities being measured is rather
narrow (as should be the case with highly selected groups such as medical
students): in that case the standard error of measurement may be very
large relative to the standard deviation of true ability.  This will
result in classifications that are more or less meaningless.
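To make that concrete, here is a minimal, hypothetical Python simulation
(not from the thread; the cutpoints, sample size, and seed are all
illustrative).  It bins true and observed scores into quartiles and counts
how often a subject's observed category differs from the category of their
true score, first for a wide-ability group and then for a highly selected
group with the same measurement error:

```python
import random

random.seed(12345)

def misclassification_rate(true_sd, sem, n=20000, cuts=(-0.6745, 0, 0.6745)):
    """Fraction of subjects whose observed category (quartile cutpoints on
    the standardized true-score scale) differs from their true category."""
    def category(x):
        z = x / true_sd  # standardize against the true-score scale
        return sum(z > c for c in cuts)
    wrong = 0
    for _ in range(n):
        t = random.gauss(0, true_sd)          # true ability
        obs = t + random.gauss(0, sem)        # observed score with error
        if category(t) != category(obs):
            wrong += 1
    return wrong / n

# Wide range of true ability, modest error: categories are fairly stable.
wide = misclassification_rate(true_sd=1.0, sem=0.3)
# Highly selected group: narrow true range, same standard error of measurement.
narrow = misclassification_rate(true_sd=0.3, sem=0.3)
print(f"wide range: {wide:.2f}, narrow range: {narrow:.2f}")
```

When the standard error of measurement equals the true-score standard
deviation, the assigned category disagrees with the true one far more often
than in the wide-range case, which is the sense in which the
classifications become more or less meaningless.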

But ultimately, this boils down to the quality of the measure itself, and
not whether it is based on a 5-point ordinal scale.  That, in turn,
depends on the wording of the items inquiring about each competency, and
the skill and training of the raters.  Indeed, with rulers made of rigid
material, but calibrated only to the inch, and with a sufficient number of
well trained, unbiased observers, one could in fact measure the length of
a line segment to the nearest 0.01 inch.  On the other hand, rubber rulers
calibrated to 0.0001 inch and sloppy observers will scarcely get you to an
accuracy of 0.5 inch!
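The rigid-ruler half of that claim can be checked with a short, hypothetical
simulation (the length, noise level, and observer count are my own
illustrative choices).  Each observer's reading variability acts as natural
"dither": individual readings are only whole inches, but the unbiased
errors average out:

```python
import random

random.seed(2006)

TRUE_LENGTH = 7.3  # inches (hypothetical line segment)

def one_reading():
    # A rigid ruler marked only to the nearest inch: the observer's
    # unbiased reading error (~0.5 inch) is rounded to a whole inch.
    return round(TRUE_LENGTH + random.gauss(0, 0.5))

n = 10_000
mean_reading = sum(one_reading() for _ in range(n)) / n
print(f"average of {n} inch-resolution readings: {mean_reading:.3f}")
# The average lands within a few hundredths of an inch of the true
# length, though no single reading is finer than a whole inch.
```

A rubber ruler, by contrast, introduces bias rather than noise, and no
amount of averaging removes a bias shared by all the readings.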

My experience is that in the medical school context, the quality of the
rating instruments is poor: criteria are often vaguely, inconsistently, or
confusingly stated.  And typically the raters are offered little or no
training in how to evaluate student performance.  So I would tend to agree
with the conclusion that the grading process creates "distinctions without
a difference" (or, otherwise put, is more of a lottery than an
evaluation), but for different reasons than those offered earlier in this
thread.

An entirely separate issue is the legitimacy of averaging across
competencies.  To the extent that evaluation is intended to support
feedback for improvement, this, or any type of summary measure, is not
useful: one needs to learn where one's strengths and weaknesses are.

On the other hand, if the "competencies" being measured are strongly
correlated with each other, then the average (or, better, an appropriately
weighted average) may be a better measure of their common content than any
of the individual measures.  If that common content is, indeed, at the
core of being a good physician (or whatever), then basing evaluation on
that average would be sensible.  But, once again, the issue here is not
about choice of particular calculations so much as the substance of just
what is being evaluated, and the interrelationships, if any, among the
competencies being measured.
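The point about the average tracking the common content can be illustrated
with another hypothetical simulation (six "competencies" sharing one latent
trait; all parameters are illustrative, not from the thread):

```python
import math
import random

random.seed(42)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

n, k = 5000, 6
latent = [random.gauss(0, 1) for _ in range(n)]
# Six competency ratings, each = common trait + independent rating noise.
ratings = [[t + random.gauss(0, 1) for t in latent] for _ in range(k)]
averages = [sum(r[i] for r in ratings) / k for i in range(n)]

single = corr(ratings[0], latent)  # one competency vs. the trait
pooled = corr(averages, latent)    # average of all six vs. the trait
print(f"single item r = {single:.2f}, average of {k} items r = {pooled:.2f}")
```

The average correlates with the latent trait considerably better than any
single rating does, which is exactly the case in which basing evaluation on
the average (or a suitably weighted average) is sensible.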

Clyde Schechter
Associate Professor of Family & Social Medicine
Albert Einstein College of Medicine
Bronx, NY, USA
