



Re: st: NRC rankings


From   Stas Kolenikov <skolenik@gmail.com>
To   statalist@hsphsun2.harvard.edu
Subject   Re: st: NRC rankings
Date   Thu, 14 Oct 2010 10:26:19 -0500

On Thu, Oct 14, 2010 at 9:35 AM, Nick Cox <n.j.cox@durham.ac.uk> wrote:
> My point, part facetious and part utterly serious, was just another reminder that this is an international list and that parochial details need explanation. I doubt that many list members inside the USA do _not_ know what you meant by NRC -- now explained -- but some outside may well not know.

OK, more background then, for those who are interested.

The National Research Council is an organization that monitors what's
going on in science, engineering, technology and health in the United
States, in order to provide science-based advice to the upper echelons
of the US government. A small part of that work is producing rankings of
US university departments, comparing the quality and characteristics of
their doctorate programs. This was a five-year project, started in 2005.
The previous rankings were released around 1995, and Joe Newton's page
(I am not going to explain who Joe Newton is, if that's OK with NJC) is
probably the most informative source
currently on the web regarding them. So the release of new rankings 15
years later is quite a big deal for US academia, and of course most
academicians are unhappy about the data, the methodology and the
results. The rankings were based on twenty-something variables, most of
which, I believe, can be found online. This time, the rankings came out
extremely ambiguous, as there is no single "score" or
1-2-3-...-999-1000 ranking (with such a ranking, of course, everybody
would understand that there is a top 5, a top 20, a top 100, and then
the rest of the programs, rather than reading significant differences
into programs ranked 43 and 44). On top of that, they added a measure
of uncertainty via half-sample simulations:

postfile topost int(simulation rankingAlabama rankingArizona ... ///
    rankingWyoming rankingYale) using simrankings
forvalues r = 1/500 {
    preserve
    sample 50                      // keep a random half of the programs
    ComeUpWithWeights              // pseudocode: derive the variable weights
    restore
    ApplyWeightsToRankEverybody    // pseudocode: rank the full census
    post topost (`r') (ranking results)   // pseudocode: one rank per program
}
postclose topost
use simrankings, clear
reshape long ranking, i(simulation) j(institution_name) string
bysort institution_name (ranking) : generate lowrank = ranking[ceil(_N*0.95)]
bysort institution_name (ranking) : generate highrank = ranking[ceil(_N*0.05)]

I would say this procedure provides inference with respect to the wrong
probability space. It addresses sampling variability, but here we are
supposed to have a census of programs, not a sample. It is the
measurement error in the rankings that should have been addressed
instead.
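For what it's worth, none of us outside the committee knows exactly how
the weights were derived; but the general half-sample machinery sketched
above can be illustrated, say, in Python. Everything here is my own toy
construction (the function name, the inverse-spread weighting scheme,
the data layout), not the NRC's actual procedure:

```python
import random
import statistics

def half_sample_ranks(scores, n_reps=500, seed=1):
    """Toy half-sample ranking simulation.

    scores: dict mapping program name -> list of indicator values
            (one entry per ranking variable).
    Returns dict mapping program -> (5th, 95th) percentile of its
    simulated rank, analogous to highrank/lowrank in the Stata sketch.
    """
    random.seed(seed)
    programs = sorted(scores)
    n_vars = len(next(iter(scores.values())))
    ranks = {p: [] for p in programs}
    for _ in range(n_reps):
        # "ComeUpWithWeights": derive weights from a random half sample.
        # Toy scheme: weight each variable by the inverse of its spread
        # within the half sample (purely illustrative).
        half = random.sample(programs, len(programs) // 2)
        weights = []
        for j in range(n_vars):
            col = [scores[p][j] for p in half]
            sd = statistics.pstdev(col)
            weights.append(1.0 / sd if sd > 0 else 1.0)
        # "ApplyWeightsToRankEverybody": score and rank the full census.
        composite = {p: sum(w * v for w, v in zip(weights, scores[p]))
                     for p in programs}
        ordered = sorted(programs, key=lambda p: -composite[p])
        for r, p in enumerate(ordered, start=1):
            ranks[p].append(r)
    # Percentile bounds of each program's rank distribution.
    bounds = {}
    for p in programs:
        rs = sorted(ranks[p])
        lo = rs[max(0, int(0.05 * len(rs)) - 1)]
        hi = rs[min(len(rs) - 1, int(0.95 * len(rs)) - 1)]
        bounds[p] = (lo, hi)
    return bounds
```

The 5th and 95th percentiles of each program's simulated ranks play the
same role as the highrank/lowrank variables in the Stata sketch: an
interval of ranks rather than a single number.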

The statisticians on the committee were from the National Opinion
Research Center and the Educational Testing Service, which are,
respectively, one of the three largest survey organizations and the
largest provider of standardized tests. I guess the idea of using
half-samples came from ETS: as a near-monopolist in its market, it
often takes the liberty of inventing new methodology with unknown
properties.

Complaints about flaws in both the methodology and the source data
began pouring in. In statistics in particular, we already see odd
coverage of biostatistics programs, and we have not even begun looking
at the guts of the procedure.

As I said, there are many other ranking systems floating around.
Marcello Pagano mentioned the US News and World Report rankings of US
institutions, which are, to my understanding, designed to reflect the
desirability of the graduates to employers in the real world. I
mentioned the British RAE, the Research Assessment Exercise, which aims
at establishing quality rankings of departments, to which public
funding of research is tied. Another British ranking is the one by The
Times Higher Education, which ranks universities across the globe.
Finally, I mentioned the Chinese ranking, whose full title is the
Academic Ranking of World Universities. My understanding is that this
ranking tracks the prestige of universities around the globe in the
natural sciences, and that its internal aim was to see whether Chinese
universities are internationally competitive.

Some disciplines have their own internal rankings. Economics does: once
every five to ten years, papers appear in the general-interest journals
describing the publication performance of various departments. I think
the most recent and best known one is Tom Coupe's ranking,
http://ftp.vub.ac.be/~tcoupe/ranking.html, covering 1990-2000. There
has also been another College Station, TX, inspired ranking (on top of
Joe Newton's work, I mean) by Badi Baltagi, then on the Texas A&M
economics faculty, who provided rankings of econometrics programs and
individual faculty members, also about ten years ago.

-- 
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

