PhD program rankings
The United States National Research Council has conducted a survey and compiled a report on United States Research-Doctorate Programs approximately every 10 years, although the interval between successive rankings has sometimes exceeded 10 years.
Data collection for the most recent report began in June 2006;[1] it was released on September 28, 2010. These rankings did not provide exact ranks for any university or doctoral program; rather, a statistical range was given. This was because "the committee felt strongly that assigning to each program a single number and ranking them accordingly would be misleading, since there are significant uncertainties and variability in any ranking process."[2]
Two series of rankings were offered: the R ("regression-based") rankings, in which the weights given to program characteristics were derived statistically from faculty members' reputational ratings of a sample of programs, and the S ("survey-based") rankings, in which the weights were derived from faculty members' stated views on the importance of those characteristics.
The factors used in these computations included[4] the number of publications per faculty member, citations per publication (except in computer science and the humanities), the fraction of the faculty supported by grants and the number of grants per faculty member, diversity of the faculty and students, student GRE scores, graduate student funding, the number of Ph.D.s awarded and the completion percentage, time to degree, the academic plans of graduating students, student work space, student health insurance, and student activities.
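The report's practice of assigning each program a range of ranks rather than a single rank can be illustrated with a short sketch. The example below is a minimal illustration only, not the NRC's actual methodology: the programs, factor scores, weights, perturbation scale, and the 5th to 95th percentile cut-offs are all invented assumptions. It forms a weighted composite of standardized factor scores, repeatedly perturbs the weights to reflect uncertainty in how raters value each factor, and reports the resulting range of ranks for each program.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized factor scores (rows: programs, columns: factors).
# All values here are invented for illustration; they are not NRC data.
programs = ["A", "B", "C", "D"]
scores = rng.normal(size=(len(programs), 5))

# Assumed nominal importance weights for the five factors (not the NRC's weights).
base_weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

n_draws = 1000
ranks = np.empty((n_draws, len(programs)), dtype=int)
for i in range(n_draws):
    # Perturb the weights to represent uncertainty in how raters value each factor.
    w = np.clip(base_weights + rng.normal(scale=0.05, size=base_weights.size), 0, None)
    w /= w.sum()
    composite = scores @ w              # weighted composite score per program
    order = composite.argsort()[::-1]   # best program first
    ranks[i, order] = np.arange(1, len(programs) + 1)

# Report a rank range (5th to 95th percentile) instead of a single rank.
for j, name in enumerate(programs):
    lo, hi = np.percentile(ranks[:, j], [5, 95])
    print(f"Program {name}: rank range {int(lo)}-{int(hi)}")
```

Because each draw uses a slightly different weighting, a program's rank varies from draw to draw; reporting a percentile range conveys that variability rather than implying a precision the underlying data cannot support.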
The rankings have been both praised and criticized by academics.
Physicist Peter Woit stated that historically the NRC rankings have been the "gold standard" for academic department ratings.[5] The rankings were also called "the gold standard" by biomedical engineer John M. Tarbell[6] and in news releases by Cornell University[7] and the University of California.[8] The Center for a Public Anthropology praised the National Research Council's 2010 rankings as "an impressive achievement" for its move away from reputational rankings and toward data-based rankings, but also noted that the lack of specific rankings reduced clarity even as it improved accuracy.[9] William Colglazier and Jeremiah P. Ostriker defended the rankings in the Chronicle of Higher Education,[10] responding to a critique by Stephen M. Stigler.[11]
Sociologist Jonathan R. Cole, one of the members of the NRC committee that produced the ranking, critiqued the final result. Cole objected to the committee's choice not to include any "measures of reputational standing or perceived quality" in the survey, which he called "the most significant misguided decision" in the recent study. Cole also critiqued the various statistical inputs and the weight assigned to each.[12] The Computing Research Association and various computer science departments also expressed "serious concerns" about vaguely defined reporting terms leading to inconsistent data, inaccuracies in the data, and the use of bibliometrics from the ISI Web of Knowledge despite its poor coverage of many computer science conferences.[13][14][15][16][17] Geographers A. Shortridge, K. Goldsberry, and K. Weessies found significant undercounts in the data and poor sensitivity to "noise" in the rankings, concluding that "We caution against using the 2010 NRC data or metrics for any assessment-oriented study of research productivity."[18] The rankings were also critiqued by sociologist Fabio Rojas.[19]