Nobody likes, everybody uses: university ranking I

A recurring theme in our complex world pertains to the question of whether it is possible to summarize the performance of an organization faithfully with a single score. Universities, colleges and schools are complex social organizations that serve a variety of purposes, and measuring their performance is obviously delicate. What does it really mean if we say that this university is 27th and the other one in the state is 42nd? How do these numbers influence the big stakeholders of the college ranking game: students and their parents, admissions offices, and college administrators?
While university ranking became an obsession in this century, it is not a strong exaggeration to state that everybody criticizes and simultaneously uses rankings. Ranking is and remains with us, so the best thing we can do is to understand the rules of the game. We should remember the lesson hopefully learned so far: ranking reflects a mixture of reality and the illusion of objectivity, and it is also subject to manipulation.

A little history

While our obsession with ranking is relatively new, there are early precursors of university ranking. In an isolated, pioneering work published in 1863, a Czech professor of the Prague Polytechnical Institute, Carl Koristka, analyzed and compared the technical universities of the leading European countries118. The university known today as the Karlsruhe Institute of Technology, one of Germany's leading engineering schools, had the largest number of students (about 800) and 50 professors. If we can believe the numbers, the students/faculty ratio has since been reduced from sixteen to about five, since nowadays the 25,000 students are served by 6,000 academic staff. It is interesting to see that while in Karlsruhe the share of foreign students was about sixty percent, Berlin had only two (!!) percent (seven out of 374). Among the institutions for which Koristka found reliable data, the students/faculty ratio ranged between eight and eighteen. (Koristka himself did not use the students/faculty ratio; probably it was not in the focus of attention.)

James McKeen Cattell (1860-1944) was a pioneering professor in the United States who contributed very much to the transformation of psychology from pseudoscience to legitimate science by adopting both experimental and quantitative methods. He was motivated, among others, by Francis Galton, who, as we remember, liked to count and measure everything. Cattell was inspired to study distinguished men of science. He requested a number of competent men in each field to rate their colleagues, or more precisely to denote their excellence with stars. Institutions, characterized by the ratio of starred scientists to the total number of faculty, were then ranked. Cattell's aim was to provide help both to potential students and to the institutions. The first edition of the American Men of Science was published in 1906, and the seventh in 1944119. Cattell's approach suggested that the quality of universities can be measured by the number of excellent faculty, and it determined our way of thinking about university ranking. The importance of "distinguished persons" in the ranking procedure ensured the priority of the older private institutions over the newer public universities. Other early ranking systems added several more criteria. Graduate success in later life is an output measure of teaching quality, while the student/faculty ratio and the number of volumes in the library are input measures of resources120.
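Cattell's ratio-based procedure can be sketched in a few lines of code. The institution names and numbers below are invented for illustration; only the ranking rule itself (order institutions by the share of starred scientists among the faculty) comes from the text.

```python
# Cattell-style ranking: institutions ordered by the ratio of "starred"
# (distinguished) scientists to total faculty. All names and numbers
# below are hypothetical.
institutions = {
    "Old Private College": {"starred": 12, "faculty": 120},   # ratio 0.10
    "New Public University": {"starred": 15, "faculty": 300}, # ratio 0.05
    "Small Institute": {"starred": 6, "faculty": 40},         # ratio 0.15
}

def starred_ratio(name: str) -> float:
    data = institutions[name]
    return data["starred"] / data["faculty"]

# A higher share of distinguished faculty means a better rank.
ranking = sorted(institutions, key=starred_ratio, reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {starred_ratio(name):.2f}")
# -> 1. Small Institute: 0.15
#    2. Old Private College: 0.10
#    3. New Public University: 0.05
```

Note how the ratio, not the absolute headcount of stars, drives the order: the large public university with the most starred scientists still ranks last, which mirrors the bias toward smaller, older institutions mentioned above.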
Symbolically, our modern obsession with university ranking is represented by the appearance of the US News & World Report (USNWR) rankings in 1983. Mass media entered the scene. USNWR simultaneously wished to provide accessible information for students and parents and to increase the visibility and revenue of the magazine. Soon it turned out that the ranking is a measure of reputation, and college administrators (not necessarily admittedly) made rising in the U.S. News ranking an explicit target. The reputation race shifted into high gear.
Now USNWR discriminates between ranks for best quality as well as for best value. USNWR calculates best value by weighting quality at 60 percent of the overall score; the percentage of students receiving need-based grants accounts for 25 percent; and the average discount accounts for 15 percent. USNWR changed its methodology in response to criticism, and now combines more objective input data (resources, entering student quality) with subjective aspects of reputation. However, it is difficult to compete in a race whose rules keep changing. While the US (and British) ranking systems were followed by the emergence of many national ranking systems, the race became much more exciting with the appearance of global rankings. The three most influential global rankings are those produced by Shanghai Ranking Consultancy (the Academic Ranking of World Universities; ARWU), Times Higher Education (THE), and Quacquarelli Symonds (QS). They measure research rather than teaching performance.
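The best-value composite described above is just a weighted sum. Here is a hedged sketch of that arithmetic, assuming the 60/25/15 weights from the text; the input values are hypothetical, and the real USNWR methodology involves additional normalization steps not shown here.

```python
# A sketch of a USNWR-style "best value" composite, using the weights
# described in the text: 60% overall quality score, 25% share of students
# receiving need-based grants, 15% average discount.
WEIGHTS = {"quality": 0.60, "need_based_share": 0.25, "avg_discount": 0.15}

def best_value(quality: float, need_based_share: float, avg_discount: float) -> float:
    """Combine three indicators (each scaled to 0-1) into one weighted score."""
    return (WEIGHTS["quality"] * quality
            + WEIGHTS["need_based_share"] * need_based_share
            + WEIGHTS["avg_discount"] * avg_discount)

# Hypothetical school: strong quality, moderate need-based aid and discount.
score = best_value(quality=0.85, need_based_share=0.40, avg_discount=0.30)
print(f"best-value score: {score:.3f}")  # -> best-value score: 0.655
```

Because quality carries 60 percent of the weight, two schools with identical aid profiles can still land far apart on the best-value list, which is one reason small methodological tweaks can reshuffle the ranks.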


One thought on “Nobody likes, everybody uses: university ranking I”

  1. A remarkable folk assessment of the science of the 21st century

    Dr. Peter Bruck Budapest, Hungary +36-20-914-6954


