Don’t take too much notice of university rankings, they’re flawed

The problem now is universities require their faculty to publish in a way that enhances their ranking


Asghar Qadir | July 22, 2016

For the last 20 years or so there has been an increasing emphasis, across countries, on using numerical indicators to measure academic excellence. At first these indicators seemed innocuous and even marginally useful, but only if used cautiously.

Initially, the Journal Citation Reports (JCR) of the Institute for Scientific Information (ISI) came with a warning that the indices it gave – mainly the so-called Impact Factor (IF), a measure of the yearly average number of citations to recent articles published in a journal – were meant to help libraries select journals and should not be used to replace peer review in evaluating scientists. However, over time, the practice of using these indices to evaluate not only journals but also scientists and their work has gained popularity in developing nations. The ISI stopped issuing the warning and instead encouraged the use of these indices for all evaluation purposes. Now these indices have become a serious menace in Pakistan and other developing countries.
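For readers who want the arithmetic: the standard two-year IF is just a ratio of citations to citable items. A minimal sketch in Python, with made-up numbers for a hypothetical journal (the figures are illustrative, not drawn from any real journal):

    # Two-year Impact Factor, per the standard JCR definition:
    # citations received this year to articles from the previous
    # two years, divided by the citable items published in them.
    def impact_factor(citations_to_prev_two_years, citable_items):
        return citations_to_prev_two_years / citable_items

    # Hypothetical journal: 2,400 citations in 2016 to its
    # 800 articles from 2014-15 gives an IF of 3.0.
    print(impact_factor(2400, 800))  # 3.0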


While the method of calculating the indices remains the same everywhere, their impact in developing countries is often negative. These indicators are supposed to 'measure quality' – an oxymoron, since quality is precisely what resists being reduced to a number. The choice of indices and the criteria for calculating them are arbitrary, but their numerical values lend them a misleading appearance of validity. Since they are very convenient for administrators defending promotion decisions and the like, they have come into increasing use. In Pakistan, the ISI indices and IFs have been written into policies for fresh appointments, promotions, awards and the evaluation of research projects. Universities, research institutes, departments and individuals are ranked on the basis of these indices. So the question is: if the indices are not intrinsically useful, why are they used in developed nations?

The use of citations: First World vs Third World

At the European Organisation for Nuclear Research (CERN), citations are used for shortlisting candidates for post-doctoral positions. From hundreds of applicants they shortlist 30, who are then divided into six groups, and one candidate from each group is selected using citations. But they then rely on peer review and interviews for the final selection of the six scientists. Now compare this with developing nations, where final decisions on the appointment of even professors, or the giving of awards, are made on the basis of the IF. Since the indices make a serious difference to the careers of a large number of scientists, the decision of where to publish research is distorted: instead of choosing the most relevant journal, there is a tendency to choose the one with the highest IF. This has led to competition among journals to raise their IFs. Since a high rejection rate is widely treated as a marker of a high-IF journal, Physica Scripta, for instance, has a policy of maintaining a target rejection rate, which distorts editorial decisions. To adapt Tennyson, journals 'red in tooth and claw' are struggling for survival, and this distortion is compounded by the requirement that articles should be in one of the 'fashionable' areas. Thus, serious new lines of work may not be published unless they are authored by people whose fame will ensure citation, blocking the way for important ideas from newcomers.

A cycle of corruption

The importance of indices for careers has distorted not only the behaviour of journals and the choice of vehicle used to communicate research results, but also the behaviour of scientists themselves. To rise in status, a scientist needs a high rate of citations. Thus, unscrupulous scientists (of whom there are many, locally and globally) become editors of journals and recommend publication of only those papers that cite them. The least ethical scientists cite them and rise in status in turn. In the process, not only the original editors but also the unethical citers rise, and become editors themselves. This non-linear positive feedback selects for the most unethical in the new jungle of academia.
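The dynamics of this feedback loop are easy to caricature. Here is a toy model (an illustration under invented assumptions, not a claim about any real scientist): an honest researcher accrues citations steadily on merit, while a citation-trading editor also extracts citations in proportion to the influence he already holds, so his count compounds:

    # Toy model of the positive feedback described above.
    honest, cartel = 0, 0
    for year in range(1, 11):
        honest += 10                 # steady citations earned on merit
        cartel += 10 + cartel // 2   # merit plus citations extracted
                                     # from authors seeking acceptance
        print(year, honest, cartel)
    # After 10 years the honest researcher has 100 citations,
    # while the editor's compounding count exceeds 1,100.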


We again need to explain why these indices seem to make sense in the West but become sources of disaster in developing countries. For this purpose I refer to a different index, based on publication in a list of 68 journals selected by the prestigious Nature Publishing Group. For Pakistan, whose total count is 20.6, it puts the COMSATS Institute of Information Technology (CIIT) first with an index of 4.9 and Punjab University second with an index of three. The universities tout this as demonstrating the excellence of their research, using it to claim extra funding and attract more students.

Now, look at the top country in the global list, the United States. Its total index is 16,535, and the top 10 universities have indices ranging from 259 to 759. Next is China with 6,305, where the indices of the top 10 institutions range from 106 to 1,311. I stress the numbers because the scatter of values about the mean, which determines their statistical significance, is given by the square root of the count (for the Poisson distribution relevant to such counts). Thus, for the US the scatter would be 129 and for China 79. All the top institutions have counts much higher than the scatter and can thus be regarded as (statistically) significantly better than the others. Nevertheless, in the US there is little to choose between the tenth and the fourth on the basis of the scatter, and in China between the tenth and the sixth. By the twentieth country in the list, the ranking of only four to six of the institutions is significant; for the thirtieth, only the top two or three; and for the fortieth, only one or two. When we come to Pakistan, at number 44, the index of the top university is just about the value of the scatter and hence has no statistical significance. In the ex-Soviet countries the top slot invariably goes to the Academy of Sciences, as it aggregates all university research; this is why Serbia, at number 50 (the bottom of the list), still shows one significant institution. Elsewhere, the ranking depends on how institutions are lumped together or merged into a university.
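The arithmetic is easy to check. Here is a minimal sketch in Python using the figures quoted above (a back-of-the-envelope illustration, not the index's own methodology):

    from math import sqrt

    # (country, national total, index of its top institution)
    for country, total, top in [("United States", 16535, 759),
                                ("China", 6305, 1311),
                                ("Pakistan", 20.6, 4.9)]:
        scatter = sqrt(total)  # Poisson fluctuation of the count
        print(f"{country}: scatter ~ {scatter:.1f}, "
              f"top index = {top}, ratio = {top / scatter:.1f}")

    # United States: scatter ~ 128.6, top index = 759, ratio = 5.9
    # China: scatter ~ 79.4, top index = 1311, ratio = 16.5
    # Pakistan: scatter ~ 4.5, top index = 4.9, ratio = 1.1
    # A ratio near 1 means the 'top' ranking is indistinguishable
    # from statistical noise.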

The problem now is that universities require their faculty to publish in ways that enhance the university's ranking. This will crush the spirit of free enquiry and favour the selection of the most unethical members: those who manage to publish trash, or scientists who happen to be one among hundreds of authors collectively publishing many papers in good journals.

When only fluctuations are being measured, ranking becomes an exercise in counting each strand of hair on sporadically shaved bald heads to determine who is less bald. With this approach, no new ideas are likely to emerge from developing nations and we will be doomed to follow western leadership forever.

Dr Asghar Qadir is a professor emeritus at NUST, a distinguished national professor and a fellow of the Pakistan Academy of Sciences.

 
