The problem with university rankings

The HEC gave 40 per cent of its ranking points to research, rewarding the universities that played its game during the 2000s.

Nearly 10 years ago, thanks to an extraordinary pile of rejection letters, I had to decide between the two universities that were willing to tolerate my presence. The choice should have been an easy one; the University of Chicago had a greater reputation for academic rigour and outranked Northwestern University by any conceivable metric. Yet, for some ineffable reason, I opted for the latter. At the time it seemed like the most important decision I would ever have to make. Now, the place from which I received my undergraduate education barely merits a single line on my resume.

As the brouhaha over the Higher Education Commission’s (HEC) rankings of Pakistani universities continues, it is worth keeping in mind that the university one attends plays only a small part in his or her future. But since rankings are now in vogue, they should at least be done right, and that is where the HEC falters.

The biggest problem with the HEC’s rankings is that they seem less like an attempt to rank the best universities in the country and more like an after-the-fact rationalisation of the commission’s misguided policies. For some reason, during the Musharraf era, the HEC decided that what the country desperately needed was lots of people getting doctorates, no matter how they were obtained, and a glut of research papers, no matter their quality. As with everything else from that era, this was a policy that seemed smart until you actually looked at it closely. The results were predictable. A rash of research papers that no one has ever read and no one will ever cite was published in disreputable journals. Fake PhDs and plagiarism proliferated. Now, the HEC has given a full 40 per cent of its ranking points to research, thereby giving a higher rating to the very universities that played the HEC’s game during the 2000s.

Even if the HEC had not given such inordinate attention to research, its rankings would still be severely limited. A closer look at the HEC’s ranking system shows that many important factors are missing. We learn nothing, for instance, about the students at these universities, only about those who teach them. Knowing what percentage of students graduate from a university and how many of those graduates find employment should be part of the basic minimum criteria for any university ranking. Equally useful would be information on how many students are enrolled on scholarships, since one of the prime functions of a university should be to facilitate social mobility. In the HEC’s defence, it is possible that universities do not collect such data, but in that case, it should have held off publishing rankings that are at best misleading and at worst completely useless.

It was also disheartening to note that the chairperson of the HEC, Dr Javaid Laghari, in a column published in this paper titled “Ranking universities” (March 1), defended the HEC rankings, rubbishing the ‘pop culture’, whatever that is, on campuses and saying that “organising musical evenings, marketing shows, career placements fairs, workshops, guest lectures, model UN, etc does not aid in the global ranking of a university.” A university, it should go without saying, is not just a degree mill, and the non-academic facilities it provides should be part of its reputation.

The truth is that no one set of rankings would satisfy everyone. Rankings are inherently gimmicky, designed to provoke argument rather than provide illumination. Why the country’s higher education regulator would decide to enter this debate, and in such a lacklustre manner at that, is a question about as inexplicable as the rankings themselves.

Published in The Express Tribune, March 9th, 2012.