In an interview with The News (June 9), the chairman of the Higher Education Commission (HEC), Dr Javaid Laghari, said that Pakistani universities would be ranked from July 15, 2011. He also said that the categories ‘W’, ‘X’, ‘Y’ and ‘Z’ used to classify universities would be reduced to just one, i.e. ‘W’. Since many people tend to confuse categories with rankings, he made it clear that categories refer to infrastructure only, while rank is determined by a number of criteria. The HEC has elaborated upon these criteria on its website. They include: the number of students at the MA, M.Phil and Ph.D levels, and how many of them got 60 per cent marks or above (17 points); finances, including income generated from the university’s own sources and the amount spent on the library (15 points); the number of books and journals, rankings in games, etc. (15 points); faculty, including the ratio of Ph.D to non-Ph.D faculty (27 points); and research, including Ph.Ds produced (26 points). The total adds up to 100, and the points obtained determine a university’s rank.
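The weighting described above can be sketched in a few lines of code. The criterion weights come from the HEC scheme as reported; the sample sub-scores for a hypothetical university are invented purely for illustration.

```python
# Illustrative sketch of the HEC-style 100-point scoring scheme.
# Weights are taken from the article; sub-scores are hypothetical.

WEIGHTS = {
    "students": 17,    # MA/M.Phil/Ph.D numbers and 60%+ results
    "finances": 15,    # own-source income, library spending
    "facilities": 15,  # books, journals, rankings in games, etc.
    "faculty": 27,     # ratio of Ph.D to non-Ph.D faculty
    "research": 26,    # Ph.Ds produced
}

def hec_score(subscores):
    """Combine per-criterion fractions (0.0 to 1.0) into a 100-point total."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# A hypothetical university scoring 60 per cent on every criterion:
example = {k: 0.6 for k in WEIGHTS}
print(round(hec_score(example), 2))  # 60.0 out of a possible 100
```

Note that the weights sum to exactly 100 (17 + 15 + 15 + 27 + 26), which is why the final figure can be read directly as a score out of 100.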
In the first such exercise, Quaid-i-Azam University topped the list of general universities with 58.16 points, Punjab University followed with 45.92 points and Karachi University was third with 42.01 points. In this list of 116 universities, the last was Jinnah University for Women with a score of 9.24.
Such ranking criteria are used all over the world, with reputation, student satisfaction, employability of graduates and other measures thrown in. These criteria vary a lot, which is why there is so much variation in the ranks of universities even in such well-known lists as the QS World University Rankings and the Times Higher Education World University Rankings. In the QS ranking, for instance, Cambridge University tops the world, but in the Times ranking it is Harvard; there, Cambridge is sixth and Oxford seventh.
The most serious drawback of such criteria, including those proposed by our HEC, is that they cannot distinguish between excellence and mediocrity. If a university does not produce Ph.Ds because of scruples about quality, it will get a lower score than a university which keeps piling up Ph.D after Ph.D without bothering about quality. And since it is easy to get good reports from foreign referees, sending theses to them is not a valid guarantee of quality. Similarly, while 60 per cent marks were hardly ever given in the humanities and social sciences about 40 years ago, nowadays average students get scores in the sixties. Substandard universities actually give higher marks than others to enable their students to compete better. Marks are therefore not a valid criterion unless the measure is drastically modified to take into account median scores and the differing marking norms of different subjects. Moreover, technical universities get lucrative projects from the corporate sector and, therefore, higher scores for productivity. They also tend to get more money per student for equipment, so it all adds up in their favour. In short, if evaluation is carried out, it should be based on research and its impact alone and not on any other criteria.
Such a project was developed in Australia in 2010: the High Impact Universities Performance Index. It was a pilot project involving, to begin with, 1,000 universities. Basically, the authors counted publications and the citations to them. The scores were normalised to allow for variation between subjects. A number of complex statistical calculations were made, and what emerged in the end was a rough indicator of how much research — research with impact on the scholarly community — a university produces. Since the measure relies on citations, one need not count Ph.D theses which gather dust on shelves because nobody cites them. Nor need one count the thousands of substandard books, articles and monographs produced by people who merely want to appear to publish in order to keep their academic jobs. We are then left with a measure, albeit a rough one, of the academic performance of universities. Of course, this, too, would be debatable, but it would be a more accurate indicator of a university’s performance as an academic institution than counting how many sports medals it has bagged.
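The normalisation idea can be illustrated with a generic sketch. This is not the actual methodology of the Australian index, whose statistical machinery is more elaborate; it only shows the basic principle that a paper’s citations are judged against the average for its field, so that a well-cited history paper is not swamped by physics papers. All figures and field baselines below are invented.

```python
# Generic sketch of field-normalised citation impact (not the actual
# High Impact Universities methodology). Each paper's citation count
# is divided by a baseline average for its field, e.g. a world average.

def normalised_impact(papers, field_avg):
    """papers: list of (field, citations) pairs for one university.
    field_avg: baseline average citations per paper in each field."""
    return sum(cites / field_avg[field] for field, cites in papers)

# Hypothetical data: a physics paper cited 40 times against a field
# average of 20, and a history paper cited 4 times against an average
# of 2. Each is twice its field average, so both count equally.
papers = [("physics", 40), ("history", 4)]
baselines = {"physics": 20.0, "history": 2.0}
print(normalised_impact(papers, baselines))  # 4.0
```

Without the division by `field_avg`, the physics paper would dominate the total simply because physicists cite one another more heavily than historians do.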
As it happens, Harvard is first in this list, with Stanford second. The University of California, Berkeley is at number five; Cambridge is 13th and Oxford 17th. One may argue that the overall atmosphere at Oxford — or, for that matter, Cambridge — is more conducive to peace of mind, intellectual curiosity or scholarly gratification than at Stanford or any other university placed above them; the point, however, is that the impact of Stanford on the scholarly world in 2010 was greater than that of Oxbridge. To be fair to Oxbridge, a number of other reports indicate that, despite having fewer resources per capita than the top American universities, both have been performing better and better in research over the last 20 years or so. In short, once the expenditure of Oxford and Cambridge is taken into account, they are really top performers.
But to come back to Pakistan: my contention is that we, too, should grade universities according to the number of articles published in good journals (for science and technology, only those in the ‘W’ category); books published by reputable commercial or university publishers with head offices in Europe or America; and encyclopedia articles and chapters in books edited by well-known scholars. This number should be divided by the number of faculty members, and the result will be the university’s score. We should, however, continue to use categories for infrastructure, because if that is not done, mere degree-doling workshops with two rooms will pose as universities. The aim is to confer the title of university on a body which teaches nearly all subjects to a high standard. I hope the universities and the HEC will agree to this transparent method of ranking them.
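The rule proposed above is simple enough to state in a few lines. The division by faculty size comes from the proposal itself; the publication figure and faculty count used in the example are hypothetical.

```python
# Sketch of the proposed ranking rule: qualifying publications
# (articles in good journals, books with reputable European or
# American publishers, encyclopedia articles, chapters in edited
# volumes) divided by faculty size. Figures below are hypothetical.

def per_faculty_score(qualifying_publications, faculty_size):
    """Return the proposed score: qualifying output per faculty member."""
    if faculty_size <= 0:
        raise ValueError("faculty size must be positive")
    return qualifying_publications / faculty_size

# A hypothetical university with 180 qualifying publications
# and 120 faculty members:
print(per_faculty_score(180, 120))  # 1.5 publications per faculty member
```

Dividing by faculty size is what makes the measure fair to small institutions: a department of ten that publishes fifteen good papers outscores a department of a hundred that publishes fifty.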
Published in The Express Tribune, June 19th, 2011.