The grand research experiment
The HEC's vision shouldn’t be that of a cop trying to catch those who fabricate their degrees
Recently, a number of pertinent questions have been asked about the state of higher education in Pakistan. In particular, the academic community has started to wonder about the grand Higher Education Commission (HEC) experiment. While it is impossible to measure the returns on the investment in the HEC over a short period of time, careful and ongoing analysis is always required to ensure that the system continues to move in the right direction. Indeed, a decade may not be enough time to critically analyse the impact of higher education on society, but that should not stop us from challenging policies that are detrimental to both the culture of higher education and the impact it is supposed to make.
Those who are ardent supporters of the HEC often point to analytical metrics like the number of PhDs produced, the number of publications and the rankings, and so on. While sometimes useful, these metrics can often be flawed and do not necessarily paint an accurate picture of the state of the system. First, metrics are often transient and remain silent on the long-term success of the system. A university going up in the so-called rankings one year and coming down in the next should not automatically translate into a catastrophe, or lack of activity or effort. However, looking at rankings alone and using these as the sole metric of success would reflect this myopia.
The argument about increasing journal publications is also fundamentally flawed. That Pakistan now produces many more research papers than in pre-HEC days is not a success in and of itself. What if those papers are of questionable quality or do not push the boundaries of our intellect? With the surge in the number of global journals and questionable reviewing criteria, publishing itself is hardly a metric of success. Just to be clear, I am not even talking about the issue of plagiarism here. I am worried about the quality of the papers that are not plagiarised. Is the research being done, broadly speaking, creating new areas of intellectual development and growth? Are we discovering fundamentally new phenomena? The financial 'reward-based' system, where an individual publication leads to a bonus, is also fundamentally flawed and leads to 'production mills' of papers. A 'tiered system', where a publication in a high-impact-factor journal leads to a higher reward, is also questionable, since it implies that five or 10 (or some other number of) papers in low-quality journals are equal to one paper in a high-quality journal. No quantity of poor-quality science, not even in the hundreds or thousands, is equal to good-quality science.
The number of PhDs produced is also not a metric of success. How do we know which of these PhDs are doing high-quality work themselves, rather than depending on both the ideas and the resources of their training labs abroad? In my personal interactions, most (though not all) of those returning from abroad are unable to launch independent research careers with fresh ideas and methods, and often simply repeat what they have already done. While collaboration is certainly a good thing, a lack of independent thought is hardly the way to go.
This problem of quality is particularly pervasive in natural sciences and engineering, since they have received the lion’s share of funding, often to the detriment of high quality work in the social sciences and humanities. The lack of support in these areas has meant that essential work evaluating our history, society and socioeconomic outlook has been compromised at the altar of sub-par science.
Despite these reservations, I fundamentally believe that the HEC experiment is an important and essential one. A culture of inquiry and research is desperately needed, but so is evaluation of the policies that are designed to foster this culture. Metrics that neither reflect quality nor encourage rigour are not only flawed, they also reward poor behaviour.
The vision of the HEC should not be that of a bank or a cop trying to catch those who fabricated their degrees — but an organisation that promotes first and foremost quality and rigour, and has its eyes set on fostering a culture of inquiry, innovation and insight.
Published in The Express Tribune, December 1st, 2015.