Pakistan's AI push needs good governance


The writer is a Senior Research Associate at the Centre for Aerospace & Security Studies (CASS), Islamabad. He can be reached at: cass.thinkers@casstt.com

Pakistan recently announced its plans to invest $1 billion in Artificial Intelligence (AI) projects by 2030. The country is accelerating the pace of developing and deploying AI at the national level; however, the corresponding governance architecture remains missing. In 2025, Pakistan adopted the National AI Policy with the central aim of building a national AI ecosystem. The policy charts plans to develop AI infrastructure, train a skilled workforce and provide a conducive environment for AI integration across different sectors, including education, public administration, healthcare and agriculture. The policy also highlights the need for transparency, fairness, open-source governance and human-centric development of AI. However, it serves more as a statement of vision than a clear governance framework.

A policy is aspirational by design: it sets out goals to achieve, whereas a governance framework establishes clear mechanisms for achieving them. These include binding rules, regulatory authority, enforcement mechanisms and accountability systems. Globally, many countries have articulated their AI goals alongside governance frameworks. For example, the European Union's AI Act (2024) - one of the most comprehensive AI regulatory frameworks - serves as a legally binding framework for all EU countries and clearly defines risk categories along with compliance obligations for companies and penalties of up to 7% of their global annual turnover.

Similarly, China's Interim Measures for the Management of Generative AI Services (2023) focus on a state-centric regulatory model. In the US, AI governance focuses on fostering innovation and free markets. It is largely decentralised and carried out through federal agencies, existing laws and executive orders. India has also begun operationalising AI governance under its recently released AI Governance Guidelines of 2025.

In contrast, while Pakistan's National AI Policy (2025) emphasises setting up a 'multi-stakeholder governance framework', raising digital awareness and pursuing funding initiatives, it does not specify clear mandates, enforcement systems or regulatory jurisdictions. The policy also lacks a dedicated AI regulator, as well as a comprehensive legal framework for algorithmic governance and data privacy. Pakistan is moving towards developing and deploying an AI ecosystem without any clearly defined rules.

The absence of an AI governance framework carries real risks for Pakistan. The country's information ecosystem is already vulnerable to disinformation, misinformation and digital manipulation. The rise of generative AI has further enabled hyper-realistic fake media, automated propaganda and highly personalised political messaging at scale. In the absence of clear governing rules, such technologies can distort public perception in Pakistan, especially during elections and national crises.

The absence of AI governance can also undermine Pakistan's prospects of attracting investment in AI. Technology partners and investors are more likely to engage in environments where development goals are supported by clearly defined and enforceable rules. AI development can certainly take place without governance, but such development often leads to long-term instability. The cost of late or delayed governance is therefore steep compared to proactive regulation, all the more so in Pakistan, where basic digital literacy is scarce.

Pakistan must learn from existing global frameworks for AI governance and establish its own indigenous framework suited to domestic needs. Shifting from policy articulation to regulatory implementation must be a priority, which can be achieved by establishing a dedicated AI governing authority with statutory powers. Robust data protection legislation is also needed to serve as a foundation for any AI ecosystem. Moreover, sector-specific regulations are required, especially for high-risk domains such as elections, national security and finance. And since governance frameworks also depend on human expertise, equipping policymakers and regulators with an appropriate understanding of AI systems and their context-specific risks is imperative.
