The AI Governance Quagmire

AI governance is urgent and complex; Pakistan must build local capacity to stay sovereign and competitive globally.

The leap from internet governance to artificial-intelligence governance has been abrupt and dizzying. In what feels like a single heartbeat, we went from a world negotiating the privacy implications of social media to one scrambling to regulate algorithms that can write code, diagnose disease, and sway public opinion. The speed of AI’s ascent, alongside warnings of everything from human extinction to massive job losses, has left policymakers, technologists, and citizens grasping for steady ground.

Consider, too, the frenzy of “instant experts” who surfaced as soon as large language models like ChatGPT captured headlines. Overnight, everyone seemed to have a hot take or a consulting package on AI governance and ethics. Yet when everyone claims expertise, the term loses meaning. Ironically, this sudden noise has renewed appreciation for the engineers, technologists, and scientists quietly making these systems function. But functioning systems are not enough.

Our real priority must be systems that work for us - for societies, workers, and democratic values - not merely systems that work. AI governance has moved with exceptional urgency because AI adoption has been far swifter than the internet’s early growth. ChatGPT, for example, reached one million users in just five days, a faster uptake than any major technology before it. Unlike the relatively gradual rollout of web infrastructure, AI tools can scale worldwide almost instantly, forcing regulators into a constant game of catch-up.

The European Union has emerged as a front-runner in AI governance. Its landmark EU AI Act, modeled in spirit on the General Data Protection Regulation (GDPR) with a shared emphasis on fundamental rights and transparency, classifies AI systems by risk, from minimal to unacceptable, and imposes safeguards that scale with that risk, banning the most harmful applications outright. The framework aims to protect citizens’ rights without choking innovation, a delicate balance other nations are now studying closely.

Critics, however, question whether innovation can truly avoid being choked in such a heavily regulated environment. And the regulatory challenge runs deeper than writing statutes. Policymakers must grapple with the entire AI ecosystem and lifecycle: the computational power required to train massive models, the rapid advances in machine capabilities, and the diverse range of end users and, most importantly, their understanding of AI. Each layer presents opportunities for misuse, from biased algorithms to security breaches, and each demands oversight that is both nimble and technically sophisticated.

Pakistan’s recently released National AI Policy sparked both criticism and appreciation. Some observers dismissed it outright, while others recognized its attempt to engage with a fast-moving technology. The broader global climate of hyper-polarization, in which debates pit innovation against privacy, makes consensus difficult.

Yet Pakistan cannot afford inaction. Local capacity is critical. Without indigenous funding mechanisms, training programs, and research infrastructure, the country risks depending on big-tech initiatives that primarily serve corporate interests. Too many civil-society actors remain tied to international donors or cushy board positions, gravitating toward safe topics while avoiding the harder work of building sovereign AI capabilities. Developing local datasets, research labs, and funding pipelines is a prerequisite for maintaining control over how AI shapes Pakistan’s future.

The challenge is hardly unique to Pakistan. In the same week in July 2025, U.S. President Donald Trump issued an executive order targeting “woke AI” while China hosted its high-profile World Artificial Intelligence Conference, a juxtaposition that shows how geopolitical rivalries now extend to algorithmic policy. Nations are racing to lead in AI not only for economic gain but also for strategic power. Whoever wins this race will control critical technologies in defense, finance, and information. That makes careful governance urgent. High-security states like Pakistan face particular risks from facial recognition systems and deepfakes, both of which can destabilize democratic processes and amplify disinformation.

Poorly secured data or lax oversight could turn these tools into weapons against citizens. The best defense is a proactive offense: building robust local systems, equipping youth with advanced digital skills, and strengthening governance structures to handle AI’s evolving landscape. Pakistan’s young population is a tremendous asset if provided with the right education and resources. Policymakers need to invest in STEM education, incentivize homegrown research, and create partnerships that keep intellectual property and the benefits of innovation within national borders.

At the same time, global collaboration remains essential. AI does not respect borders, and effective governance will require coordination on standards for transparency, data privacy, and risk assessment. But collaboration should not mean dependency. For Pakistan and many other nations, the path forward demands investment in local talent, creation of resilient institutions, and participation in global standard-setting. Anything less risks ceding technological sovereignty to the highest bidder.

The AI governance quagmire is daunting, but with deliberate action, it can also be an opportunity to shape a technology that can serve humanity well rather than the other way around.

WRITTEN BY: Gulalai Khan

The writer is a strategic communications and gender expert and teaches Internet Governance and tech policy at LUMS.

The views expressed by the writer and the reader comments do not necessarily reflect the views and policies of the Express Tribune.